A general framework for blaming in component-based systems
Gregor Gössler, Daniel Le Métayer
HAL Id: hal-01211484
https://inria.hal.science/hal-01211484
Submitted on 5 Oct 2015
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
A General Framework for Blaming in Component-Based Systems
Gregor Gössler∗ Daniel Le Métayer†
September 4, 2015
Abstract
In component-based safety-critical embedded systems it is crucial to determine the cause(s) of the violation of a safety property, be it to issue a precise alert, to steer the system into a safe state, or to determine liability of component providers. In this paper we present an approach to blame components based on a single execution trace violating a safety property $P$. The diagnosis relies on counterfactual reasoning (“what would have been the outcome if component $C$ had behaved correctly?”) to distinguish component failures that actually contributed to the outcome from failures that had little or no impact on the violation of $P$.
### 1 Introduction
In a concurrent, possibly embedded and distributed system, it is often crucial to determine which component(s) caused an observed failure. Understanding causality relationships between component failures and the violation of system-level properties can be especially useful to understand the occurrence of errors in execution traces, to allocate responsibilities, or to try to prevent errors (by limiting error propagation or the potential damages caused by an error).
The notion of causality inherently relies on a form of counterfactual reasoning: basically the goal is to try to answer questions such as “would event $e_2$ have occurred if $e_1$ had not occurred?” to decide if $e_1$ can be seen as a cause of $e_2$ (assuming that $e_1$ and $e_2$ have both occurred, or could both occur in a given context). For instance, we may want to determine whether the violation of a safety requirement of a cruise control system was caused by an observed buffer overflow in component $C_1$ or by an observed timing failure of $C_2$, or by the combination of both events. But this question is not as simple as it may look:
1. First, we have to define what could have happened if $e_1$ had not occurred, in other words what are the alternative worlds.
2. In general, the set of alternative worlds is not a singleton and it is possible that in some of these worlds $e_2$ would occur while in others $e_2$ would not occur.
3. We also have to make clear what we call an event and when two events in two different traces can be considered as similar. For example, if $e_1$ had not occurred, even if an event potentially corresponding to $e_2$ might have occurred, it would probably not have occurred at the same time as $e_2$ in the original sequence of events; it could also possibly have occurred in a slightly different way (for example with different parameters, because of the potential effect of the occurrence of $e_1$ on the value of some variables).
∗INRIA Grenoble – Rhône-Alpes and Univ. Grenoble – Alpes, France
†INRIA Grenoble – Rhône-Alpes and Univ. Lyon, France
Causality has been studied in many disciplines (philosophy, mathematical logic, physics, law, etc.) and from different points of view. In this paper, we are interested in causality for the analysis of execution traces in order to establish the origin of a system-level failure. The main trend in the use of causality in computer science consists in mapping the abstract notion of event in the general definition of causality proposed by Halpern and Pearl in their seminal contribution \[14\] onto properties of execution traces. Halpern and Pearl’s model of causality relies on a counterfactual condition mitigated by subtle contingency properties to improve the accuracy of the definition and alleviate the limitations of counterfactual reasoning in the presence of multiple causes. While Halpern and Pearl’s model is a very valuable contribution to the analysis of the notion of causality, we believe that a fundamentally different approach, considering traces as first-class citizens, is required in the computer science context considered here: the model proposed by Halpern and Pearl is based on an abstract notion of event defined in terms of propositional variables and causal models expressed as sets of equations between these variables. The equations define the basic causality dependencies between variables (such as $F = L_1 \lor L_2$ if $F$ is a variable denoting the occurrence of a fire and $L_1$ and $L_2$ two lightning events that can cause the fire).
In order to apply this model to execution traces, it is necessary to map the abstract notion of event onto properties of execution traces. But these properties and their causality dependencies are not given \textit{a priori}, they should be derived from the system under study. In addition, a key feature of trace properties is the temporal ordering of events which is also intimately related to the idea of causality but is not an explicit notion in Halpern and Pearl’s framework (even if notions of time can be encoded within events). Even though this application is not impossible, as shown by \[2\], we believe that definitions in terms of execution traces are preferable because (a) in order to determine the responsibility of components for an observed outcome, component traces provide the relevant granularity, and (b) they can lead to more direct and operational definitions of causality.
As suggested above, many variants of causality have been proposed in the literature and used in different disciplines. It is questionable whether a single definition of causality could fit all purposes. For example, when using causality relationships to establish liabilities, it may be useful to ask different questions, such as: “could event $e_2$ have occurred in some cases if $e_1$ had not occurred?” or “would event $e_2$ still have occurred if $e_1$ had occurred but the other suspected events had not?” These questions correspond to different variants of causality, which can be perfectly legitimate and useful in different situations. To address this need, we propose two definitions of causality relationships that can express these kinds of variants, called \textit{necessary} and \textit{sufficient} causality.
The framework introduced here distinguishes a set of black-box components, each equipped with a specification. On a given execution trace, the causality of the components is analyzed with respect to the violation of a system-level property. In order to keep the definitions as simple as possible without losing generality — that is, applicability to various models of computation and communication —, we provide a language-based formalization of the framework. We believe that our general, trace-based definitions are unique features of our framework.
Traces can be obtained from an execution of the actual system, but also as counter-examples from testing or model-checking. For instance, we can model-check whether a behavioral model satisfies a property; causality on the counter-example can then be established against the component specifications.
This article extends the preliminary work of \[8\]. In particular, we have entirely replaced the characterization of temporal causality with the notion of unaffected prefixes (Section 5.1), which precisely distinguishes dependencies between events in the component traces on the semantic
level, and does not require the user to provide an information flow relation. In order to illustrate
the instantiation of our general formalization with a specific model of computation, we apply the
approach to a system whose components are specified in a simple synchronous language inspired
by Lustre [13].
The remainder of the article is organized as follows. In the next section we discuss some
fundamental issues in defining causality, and define variants of causality. In Sections 3 and 4
we introduce our language-based modeling framework and a running example. In Section 5 we
formalize necessary and sufficient causality and establish some fundamental properties. Section 6
shows how the framework can be instantiated to blame components in a data-flow model à la
Lustre. Section 7 compares our approach with related work, and Section 8 concludes.
### 2 Setting the Stage: Variants of Causality
Causality is a powerful but also very subtle notion, with many variants and interpretations
depending on the discipline, application domain and context of use. As an illustration, legal
systems introduce distinctions between actual causes, factual causes, intervening causes, intervening efficient causes, remote causes, necessary causes, probable causes, unforeseeable causes,
concurrent causes, etc. This complexity is inherent to the concept of causality itself because it
relies on assumptions or analyses of hypothetical actions or courses of events. Before starting
the presentation of our formal framework in the next section, we first provide in this section a
high-level and informal outline of a range of options for the interpretation of causality in the
context of computer science.
As mentioned in the Introduction, we are interested in causality as a criterion to identify the
component responsible (in a technical sense) for a failure of the system, or, more generally, for
the occurrence of a given event. We assume that the minimum amount of available information
to conduct the causality analysis is a set \( L \) of logs \( L_i \) containing the sequence of events observed
for each component \( C_i \) of the system, a specification \( S_i \) for each component and a global property
\( P \) such that \( \bigwedge_{i \in [1,n]} S_i \Rightarrow P \). The set of logs \( L \) is assumed to be faulty (i.e. not to be consistent
with the required property \( P \)). The next sections show how these notions can be expressed
formally in terms of signatures and traces.
In the same way as in civil law, two conditions have to be met to declare a component \( C_i \)
responsible for a given (undesired) event: its behavior \(^1\) must have been faulty and this fault
must be the (or a) cause of the event. The first condition implies that a model of the expected
(correct) behavior of the component must be available; we call this model the specification of
the component in the sequel. The second condition naturally leads to another question:
What would have been the course of events if \( C_i \) had behaved correctly?
But this question is very difficult to answer because it depends on many parameters that may
be or may not be available for the analysis. A key parameter is the assumptions on the actual
behaviors of the components \( C_i \). Depending on the context, different types of information can
be available:
- In some cases, no information at all is available on any component, which requires a “black
box” analysis.
- In other cases, the code of each component is available and the assumption can be made
that this code is actually the code that has been executed to produce the log (which is not
necessarily obvious). This leads to what is sometimes called “white box” analysis.
\(^1\)In the sequence of events leading to the undesired event.
- In yet other cases, partial information may be available, for example the code of certain components, or assumptions on the sequences of events that can or cannot be produced by a component.
In the sequel, we use $BH_i$ to denote the assumption on the behavior of $C_i$: for example $BH_i$ can be the model of the actual code in a white box analysis or the set of all potential behaviors in a black box analysis.
Another type of assumption that must be made explicit to reason about alternative behaviors, in order to answer the question above, concerns the consistency between individual logs, for example the fact that a message cannot be received by a component if it has not first been sent by another component. We call this assumption the behavioral model $B$ in this paper.
Starting from this set of parameters, the general structure of a causality analysis can be pictured as follows:
$$\begin{array}{ccc}
\text{Observed logs } L_i & \longrightarrow & \text{Potential behaviors } Bh_i \\
& & \big\downarrow \\
\text{Hypothetical logs } L'_i & \longleftarrow & \text{Hypothetical behaviors } Bh'_i
\end{array}$$
The potential behaviors $Bh_i \in BH_i$ are the behaviors of the components that are consistent with the observed logs $L_i$; the hypothetical behaviors $Bh'_i \in BH_i$ are modifications of behaviors $Bh_i$ in which certain erroneous behaviors are replaced by correct behaviors; and the hypothetical logs are the logs produced by the execution of the hypothetical components. The causality analysis consists in performing these three steps and then checking whether the hypothetical logs $L'_i$ meet the property $P$.
This high-level picture shows that the analysis goes from logs to behaviors and back to logs: it starts from logs, tries to infer the behaviors that can have produced these logs, considers variants of these behaviors and comes back to the logs corresponding to these hypothetical behaviors. Looking at it more closely, we can see that each step in the above figure actually gives rise to a range of options:
- The first step can be interpreted as a universal or an existential quantification. In other words, we may want to consider all behaviors consistent with the observed log or just require the existence of a behavior. Universal quantification leads to notions of “strong” causality and existential quantification to “weak causality” (or potential causality).
- In the second step, different choices are possible for the components whose behavior is modified: for example, if we are interested in the responsibility of a given component $C_i$, we may replace the behavior of $C_i$ by a correct behavior or replace the behavior of all components but $C_i$ by a correct behavior. As explained below, these choices lead to two classes of causality called necessary and sufficient causality respectively.
- Just like the first step, the third step can be interpreted as a universal or an existential quantification: we may want to consider all hypothetical logs obtained from the hypothetical components or just consider the existence of hypothetical logs meeting (or not meeting) the property $P$. This choice has an impact on the treatment of non-determinism in the execution of the components.
\(^2\)We call alternative behaviors the other possible behaviors of the components in the counterfactual reasoning, which typically involves the replacement of the behavior of a component by a correct behavior.
\(^3\)Typically the behaviors of the components which are suspected of being the causes of the failure of the system.
\(^4\)In which case, causality will be established.
The combination of the above choices leads to eight possible forms of causality, which can be denoted Necessary\textsuperscript{∀, ∀}, Necessary\textsuperscript{∃, ∃}, Necessary\textsuperscript{∃, ∀}, ..., Sufficient\textsuperscript{∃, ∀}, ... . For example, Necessary\textsuperscript{∀, ∀} (for one component $C_i$) corresponds to the following informal definition:
_Considering the evidence provided by the set of logs $L$, Component $C_i$ is a Necessary\textsuperscript{∀, ∀} cause for the failure of the system if for all potential behaviors $Bh$ of the system consistent with $L$, all behaviors $Bh'$ similar to $Bh$ except for the behavior of $C_i$ which is made correct, lead to correct execution logs._
The next sections provide a formal model of the intuitions introduced here. In the rest of this paper, we do not make any assumption on potential behaviors (black box analysis), we consider both necessary and sufficient causality, and focus on strong forms of causality.
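The quantifier choices above can be made concrete with a small brute-force sketch over finite, explicitly listed behavior sets. This is our own illustration of the informal definition, not the paper's formal framework; all function and parameter names are hypothetical.

```python
# Toy check of Necessary^{Q1,Q3} causality by enumeration (names are ours).

def check_cause(logs, consistent_behaviors, correct_variants, produces, P,
                step1_forall=True, step3_forall=True):
    """consistent_behaviors(logs): behaviors Bh consistent with the observed logs.
    correct_variants(bh): hypothetical behaviors Bh' in which the suspect
      component's faults are replaced by correct behavior.
    produces(bh2): hypothetical logs L' a behavior can produce.
    P(logs2): True iff the logs satisfy the property."""
    q1 = all if step1_forall else any   # step 1: quantify over behaviors Bh
    q3 = all if step3_forall else any   # step 3: quantify over Bh' and logs L'
    return q1(
        q3(P(logs2)
           for bh2 in correct_variants(bh)
           for logs2 in produces(bh2))
        for bh in consistent_behaviors(logs)
    )
```

Sufficient causality would be obtained with the same skeleton by letting `correct_variants` correct all components except the suspect one, as described in the second bullet above.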
### 3 Modeling Framework
In order to focus on the fundamental issues in defining causality on execution traces we introduce a simple, language-based modeling framework.
**Definition 1 (Prefix $\sqsubseteq$, $\cap$, $\sqcup$)** A finite word $w'$ is a prefix of $w$, written $w' \sqsubseteq w$, if there exists a word $w''$ such that $w = w' \cdot w''$, where $\cdot$ stands for concatenation. Let $\epsilon$ denote the empty word. For two words $w_1$ and $w_2$ let $w_1 \cap w_2$ be their longest common prefix. For a set $P$ of prefixes of a given word let $\sqcap P$ and $\sqcup P$ denote the infimum and the supremum of $P$ with respect to $\sqsubseteq$, respectively.
A language $L$ is upward-closed if $(L, \sqsubseteq)$ is a complete partial order, that is, if for any ascending chain of words $w_1 \sqsubseteq w_2 \sqsubseteq \ldots$ in $L$, $\sqcup_i w_i \in L$.
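For concreteness, the order-theoretic operations of Definition 1 can be sketched over Python strings (one character per letter). This encoding is ours; `sup`/`inf` assume, as in the definition, that all elements are prefixes of a single word and hence totally ordered by $\sqsubseteq$.

```python
# Prefix order on words, encoded as strings (one character per action).

def is_prefix(w1, w2):
    """w1 ⊑ w2: there is a word w'' with w2 = w1 · w''."""
    return w2.startswith(w1)

def lcp(w1, w2):
    """w1 ∩ w2: the longest common prefix of two words."""
    n = 0
    while n < min(len(w1), len(w2)) and w1[n] == w2[n]:
        n += 1
    return w1[:n]

def sup(prefixes):
    """⊔P for a set of prefixes of one word: its longest element."""
    return max(prefixes, key=len)

def inf(prefixes):
    """⊓P for a set of prefixes of one word: its shortest element."""
    return min(prefixes, key=len)
```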
**Definition 2 (Component signature)** A component signature $C_i$ is a tuple $(\Sigma_i, S_i)$ where $\Sigma_i$ is an alphabet of component actions and $S_i \subseteq \Sigma_i^*$ is a prefix-closed and upward-closed language over $\Sigma_i$ called specification.
The component signature is the abstraction of an actual component. $\Sigma_i$ is the alphabet of actions the actual component may produce, whereas the alphabet actually used in $S_i$ may be a subset of $\Sigma_i$. Prefix closure means that $S_i$ is a safety specification, while upward closure ensures that the least upper bound of any ascending chain is included in the specification.
Similarly, a system signature is the abstraction of a system composed of a set of interacting components.
**Definition 3 (System signature)** A system signature is a tuple $(C, \Sigma, B)$ where
- $C = \{C_1, ..., C_n\}$ is a finite set of component signatures $C_i = (\Sigma_i, S_i)$ with pairwise disjoint alphabets;
- $\Sigma \subseteq \Sigma_1' \times ... \times \Sigma_n'$ is a system alphabet, where $\Sigma_i' = \Sigma_i \cup \{\emptyset\}$, an interaction $\alpha = (a_1, ..., a_n) \in \Sigma$ is a tuple of simultaneous actions, and $a_i = \emptyset$ means that component $C_i$ does not participate in $\alpha$;
- $B \subseteq \Sigma^* \cup \Sigma^\omega$ is a prefix-closed and upward-closed behavioral model.
The behavioral model $B$ is used to express assumptions and constraints on the possible (correct and incorrect) behaviors. For instance, in a model of components communicating by asynchronous message passing, $B$ may be used to express the fact that a message cannot be received before it has been sent; in a real-time model it may be used to express the hypothesis that time progresses uniformly for all components.
**Notations** Given a word \( w = \alpha_1 \cdot \alpha_2 \cdots \in \Sigma^* \) and an index \( i \in \mathbb{N} \) let \( w[i] = \alpha_i \). For \( \alpha = (a_1, \ldots, a_n) \in \Sigma \) let \( \alpha[k] = a_k \) denote the action of component \( k \) in \( \alpha \) (\( a_k = \emptyset \) if \( k \) does not participate in \( \alpha \)); for \( w = \alpha_1 \cdots \alpha_k \in \Sigma^* \) and \( i \in \{1, \ldots, n\} \) let \( \pi_i(w) \) be the word obtained by removing all \( \emptyset \) letters from \( \alpha_1[i] \cdots \alpha_k[i] \).
For the sake of compactness of notation we define the composition \( \parallel : \Sigma_1^* \times \cdots \times \Sigma_n^* \rightarrow 2^{\Sigma^*} \) such that \( w_1 \| \ldots \| w_n = \{ w \in \Sigma^* \mid \forall i = 1, \ldots, n : \pi_i(w) = w_i \} \), and extend \( \parallel \) to tuples of languages.
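For small finite alphabets, the projection \( \pi_i \) and the composition \( \parallel \) can be transcribed directly, encoding an interaction as a tuple whose \( i \)-th entry is the action of component \( i \), with `None` standing for \( \emptyset \). The encoding and names are ours, and the bounded enumeration is only a toy device:

```python
from itertools import product

def proj(w, i):
    """π_i(w): component i's actions in w, with ∅ (None) letters removed."""
    return [alpha[i] for alpha in w if alpha[i] is not None]

def compose(words, sigma, max_len):
    """w_1 ‖ … ‖ w_n restricted to system words of length ≤ max_len:
    all words over sigma whose projections equal the given component words."""
    n = len(words)
    return [list(w)
            for k in range(max_len + 1)
            for w in product(sigma, repeat=k)
            if all(proj(w, i) == list(words[i]) for i in range(n))]
```

For instance, with two components and interactions `("a", None)`, `(None, "b")`, `("a", "b")`, composing the words "a" and "b" yields the two interleavings and the simultaneous interaction.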
### 3.1 Logs
A (possibly faulty) execution of a system may not be fully observable; therefore we base our analysis on \emph{logs}. A log of a system \( S = (C, \Sigma, B) \) with components \( C = \{C_1, \ldots, C_n\} \) of alphabets \( \Sigma_i \) is a vector \( \vec{tr} = (tr_1, \ldots, tr_n) \in \Sigma_1^* \times \cdots \times \Sigma_n^* \) of component traces (i.e., words over the component alphabets) such that there exists a system-level trace (i.e., a word over the system alphabet) \( tr \in B \) with \( \forall i = 1, \ldots, n : tr_i = \pi_i(tr) \). A log \( \vec{tr} \) is thus the projection of a system-level trace \( tr \). This relation between an actual execution and the log on which causality analysis will be performed allows us to model the fact that only a partial order between the events (i.e., occurrences of component actions) in \( tr \) may be observable rather than their exact precedence. Similarly, the component specifications may ignore part of the logged events. Let \( L(S) \) denote the set of logs of \( S \). Given a log \( \vec{tr} = (tr_1, \ldots, tr_n) \in L(S) \) let \( \vec{tr}^{\,\exists} = \{ tr \in B \mid \forall i = 1, \ldots, n : \pi_i(tr) = tr_i \} \) be the set of behaviors resulting in \( \vec{tr} \).
#### Definition 4 (Consistent specification)
A consistently specified system is a tuple \( (S, \mathcal{P}) \) where \( S = (C, \Sigma, B) \) is a system signature with \( C = \{C_1, \ldots, C_n\} \) and \( C_i = (\Sigma_i, S_i) \), and \( \mathcal{P} \subseteq B \) is a safety property such that for all traces \( tr \in B \),
\[
(\forall i = 1, \ldots, n : \pi_i(tr) \in S_i) \implies tr \in \mathcal{P}
\]
Under a consistent specification, property \( \mathcal{P} \) may be violated only if at least one of the components violates its specification. Throughout this paper we focus on consistent specifications.
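On a finite, explicitly enumerated behavioral model, the implication of Definition 4 can be checked directly. The following sketch (names ours) takes the projection and the membership tests as parameters:

```python
# Toy consistency check for Definition 4 over a finite model B (names ours).

def is_consistent(B, proj, in_spec, in_P, n):
    """True iff every tr in B whose projections all satisfy their
    specifications S_i also satisfies the property P."""
    return all(
        in_P(tr)
        for tr in B
        if all(in_spec(i, proj(tr, i)) for i in range(n))
    )
```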
### 4 Motivating Example
Consider a database system consisting of three components communicating by message passing over point-to-point FIFO buffers. Component \( C_1 \) is a client, \( C_2 \) the database server, and \( C_3 \) is a journaling system. The specifications of the three components are as follows:
\( S_1 \): sends a lock request lock to \( C_2 \), followed by a request \( m \) to modify the locked data.
\( S_2 \): receives a write request \( m \), possibly preceded by a lock request lock. Access control is optimistic in the sense that the server accepts write requests without checking whether a lock request has been received before; however, in case of a missing lock request, a policy violation may be detected later on and signaled by an event \( x \). After the write, a message \( \text{journ} \) is sent to \( C_3 \).
\( S_3 \): keeps receiving \( \text{journ} \) messages from \( C_2 \) for journaling, and acknowledges them with \( \text{ok} \).
The system is modeled by the system signature \( (C, \Sigma, B) \) where \( C = \{C_1, C_2, C_3\} \) with component signatures \( C_i = (\Sigma_i, S_i) \), and
\( ^5 \)It is straightforward to allow for additional information in traces \( tr \in B \) that is not observable in the log, by adding to the cartesian product of \( \Sigma \) another alphabet that does not appear in the projections. For instance, events may be recorded with some timing uncertainty rather than precise time stamps \[27\].
the safety property
\[ P = \Sigma_{ok}^* \cup \Sigma_{ok}^\omega \quad \text{with } \Sigma_{ok} = \Sigma \backslash \{ (\varnothing, x, \varnothing) \} \]
modeling the absence of a conflict event \( x \). It can be seen that if all three components satisfy their specifications, \( x \) will not occur.
Figure 1 shows the log \( \vec{t} = (tr_1, tr_2, tr_3) \). In the log, \( tr_1 \) violates \( S_1 \) at event \( a \) and \( tr_3 \) violates \( S_3 \) at \( b \). The dashed lines between \( m! \) and \( m? \), and between \( journ! \) and \( journ? \) stand for communications.

In order to analyze which component(s) caused the violation of \( P \) we can use an approach based on \textit{counterfactual reasoning}. Informally speaking,
- \( C_i \) is a \textit{necessary cause} for the violation of \( P \) if in all executions where \( C_i \) behaves correctly and all other components behave as observed, \( P \) is satisfied.
- Conversely, \( C_i \) is a \textit{sufficient cause} for the violation of \( P \) if in all executions where all incorrect traces of components other than \( C_i \) are replaced with correct traces, and the remaining traces (i.e., correct traces and the trace of \( C_i \)) are as observed, \( P \) is still violated.
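These two informal criteria can be mimicked by brute-force enumeration over a finite set of executions, each represented as a vector of component traces. This toy sketch illustrates the bullets above only (all names are hypothetical); the naive substitution it performs is exactly what Section 5 refines with unaffected prefixes.

```python
# Toy enumeration of the informal necessary/sufficient criteria (names ours).

def necessary_cause(i, executions, observed, correct, satisfies_P, n):
    """C_i behaves correctly, all others behave as observed: P must hold."""
    worlds = [ex for ex in executions
              if correct(i, ex[i])
              and all(ex[j] == observed[j] for j in range(n) if j != i)]
    return all(satisfies_P(ex) for ex in worlds)

def sufficient_cause(i, executions, observed, correct, satisfies_P, n):
    """Incorrect traces of the other components are replaced by correct
    ones, C_i and the already-correct traces stay as observed: P must
    still be violated."""
    worlds = [ex for ex in executions
              if ex[i] == observed[i]
              and all(ex[j] == observed[j] if correct(j, observed[j])
                      else correct(j, ex[j])
                      for j in range(n) if j != i)]
    return all(not satisfies_P(ex) for ex in worlds)
```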
Applying these criteria to our example we obtain the following results:
If \( C_1 \) had worked correctly, it would have produced the trace \( tr'_1 = lock! \cdot m! \). This gives us the counterfactual scenario consisting of the traces \( tr'' = (tr'_1, tr_2, tr_3) \). However, this scenario is not consistent as \( C_1 \) now emits lock, which is not received by \( C_2 \) in \( tr_2 \). According to \( B \), the FIFO buffers are not lossy, such that lock would have been received before \( m \) if it had been sent before \( m \). By vacuity (as no execution yielding the traces \( tr'' \) exists), \( C_1 \) is a necessary cause and \( C_3 \) is a sufficient cause according to our definitions above. While the first result matches our intuition, the second result is not what we would expect. As far as \( C_2 \) is concerned, it is not a cause since its trace satisfies \( S_2 \).
Why do the above definitions fail to capture causality? It turns out that our definition of counterfactual scenarios is too narrow, as we substitute the behavior of one component (e.g., \( tr_1 \) to analyze sufficient causality of \( C_i \)\(^6\)) without taking into account the impact of the new trace on the remainder of the system. When analyzing causality “by hand”, one would try to evaluate the effect of the altered behavior of the first component on the other components. This is what we will formalize in the next section.
\(^6\)For the sake of readability we omit the prefix closure of the specifications in the examples.
### 5 Causality Analysis
In this section we improve our definition of causality of component traces for the violation of a system-level property. We suppose the following inputs to be given:
- A system signature \((C, \Sigma, B)\) with component signatures \( C_i = (\Sigma_i, S_i) \).
- A log \( \vec{tr} = (tr_1, ..., tr_n) \). In the case where the behavior of two or more components is logged into a common trace, the trace of each component can be obtained by projection.
- A safety property \( P \subseteq B \) such that \((S, P)\) is consistently specified.
- A set \( I \subseteq \{1, ..., n\} \) of component indices, indicating the set of components to be jointly analyzed for causality. Being able to reason about group causality is useful, for instance, to determine liability of component providers that are responsible for more than one component.
### 5.1 Unaffected Prefixes
Intuitively, in order to verify whether the violations of \( S_i \) by \( tr_i \), \( i \in I \), are a cause for the violation of \( P \) in \( \vec{tr} \), we have to identify and remove the effect of these component failures on \( \vec{tr} \), replace it with behaviors that are consistent with a correct execution of the components in \( I \), and verify whether all obtained counterfactual traces satisfy \( P \). In order to determine and eliminate the impact of component failures on the traces of the remaining components, we compute the set of prefixes that are unaffected by the failures. This approach has the advantage of analyzing the propagation of failures on the semantic level, in contrast to the less precise approach of [8] where the impact of component failures on other components is over-approximated using a worst-case information flow relation between component actions.
**Definition 5 (Critical prefix \( cp \))**
Given a trace \( tr = \alpha_1 \alpha_2 \cdots \) over \( \Sigma \) and a language \( S \) over \( \Sigma \), let \( cp(tr, S) = \sqcup \{tr' \mid tr' \sqsubseteq tr \land tr' \in S\} \) be the critical prefix of \( tr \) with respect to \( S \).
\( cp(tr, S) \) is the supremum of all prefixes of \( tr \) that satisfy \( S \). Since by definition the component specifications \( S_i \) are upward-closed, \( cp(tr, S) \in S \).
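For finite words and a specification given as a membership predicate, the critical prefix can be computed by scanning the prefixes of \( tr \); a sketch under our string encoding (one character per action):

```python
# Critical prefix of Definition 5 for finite words (encoding is ours).

def cp(tr, in_S):
    """Longest prefix of tr that satisfies S, i.e. the supremum ⊔ of all
    prefixes of tr lying in the (prefix-closed) language S."""
    best = ""
    for k in range(len(tr) + 1):
        if in_S(tr[:k]):
            best = tr[:k]
    return best
```

Since \( S \) is prefix-closed, the scan could stop at the first prefix outside \( S \); the exhaustive loop simply mirrors the supremum in the definition.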
**Definition 6 (Trace extension \( extend \))**
Given a vector \( \vec{S} \) of specifications, let
\[
\text{extend}_i(tr^0, tr) = \begin{cases}
\{tr' \in S_i \mid tr \sqsubseteq tr'\} & \text{if } tr \neq tr^0 \land tr \in S_i \\
\{tr\} & \text{otherwise}
\end{cases}
\]
The definition of trace extension plays a pivotal role in our definitions of causality.
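For a finite specification represented as an explicit set of words, Definition 6 transcribes directly; the set-based representation and string encoding are our simplification:

```python
# Trace extension of Definition 6 for a finite specification S_i (a set of
# strings); tr0 is the observed trace, tr the prefix to extend.

def extend(tr0, tr, S_i):
    """All correct continuations of tr, unless tr is the full observed
    trace or tr itself violates S_i (then tr is left as is)."""
    if tr != tr0 and tr in S_i:
        return {w for w in S_i if w.startswith(tr)}
    return {tr}
```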
Before formalizing the notion of unaffected prefixes, we need the following auxiliary definition.
**Definition 7 (Least constraining components \( lcc \))**
Consider a language \( B \) over \( \Sigma \) and a log \( \vec{tr}^0 \). For a vector of traces \( \vec{tr} \), \( w \in \Sigma^* \), and \( \alpha \in \Sigma \) let
\[
\text{cons}(\vec{tr}, w, \alpha) = \{i \mid \exists tr' \in \text{extend}_i(tr^0_i, tr_i) : \pi_i(w \cdot \alpha) \sqsubseteq tr'\}
\]
be the indices of components whose extension of \( tr_i \) is consistent with \( w \cdot \alpha \). Let
\[
\text{lcc}(\vec{tr}, L) = \sqcup \{ \text{cons}(\vec{tr}, w, \alpha) \mid w \in L \land \alpha \in \Sigma \land \text{ok}(w, \alpha) \}
\]
be the set of indices of components that least constrain the set of symbols with which the words of \( L \) may be extended, where \( \sqcup S \) is the greatest element of \( S \) with respect to set inclusion, or \( \emptyset \) if no greatest element exists, and
\[
\text{ok}(w, \alpha) = w \cdot \alpha \in B \land \bigwedge_i \big( tr_i \sqsubseteq \pi_i(w) \implies \exists tr' \in \text{extend}_i(tr^0_i, tr_i) : \pi_i(w \cdot \alpha) \sqsubseteq tr' \big).
\]
Thus, consistency is checked only over traces in \( B \) whose projections are either shorter than \( tr_i \), or prefixes of the extensions.
**Definition 8 (Unaffected prefixes \( UP \))** Given vectors \( \vec{tr} \) of traces and \( \vec{S} \) of specifications, and an index set \( I \), we define the unaffected prefixes of \( \vec{tr} \) as follows. Let
\[
tr^1_i = \begin{cases}
cp(tr_i, S_i) & \text{if } i \in I \\
tr_i & \text{otherwise}
\end{cases}
\]
and \( \forall i = 1, \ldots, n \) \( \forall j \geq 1 \):
\[
tr^{j+1}_i = \begin{cases}
tr^j_i & \text{if } i \in \text{lcc}(\vec{tr}^j, L^j) \\
\sqcup \{ tr^j_i \cap \pi_i(w) \mid w \in L^j \} & \text{otherwise}
\end{cases}
\]
where \( L^j = \{ w \in B \mid \forall i \ \exists tr' \in \text{extend}_i(tr_i, tr^j_i) : \pi_i(w) \sqsubseteq tr' \} \).
Let \( \text{UP}_{\vec{S}}(\vec{tr}, I) = (tr^*_1, \ldots, tr^*_n) \) with \( tr^*_i = \sqcap_j tr^j_i \), \( i = 1, \ldots, n \), be the vector of prefixes of \( \vec{tr} \) that are unaffected by the failures of components in \( I \).
Intuitively, the vector of unaffected prefixes is computed by first removing the incorrect suffixes from \( tr_i \), \( i \in I \), and then computing, for each component \( i \), a decreasing sequence of prefixes \( tr^j_i \) until a fixpoint is reached. In each iteration we trim, for the set of components whose current prefixes constrain the possible extensions, the prefix to the longest trace that is the projection of some word in \( L^j \) (that is, on which all extended prefixes agree). The unaffected prefixes \( (tr^*_1, \ldots, tr^*_n) \) to which the sequence converges are the maximal prefixes that could also have been observed if all components in \( I \) had behaved correctly, whereas the suffixes \( (s_1, \ldots, s_n) \), where \( tr_i = tr^*_i \cdot s_i \), are impacted by the failures of components in \( I \); these suffixes define the cone of influence spanned by the failures of components in \( I \).
**Example 1** Coming back to the example of Section 4, the unaffected prefixes \( \mathrm{UP}_{\vec{S}}(\vec{tr}, \{\text{client}\}) \) of the database example of Figure 4 with respect to client are \( (\epsilon, \epsilon, b \cdot \text{journ?} \cdot \text{ok!}) \). The unaffected prefixes with respect to journal are \( \mathrm{UP}_{\vec{S}}(\vec{tr}, \{\text{journal}\}) = (m!, m? \cdot \text{journ!} \cdot \text{ok?} \cdot x, \epsilon) \).
### 5.2 Counterfactuals
Using the unaffected prefixes defined above we are able to define, for a given log \( \vec{t} \) and set of component indices \( I \), the set of counterfactual traces modeling alternative worlds in which the failures of components in \( I \) do not happen, and the unaffected prefixes of the remaining components are as observed in \( \vec{t} \).
**Definition 9 (Counterfactuals \( \mathcal{C} \))** Given vectors \( \vec{tr} \) of traces and \( \vec{S} \) of specifications, and an index set \( I \), let
\[
C_S(\vec{tr}, I) = \{ w \in B \mid \forall i : \pi_i(w) \in \mathrm{extend}_i(tr_i, tr_i^*) \}
\]
where \( (tr_1^*, \ldots, tr_n^*) = \mathrm{UP}_{\vec{S}}(\vec{tr}, I) \), be the set of counterfactuals to \( \vec{tr} \).
The set of counterfactuals is the set of system-level traces whose projections on the components extend the unaffected prefixes with correct behaviors. Incorrect prefixes and prefixes that amount to the whole observed trace are not extended.
The rationale behind Definition 9 is to compute the set of alternative worlds where the failures of components in $I$ do not occur. To this end we have to prune out their possible impact on the logged behavior and substitute correct behaviors. Prefixes violating their specifications, and unaffected prefixes that are equal to the observed component traces, are not extended, since we want to determine causes for system-level failures observed in the log, rather than exhibit causality chains that are not yet complete and whose consequences would only have manifested in the future.
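Over finite trace languages, Definition 9 can be prototyped directly. The following sketch uses our own toy encoding (not the paper's formalization): traces are tuples, specifications are finite prefix-closed sets of correct traces, and a system trace is a tuple of component traces, so projection is indexing.

```python
def is_prefix(p, t):
    return t[:len(p)] == p

def extend(tr, up, spec):
    """Correct extensions of the unaffected prefix `up` of observed
    trace `tr`: incorrect prefixes, and prefixes equal to the whole
    observed trace, are not extended."""
    if up == tr or up not in spec:
        return {up}
    return {t for t in spec if is_prefix(up, t)}

def counterfactuals(B, logs, ups, specs):
    """C_S(tr, I): traces of B whose per-component projections extend
    the unaffected prefixes with correct behaviors (Definition 9)."""
    return {w for w in B
            if all(w[i] in extend(logs[i], ups[i], specs[i])
                   for i in range(len(logs)))}

# Two components; component 0 logged an incorrect suffix after 'a'.
specs = [{(), ('a',), ('a', 'b')}, {(), ('x',), ('x', 'y')}]
B = {(('a',), ('x',)), (('a', 'b'), ('x', 'y'))}
logs = [('a', 'c'), ('x', 'y')]
ups = [('a',), ('x', 'y')]  # unaffected prefixes w.r.t. I = {0}
print(counterfactuals(B, logs, ups, specs))  # -> {(('a', 'b'), ('x', 'y'))}
```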
Example 2 The set of counterfactuals $C_S(\vec{t}, \{\text{client}\})$ with respect to the failure of client in our running example is computed as follows (where we use the subscripts c, db, and j for client, database, and journal, respectively):
$$C_S(\vec{t}, \{\text{client}\}) = \{ w \in B \mid \pi_c(w) \in \text{extend}_c(tr_c, \epsilon) \land \pi_{db}(w) \in \text{extend}_{db}(tr_{db}, \epsilon) \land \pi_j(w) \in \text{extend}_j(tr_j, b \cdot \text{journ?} \cdot \text{ok!}) \}$$
$$= \{ w \in B \mid \pi_c(w) \in S_c \land \pi_{db}(w) \in S_{db} \land \pi_j(w) = b \cdot \text{journ?} \cdot \text{ok!} \}$$
The projections of the counterfactual traces on the three components are shown in Figure 2. Figure 2(b) shows the unique counterfactual scenario with respect to the failure of journal.
Figure 2: The counterfactual scenarios with respect to the failure of (a) client and (b) journal. Extensions of the unaffected prefixes are shown in blue.
5.3 Logical Causality and Blaming
We are now ready to formally define two variants of causality in our framework, namely, necessary and sufficient causality.
**Definition 10 (Necessary cause)** Given
- a consistently specified system \((S, \mathcal{P})\) with \(S = (C, \Sigma, B)\), \(C = \{C_1, ..., C_n\}\), and \(C_i = (\Sigma_i, \mathcal{S}_i)\),
- a log \( \vec{t} \in \mathcal{L}(S) \) such that \( \vec{t}^{\uparrow} \cap \mathcal{P} = \emptyset \), and
- an index set \(\mathcal{I}\),
the incorrect suffixes of the traces indexed by \(\mathcal{I}\) are a necessary cause of the violation of \(\mathcal{P}\) by \(\vec{t}\) if
\[
\mathcal{C}_{\mathcal{S}}(\vec{t}, \mathcal{I}) \subseteq \mathcal{P}
\]
That is, the traces indexed by \(\mathcal{I}\) are a necessary cause for the violation of \(\mathcal{P}\) if \(\mathcal{P}\) is satisfied in every alternative world in which the unaffected prefixes are extended with correct behaviors. In other words, if all components had behaved as in the unaffected prefixes, and the components in \(\mathcal{I}\) had satisfied their specifications, then \(\mathcal{P}\) would have been satisfied.
**Example 3** Coming back to our running example, we have \(\mathcal{C}_{\mathcal{S}}(\vec{t}, \{\text{client}\}) \subseteq \mathcal{P}\). According to Definition 10, the failure of client is a necessary cause for the violation of \(\mathcal{P}\). Since the only element of \(\mathcal{C}_{\mathcal{S}}(\vec{t}, \{\text{journal}\})\) violates \(\mathcal{P}\), the failure of journal is not a necessary cause for the violation of \(\mathcal{P}\).
The definition of sufficient causality is dual to necessary causality, where in the alternative worlds we remove the failures of components not in \(\mathcal{I}\) and verify whether \(\mathcal{P}\) is still violated.
**Definition 11 (Sufficient cause)** Given
- a consistently specified system \((S, \mathcal{P})\) with \(S = (C, \Sigma, B)\), \(C = \{C_1, ..., C_n\}\), and \(C_i = (\Sigma_i, \mathcal{S}_i)\),
- a log \( \vec{t} \in \mathcal{L}(S) \) with \( \vec{t}^{\uparrow} \cap \mathcal{P} = \emptyset \), and
- an index set \(\mathcal{I}\),
let \(\overline{\mathcal{I}} = \{1, ..., n\} \setminus \mathcal{I}\). The set of traces indexed by \(\mathcal{I}\) is a sufficient cause for the violation of \(\mathcal{P}\) by \(\vec{t}\) if
\[
(\sup \mathcal{C}_{\mathcal{S}}(\vec{t}, \overline{\mathcal{I}})) \cap \mathcal{P} = \emptyset
\]
That is, the set of logs indexed by \(\mathcal{I}\) is a sufficient cause for the violation of \(\mathcal{P}\) if, in the alternative worlds where the incorrect suffixes of the components in the complement of \(\mathcal{I}\) are replaced with correct behaviors, the violation of \(\mathcal{P}\) is inevitable (even though \(\mathcal{P}\) may still be satisfied by non-maximal counterfactual traces). In other words, even if the components in the complement \(\overline{\mathcal{I}}\) of \(\mathcal{I}\) had satisfied their specifications and no component had failed in the cone of influence spanned by the failures of \(\overline{\mathcal{I}}\), then \(\mathcal{P}\) would still have been violated.
In Definitions 10 and 11 the analysis of (in)dependence between component behaviors — represented by the unaffected prefixes — helps in constructing alternative scenarios in \(B\) where the components indexed by \(\mathcal{I}\) (resp. \(\overline{\mathcal{I}}\)) behave correctly while keeping the behaviors of all other components close to their observed behaviors.
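Over a finite, explicitly enumerated set of counterfactuals, the two checks reduce to simple set operations. The sketch below mirrors Definitions 10 and 11 under our own toy encoding (traces as tuples, the property as a set of acceptable traces, and sup taken as the non-extendable traces of the finite set):

```python
def necessary_cause(cf_I, P):
    """Definition 10: the traces indexed by I are a necessary cause
    iff C_S(tr, I) is contained in P."""
    return cf_I <= P

def sufficient_cause(cf_comp_I, P):
    """Definition 11: I is a sufficient cause iff no maximal trace of
    C_S(tr, complement of I) satisfies P."""
    def strict_prefix(p, t):
        return p != t and t[:len(p)] == p
    sup = {w for w in cf_comp_I
           if not any(strict_prefix(w, v) for v in cf_comp_I)}
    return sup.isdisjoint(P)

P = {(), ('req',), ('req', 'ok')}  # prefix-closed property as a trace set
print(necessary_cause({('req', 'ok')}, P))               # -> True
print(sufficient_cause({('req',), ('req', 'fail')}, P))  # -> True
```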
**Example 4** In our database example we have \( \sup C_S(\vec{tr}, \{\text{journal}\}) \cap \mathcal{P} = \emptyset \). By Definition 11, the failure of client is a sufficient cause for the violation of \( \mathcal{P} \), since \( \mathcal{P} \) is still violated in the counterfactual scenario. On the other hand, \( \sup C_S(\vec{tr}, \{\text{client}\}) \cap \mathcal{P} \neq \emptyset \), thus the failure of journal is not a sufficient cause for the violation of \( \mathcal{P} \).
5.4 Properties
Necessary causality is a safety property whereas checking sufficient causality amounts to verifying a liveness property on the counterfactual language. The following results show that our analysis does not blame any set of innocent components, and that it finds a necessary and a sufficient cause for every system-level failure.
**Theorem 1 (Soundness)** Each cause contains an incorrect trace.
**Proof 1** Consider a set \( I \subseteq \{ i \mid tr_i \in S_i \} \) and a log \( \vec{tr} = (tr_1, \ldots, tr_n) \). We show that the set of traces indexed by \( I \) is neither a necessary nor a sufficient cause for the violation of \( \mathcal{P} \) in \( \vec{tr} \). If all components in \( I \) satisfy their specifications, then \( (tr_1^*, \ldots, tr_n^*) = \mathrm{UP}_{\vec{S}}(\vec{tr}, I) = \vec{tr} \). By the hypothesis of Definition 10 there exists a trace \( tr \in B \) such that \( tr \notin \mathcal{P} \land \forall i : \pi_i(tr) = tr_i = tr_i^* \), hence \( tr \in C_S(\vec{tr}, I) \). Thus \( I \) is not a necessary cause according to Definition 10.
For sufficient causality, counterfactuals are computed by extending the unaffected prefixes \( (tr_1^*, \ldots, tr_n^*) = \mathrm{UP}_{\vec{S}}(\vec{tr}, \overline{I}) \). Since \( \overline{I} \) contains all components that violate their specifications, all projections of the counterfactual traces satisfy the component specifications, and hence \( C_S(\vec{tr}, \overline{I}) \subseteq \mathcal{P} \) since \( (S, \mathcal{P}) \) is a consistently specified system. Moreover, since the unaffected prefixes allow by construction for a common system-level trace whose projections extend them, there exists a system-level trace \( tr' \in B \) such that \( tr' \in C_S(\vec{tr}, \overline{I}) \). Thus \( (\sup C_S(\vec{tr}, \overline{I})) \cap \mathcal{P} \neq \emptyset \), and \( I \) is not a sufficient cause according to Definition 11.
**Theorem 2 (Completeness)** Each violation of \( \mathcal{P} \) has a necessary and a sufficient cause.
**Proof 2** Let \( \vec{tr} = (tr_1, \ldots, tr_n) \) and \( I = \{ i \mid tr_i \notin S_i \} \). Due to the duality of necessary and sufficient causality, the proof of completeness for necessary (resp. sufficient) causality is similar to the proof of soundness for sufficient (resp. necessary) causality:
For necessary causality, the vector of unaffected prefixes is \( (tr_1^*, \ldots, tr_n^*) = \mathrm{UP}_{\vec{S}}(\vec{tr}, I) \). By construction of \( tr_i^1 \), and thus of \( tr_i^* \), the traces \( tr_i^* \) are prefixes of the traces \( tr_i \) and satisfy the component specifications. Since \( (S, \mathcal{P}) \) is consistently specified, \( \|_{i=1}^n S_i \) satisfies \( \mathcal{P} \); since \( \mathcal{P} \) is moreover prefix-closed, all traces satisfying the condition of Definition 10 also satisfy \( \mathcal{P} \). Hence, \( I \) is a necessary cause for the violation of \( \mathcal{P} \) in \( \vec{tr} \).
For sufficient causality, let \( (tr_1^*, \ldots, tr_n^*) = \mathrm{UP}_{\vec{S}}(\vec{tr}, \overline{I}) \). By the choice of \( I \), \( tr_i^* = tr_i \) for all \( i \). We thus have \( C_S(\vec{tr}, \overline{I}) \subseteq \vec{tr}^{\uparrow} \), thus \( (\sup C_S(\vec{tr}, \overline{I})) \cap \mathcal{P} = \emptyset \). It follows that \( I \) is a sufficient cause for the violation of \( \mathcal{P} \) in \( \vec{tr} \).
6 Application to Synchronous Data Flow
In this section we use the general framework to model a synchronous data flow example. Consider a simple filter that propagates, at each clock tick, the input when it is stable in the sense that it has not changed since the last tick, and holds the output when the input is unstable. Using
Lustre-like syntax the filter can be written as follows:
\[
\begin{align*}
\text{change} &= \text{false} \rightarrow \text{in} \neq \text{pre}(\text{in}) \\
\text{out} &= \begin{cases}
\text{in} & \text{if } \neg \text{change} \\
\text{h} & \text{otherwise}
\end{cases}
\end{align*}
\]
That is, the output of component change is initially false, and subsequently true if and only if the input in has changed between the last and the current tick. h latches the previous value of out; its value is \( \bot \) ("undefined") at the first instant. out is equal to the input if change is false, and equal to h otherwise. Thus, each signal consists of an infinite sequence of values, e.g., \( \text{change} = \langle \text{change}_1, \text{change}_2, \ldots \rangle \).
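The equations can be executed directly. The following simulation is our own illustration (using `None` for the undefined value \( \bot \)); it reproduces the flows of the correct log shown in Figure 4:

```python
def run_filter(inputs):
    """Simulate the stabilizing filter: propagate the input when it is
    stable (unchanged since the last tick), otherwise hold the output."""
    change, h, out = [], [], []
    for i, x in enumerate(inputs):
        ch = i > 0 and x != inputs[i - 1]   # change = false -> in <> pre(in)
        hv = out[i - 1] if i > 0 else None  # h latches pre(out); None = "undefined"
        change.append(ch)
        h.append(hv)
        out.append(x if not ch else hv)     # out = if not change then in else h
    return change, h, out

change, h, out = run_filter([0, 0, 3, 2, 2])
print(change)  # -> [False, False, True, True, False]
print(out)     # -> [0, 0, 0, 0, 2]
```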
Figure 3 visualizes the architecture and signal names. We formalize the system as follows.
- \( \Sigma_{ch} = \mathbb{R} \times \mathbb{B} \times \mathbb{N} \), where the first two components stand for the current value of the input to change and the output from change, and the third component is the index of the clock tick. Similarly, let \( \Sigma_h = \mathbb{R} \times (\mathbb{R} \cup \{\bot\}) \times \mathbb{N} \) and \( \Sigma_{out} = \mathbb{R} \times (\mathbb{R} \cup \{\bot\}) \times \mathbb{B} \times \mathbb{R} \times \mathbb{N} \). In particular, for component h the tuple only encompasses the input at the previous instant, which will allow us to log the values on which the specified current output depends.
- \( \mathcal{S}_{ch} = \{ (r_1, r_2, \ldots) \in \Sigma_{ch}^* \mid r_i = (\text{in}_i, \text{change}_i, i) \land \text{change}_1 = \text{false} \land (i \geq 2 \implies \text{change}_i = (\text{in}_i \neq \text{in}_{i-1})) \} \) is the specification of change. Similarly, \( \mathcal{S}_h = \{ (r_1, r_2, \ldots) \in \Sigma_h^* \mid r_i = (\text{out}_{i-1}, \text{h}_i, i) \land (i \geq 2 \implies \text{h}_i = \text{out}_{i-1}) \} \) and
\[
\mathcal{S}_{out} = \left\{ (r_1, r_2, \ldots) \in \Sigma_{out}^* \,\middle|\, r_i = (\text{in}_i, \text{h}_i, \text{change}_i, \text{out}_i, i) \land \text{out}_i = \begin{cases} \text{in}_i & \text{if } \neg\text{change}_i \\ \text{h}_i & \text{otherwise} \end{cases} \right\}
\]
- \( \Sigma = \{ (r_{ch}, r_h, r_{out}) \in \Sigma_{ch} \times \Sigma_h \times \Sigma_{out} \mid r_{ch} = (\ldots, i_1) \land r_h = (\ldots, i_2) \land r_{out} = (\ldots, i_3) \land i_1 = i_2 = i_3 \} \) is the system alphabet (where all components react synchronously).
- \( B = \{ (r_1, r_2, \ldots) \in \Sigma^* \cup \Sigma^\omega \mid \forall i : r_i = ((\text{in}_i^{ch}, \text{change}_i, i), (\text{out}_{i-1}^{h}, \text{h}_i, i), (\text{in}_i^{out}, \text{h}_i^{out}, \text{change}_i^{out}, \text{out}_i, i)) \land \text{in}_i^{ch} = \text{in}_i^{out} \land \text{change}_i = \text{change}_i^{out} \land \text{h}_i = \text{h}_i^{out} \land \text{out}_{i-1}^{h} = \text{out}_{i-1} \} \) is the set of possible behaviors, meaning that connected flows are equal.
| | tick 1 | tick 2 | tick 3 | tick 4 | tick 5 |
|---|---|---|---|---|---|
| change: \( (\text{in}_i, \text{change}_i) \) | (0, false) | (0, false) | (3, true) | (2, true) | (2, false) |
| h: \( (\text{out}_{i-1}, \text{h}_i) \) | \( (\bot, \bot) \) | (0, 0) | (0, 0) | (0, 0) | (0, 0) |
| out: \( (\text{in}_i, \text{change}_i, \text{h}_i, \text{out}_i) \) | (0, false, \( \bot \), 0) | (0, false, 0, 0) | (3, true, 0, 0) | (2, true, 0, 0) | (2, false, 0, 2) |
Figure 4: A correct log of the filter.
- \( \mathcal{P} = \{ (r_1, r_2, \ldots) \in B \mid \forall i : r_i = (\ldots, (\ldots, \text{out}_i, \ldots), \ldots) \land ((\text{out}_i = \text{out}_{i+1}) \lor (\text{out}_{i+1} = \text{out}_{i+2})) \} \) is the stability property, meaning that there are no two consecutive changes in the output.
A log of a valid execution is shown in Figure 4 (where the tick number is omitted).
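The stability property can be checked over a finite output sequence as follows (a sketch; the encoding of the output flow as a plain list is ours):

```python
def stable(out):
    """Stability property P: no two consecutive changes, i.e. for
    every i, out[i] == out[i+1] or out[i+1] == out[i+2]."""
    return all(out[i] == out[i + 1] or out[i + 1] == out[i + 2]
               for i in range(len(out) - 2))

print(stable([0, 0, 0, 0, 2]))  # -> True  (the correct log of Figure 4)
print(stable([0, 3, 2, 2]))     # -> False (two consecutive changes)
```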
Figure 5a shows the logs of a faulty execution. Two components violate their specifications (incorrect values are underlined): change and $h$, both at the third instant. The stability property $\mathcal{P}$ is violated at the fourth output. Let us apply our definitions in order to analyze causality of each of the two faulty components.
- In order to check whether change is a necessary cause, we first compute the unaffected prefixes \( \mathrm{UP}_{\vec{S}}(\vec{tr}, \{\text{change}\}) \) with respect to the violation by change, as shown in Figure 5b. Next we compute the set of counterfactuals, according to Definition 9, as \( \vec{tr}'^{\uparrow} \), where \( \vec{tr}' \) is shown in Figure 5c. \( \mathcal{P} \) is satisfied by the (unique) counterfactual trace, hence change is a necessary cause. We can show, using the same construction, that \( h \) is not a sufficient cause for the violation of \( \mathcal{P} \).
- In order to check whether change is a sufficient cause, we compute the unaffected prefixes \( \mathrm{UP}_{\vec{S}}(\vec{tr}, \{h\}) \) with respect to the violation by \( h \), as shown in Figure 5d. Due to change being (incorrectly) true, the only possible counterfactual trace according to Definition 9 is the one shown in Figure 5e. \( \mathcal{P} \) is satisfied by the unique counterfactual trace, hence change is not a sufficient cause. We can show, using the same construction, that \( h \) is a necessary cause for the violation of \( \mathcal{P} \).
The log of Figure 5a shows a case of joint causation: both change and $h$ are necessary causes for the violation of $\mathcal{P}$ in $\vec{tr}$.
7 Related Work
Causality has been studied for a long time in different disciplines (philosophy, mathematical logic, physics, law, etc.) before receiving increasing attention in computer science during the last decade. Hume discusses definitions of causality in [15]:
Suitably to this experience, therefore, we may define a cause to be an object, followed by another, and where all the objects similar to the first are followed by objects similar to the second. Or in other words where, if the first object had not been, the second never had existed.
In computer science, various approaches to causality analysis have been developed recently. They differ in their assumptions on what pieces of information are available for causality analysis: a model of causal dependencies, program code, a program as a black box that can be used to replay different scenarios, the observed actual behavior (e.g. execution traces, or inputs and outputs), and/or the expected behavior (that is, component specifications). Existing frameworks consider different subsets of these entities. Below we review the most significant approaches for each of these settings.
**A specification and an observation.** In [9], causality of components for the violation of a system-level property under the BIP interaction model [10] has been defined using a rudimentary definition of counterfactuals where only faulty traces are substituted, but not their effects on the traces of other components. This definition suffers from the conditions for causality being vacuously true when no consistent counterfactuals exist. A slightly improved approach is used in [26] for blaming in real-time systems. A preliminary version of the formalization presented here is instantiated in [11] to analyze necessary causality on real-time systems whose component specifications and expected system-level property are modeled as timed automata.
With a similar aim of independence from a specific model of computation as in our work, [24] formalizes a theory of diagnosis in first-order logic. A diagnosis for an observed incorrect behavior is essentially defined as a minimal set of components whose failure explains the observation.
**A causal model.** [14] proposes what has become the most influential definition of causality for computer science so far, based on a model over a set of propositional variables partitioned into exogenous variables $U$ and endogenous variables $V$. A function $F_X$ associated with each endogenous variable $X \in V$ determines the value of $X$ from the values of the other variables.
**A model or program and a trace.** In several applications of Halpern and Pearl's SEM, the model is used to encode and analyze one or more execution traces, rather than a behavioral model.
**A set of traces.** [19] extends the definition of actual causality of [14] to totally ordered sequences of events, and uses this definition to construct a fault tree from a set of traces. Using a probabilistic model, the fault tree is annotated with probabilities. The accuracy of the diagnosis depends on the number of traces used to construct the model. An approach for on-the-fly causality checking is presented in [22].
**An input and a black box.** Delta debugging [28] is an efficient technique for automatically isolating a cause of some error. Starting from a failing input and a passing input, delta debugging finds a pair of a failing and a passing input with minimal distance. The approach is syntactical and has been applied to program code, configuration files, and context switching in schedules. By applying delta debugging to program states represented as memory graphs, the analysis has been further refined to program semantics. Delta debugging isolates failure-inducing causes in the input of a program, and thus requires the program to be available.
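The core of delta debugging can be sketched as the classical `ddmin` reduction loop. The following simplified, complement-only variant (our own rendering, not Zeller's exact algorithm) returns a 1-minimal failing input, i.e. removing any single remaining element makes the failure disappear:

```python
def ddmin(items, fails):
    """Reduce a failing input `items` to a 1-minimal failing input,
    where `fails(subset)` reports whether the failure still occurs."""
    n = 2
    while len(items) >= 2:
        # split into n roughly equal chunks
        size = len(items) / n
        chunks = [items[round(i * size):round((i + 1) * size)] for i in range(n)]
        reduced = False
        for i in range(n):
            complement = [x for j, c in enumerate(chunks) if j != i for x in c]
            if fails(complement):          # failure persists without chunk i
                items, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(items):
                break                      # already 1-minimal
            n = min(n * 2, len(items))     # refine granularity
    return items

# Toy failure: occurs whenever both 'x' and 'y' are present.
fails = lambda inp: 'x' in inp and 'y' in inp
print(ddmin(list("axbyc"), fails))  # -> ['x', 'y']
```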
8 Conclusion
In this article we have developed a general framework for causality analysis of system failures. Applications include identification of faulty components in black-box testing, recovery of critical systems at runtime, and determination of the liability of component providers in the aftermath of a system failure.
For the sake of simplicity and generality we have provided a low-level formalization of blaming. The tagged signal model [21] may be used as a formal basis for representing specific models of communication in our approach. As analyzing necessary (resp. sufficient) causality amounts to verifying a safety (resp. liveness) property on the possibly infinite language \( C_S(\vec{tr}, I) \), blaming is undecidable in general. In order to make the definitions of causality effectively verifiable and automate the analysis, we will reformulate them as operations on symbolic models, and use efficient data structures such as the event structures used in [3] for distributed diagnosis. Previous versions of our technique have been instantiated for a subset of the BIP component framework [8] and networks of timed automata [11], implemented in a prototype tool called LoCA (Logical Causality Analyzer), and tested on several case studies.
In closed-loop control systems, an alternative (counterfactual) behavior of the controller is likely to impact the physical process. For instance, when analyzing causality in a cruise control system, a counterfactual trace with different brake or throttle control will impact the speed of the car. Therefore the model of computation has to be expressive enough to include a faithful model of the physical environment in the system.
The presented approach is not a push-button solution for blaming. For instance, in the case of two component failures \( f_1 \) and \( f_2 \) where \( f_2 \) does not lie within the unaffected prefixes of \( f_1 \), our framework lacks information to decide whether \( f_2 \) was entailed by \( f_1 \), or occurred independently. Future work will refine the approach by taking additional available pieces of information into account. For example, in some situations such as post-mortem analysis the (black-box) components may be available, in which case counterfactual scenarios could be replayed on the system to evaluate their outcome more precisely.
Going a step further, we intend to investigate how to ensure accountability [20] by construction, that is, designing systems in such a way that, under some hypotheses, causes for system-level failures can be determined without ambiguity. To this end, the code of the components should be instrumented so as to log relevant information for analyzing causality with respect to a set of properties to be monitored. For instance, precise information on the actual (partial) order of execution can be preserved by tagging the logged events with vector clocks [7, 23]. Whenever component failures are not observable, fault diagnosis [25] has to be applied before performing causality analysis. Similarly to the approach of [3], which derives from a privacy policy logging requirements that produce minimal but sufficient logs for auditing the policy, an interesting direction for future work will be to study how to automatically determine, from the system signature, a minimal logging requirement for blaming.
References
Parallel Combination of Abstract Interpretation and Model-Based Automatic Analysis of Software
Patrick Cousot and Radhia Cousot
École Normale Supérieure DMI, 45, rue d’Ulm 75230 Paris cedex 05 France
cousot@dmi.ens.fr http://www.dmi.ens.fr/~cousot
CNRS & École Polytechnique LIX 91440 Palaiseau cedex France rcousot@lix.polytechnique.fr http://lix.polytechnique.fr/~radhia
Abstract
Formal methods combining abstract interpretation and model-checking have been considered for automated analysis of software.
A first category concerns symbolic methods where properties of the system are approximated using abstract domains. In this case, one considers approximated representations of sets of states.
A second category concerns abstract model checking where the semantics of an infinite transition system is abstracted to get a finite approximation on which temporal-logic/μ-calculus model checking can be directly applied. In this other case, one considers approximated representations of sets of transitions.
The objective of this paper is to develop a third complementary possibility of interaction between abstract interpretation and model-checking based software analysis methods. Here no approximation is made on sets of states or sets of transitions. Instead one performs an analysis of the system by abstract interpretation. This information is used to restrict the space of states and transitions which need to be explored during the verification process. The computational overhead of computing an abstract interpretation of a model to be checked can be avoided by doing the computation in parallel with the model checking and using intermediate abstract interpretation results as they become available.
1 Introduction
In the design and development of software using model-based automatic analysis – such as model checking or state space exploration – one is confronted with high complexity for very large systems and undecidability as soon as one has to consider infinite sets of states. Consequently, not all properties of all systems can be automatically verified in finite or reasonable time. Some form of approximation has to be considered. For example, syntax-driven proof techniques ultimately rely on some form of assistance from the user. Although one can prove very precise assertions with an interactive automatic theorem prover, the technique is necessarily approximate in the sense that the output of the theorem prover may not be understandable by the user and/or the user's answers may mislead the theorem prover into dead-ends. Model-checking [Clarke et al. 1983] places no restriction on verifiable properties (CTL*, μ-calculus and the like) but considers only (quasi-)finite state systems. Program analysis by abstract interpretation [Cousot and Cousot 1977, 1979; Cousot 1996] places no restriction on systems/programming languages (which can be imperative, functional, logic, object-oriented, parallel) but places restrictions on verifiable properties since abstract properties are necessarily approximate. Both model-checking and abstract interpretation have benefited from mutual cross-fertilization. In particular model-checking can now consider infinite-state systems whereas in abstract interpretation it is common to consider properties significantly more complex than safety/invariance (see e.g. Dams et al. 1994; Fernandez 1993; Halbwachs 1994 and Steffen 1991).
We would like to consider here abstract model-based automatic analysis that is the model-based automatic analysis methods which are related to abstract interpretation and suggest further possible interactions.
First, symbolic verification [Burch et al. 1992; Henzinger et al. 1992; Daws et al. 1996] makes use of a compact symbolic formula representation of (the characteristic function
of) sets of states. For example, the symbolic formula can be encoded by BDDs [Akers 1978; Bryant 1986] or by affine inequality relations [Cousot and Halbwachs 1978]. Such abstract domains are of very common use when abstract interpretation is applied to program static analysis. Some symbolic abstract domains satisfy the chain condition [Karr 1976] and this directly guarantees the finite convergence of the analysis. However, most symbolic domains are very large or infinite so that, if one does not want to abandon the formal verification for lack of space or time, some form of widening [Cousot and Cousot 1992] must ultimately be used to enforce rapid convergence of the analysis algorithms. Examples of widenings are given by Halbwachs 1993, 1994 for affine inequality relations and Mauborgne 1994 for BDDs. In this case, one does not consider a faithful symbolic description of the software properties but instead an approximation of sets of states. The corresponding loss of information may be without consequences for the verification [Henzinger and Ho 1995; Jackson 1994]; otherwise the verification fails.
In a second form of reduction by abstraction, one considers exact properties of an approximate semantics. More precisely, one does not consider a faithful description of the software runtime behavior but instead an approximation of this semantical behavior. Once again, abstract interpretation has been used to obtain such sound approximations. Here, the main idea for model checking or state exploration of infinite or very large finite transition systems is to use an abstract conservative finite transition system on which existing algorithms designed for finite automata are directly applicable. In this context, conservative means upper-approximation for safety (∀) properties and lower-approximation for liveness (∃) properties. This semi-verification idea was first introduced by [Clarke et al. 1992] and progressively refined to cope with wider classes of temporal-logic [Kelb 1994; Dams et al. 1994; Cleaveland et al. 1995] or μ-calculus formulae [Graf and Loiseaux 1993; Loiseaux et al. 1995; Cridlig 1995, 1996]. Partial-order approaches can be understood in this way, the loss of information being in this case without consequences on the completeness [Valmari 1993].
We would like here to suggest a new, third possible interaction between abstract interpretation and model-based automatic analysis of infinite systems [Cousot 1995]. It is based on the remark that although the transition system is infinite, all behaviors considered in practice may be finite, e.g. when there is a termination requirement, or more generally a liveness requirement excluding infinite behaviors. In this case, abstract interpretation may be used, on the infinite state system, to eliminate the impossible, potentially infinite behaviors. In the favorable case, this preliminary analysis by abstract interpretation may be used to restrict the states which must be explored to a finite number. Even in the case of finite but very large state spaces, the method can be useful to reduce the part of the state graph which needs to be explored for verification, in parallel with this verification, that is at almost no cost in time.
2 Combining Abstract Interpretation and Model-Checking
The general idea is to improve the efficiency of symbolic model checking algorithms for verifying concurrent systems by using properties of the system that can be automatically inferred by abstract interpretation.
2.1 Transition Systems
The considered (real-time) concurrent system is assumed to be modeled by a transition system, that is, a tuple (S, t, I, F) where S is the set of states, t ⊆ S × S is the transition relation, I ⊆ S is the set of initial states and F ⊆ S is the set of final states. There is no finiteness restriction on the set S of states. Moreover initial and final states must be understood in a broad sense. For a terminating program these can be the states in which execution can start and end. For a non-terminating process these can be respectively the states in which a resource is requested and those in which it has later been allocated. For simplicity, we assume that initial and final states are disjoint (I ∩ F = ∅). An example of transition system is given in Figure 1. Such transition systems have been used to introduce abstract interpretation in a language independent way, since they model small-step operational semantics of programs [Cousot and Cousot 1979].
The complement ¬P of a set of states P ⊆ S is {s ∈ S | s ∉ P}. The left-restriction P ◁ t of a relation t to P ⊆ S is {(s, s′) ∈ t | s ∈ P}. The composition of t with itself is t ∘ t ≝ {(s, s′′) | ∃ s′ ∈ S : (s, s′) ∈ t ∧ (s′, s′′) ∈ t}. The powers of the transition relation t are defined inductively by t⁰ ≝ 1_S (the identity relation on S) and tⁿ⁺¹ ≝ t ∘ tⁿ for n ≥ 0. The reflexive transitive closure t* of the transition relation t is t* ≝ ∪_{n≥0} tⁿ.
The pre-image pre[t] P of a set P ⊆ S of states by a transition relation t is pre[t] P ≝ {s | ∃ s′ : (s, s′) ∈ t ∧ s′ ∈ P}. The post-image post[t] P of a set P ⊆ S of states by a transition relation t is post[t] P ≝ {s′ | ∃ s : s ∈ P ∧ (s, s′) ∈ t}. This is illustrated in Figure 2(a) and Figure 2(b). We have the least fixpoint characterizations pre[t*] P = lfp λX. P ∪ pre[t] X and post[t*] P = lfp λX. P ∪ post[t] X (see e.g. Cousot 1978, 1981).
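On a small finite instance these definitions can be exercised directly; the following Python sketch (our illustration, not code from the paper) represents a transition relation as a set of pairs and computes post[t] P, pre[t] P and post[t*] P, the latter by Kleene iteration of the least-fixpoint characterization:

```python
# Illustration only: transition relations as sets of pairs of states.

def post(t, P):
    """post[t] P: successor states of P."""
    return {s2 for (s1, s2) in t if s1 in P}

def pre(t, P):
    """pre[t] P: predecessor states of P."""
    return {s1 for (s1, s2) in t if s2 in P}

def post_star(t, P):
    """post[t*] P = lfp X. P ∪ post[t] X, by Kleene iteration."""
    X = set(P)
    while True:
        X2 = X | post(t, X)
        if X2 == X:
            return X
        X = X2

t = {(1, 2), (2, 3), (3, 2), (4, 5)}
assert post(t, {1}) == {2}
assert pre(t, {2}) == {1, 3}
assert post_star(t, {1}) == {1, 2, 3}
```

Computing pre[t*] P is symmetric, iterating pre instead of post.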
2.2 Minimum Delay Problem
The minimum delay problem (see e.g. Halbwachs 1993) consists in computing the length \( \ell \) of (i.e. number of edges in) a shortest path from an initial state in \( I \) to a final state in \( F \).
\[
\ell \overset{\Delta}{=} \min\{ n \mid \exists s \in I, s' \in F : (s, s') \in t^n \} \qquad \min \emptyset \overset{\Delta}{=} \infty
\]
An example of transition system and corresponding minimum delays is given in Figure 3(a).
The following symbolic model checking minimum delay algorithm is due to Campos et al. 1995:
```
procedure minimum1(I, F);
  R := I;
  n := 0;
  stable := (R ∩ F ≠ ∅);
  while ¬stable do
    R′ := R ∪ post[t] R;
    n := n + 1;
    stable := (R = R′) ∨ (R′ ∩ F ≠ ∅);
    R := R′;
  od;
  return if (R ∩ F ≠ ∅) then n else ∞;
```
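As a concrete illustration, here is a hypothetical Python transcription of “minimum1” over an explicit finite transition relation represented as a set of pairs (the paper instead manipulates symbolic state sets):

```python
import math

def post(t, P):
    return {s2 for (s1, s2) in t if s1 in P}

def minimum1(t, I, F):
    R, n = set(I), 0
    stable = bool(R & F)
    while not stable:
        R2 = R | post(t, R)                 # R' := R ∪ post[t] R
        n += 1
        stable = (R == R2) or bool(R2 & F)
        R = R2
    return n if R & F else math.inf

t = {(1, 2), (2, 3), (1, 4)}
assert minimum1(t, {1}, {3}) == 2           # shortest path 1 -> 2 -> 3
assert minimum1(t, {1}, {9}) == math.inf    # 9 is unreachable
```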
An example of execution trace of the “minimum1” algorithm is given in Figure 4(a). In order to consider infinite state sets, it is necessary to enforce finite convergence. Abstract model checking techniques, with abstractions of transitions, are not applicable since they would lead to erroneous results: only a lower or upper bound of the minimum delay can be obtained in this way. Classical symbolic methods for speeding up model checking algorithms, such as BDDs to encode the boolean formulas representing sets of states, the transition relation, and so on, or “on-the-fly” property checking without state graph generation, are applicable in this case. However, there is a serious potential inefficiency problem because of useless exploration of dead-end states which are reachable but cannot lead to a final state. These dead-end states are marked \( \circ \) in Figure 4(a).
However, we can still use abstract interpretation to cut down the size of the model-checking search space by determining a super-set \( A \) of the ascendants of the final states (the principle of determination of \( A \) by abstract interpretation will be precisely defined in Section 4.1):
\[
\text{pre}[t^*] F \subseteq A,
\]
as illustrated in Figure 3(b), which can then be used to restrict the exploration of the transition graph for computing the minimum delay. The revisited minimum delay algorithm is now:
```
procedure minimum2(I, F);
  R := I;
  n := 0;
  stable := (R ∩ F ≠ ∅);
  while ¬stable do
    R′ := R ∪ (post[t] R ∩ A);
    n := n + 1;
    stable := (R = R′) ∨ (R′ ∩ F ≠ ∅);
    R := R′;
  od;
  return if (R ∩ F ≠ ∅) then n else ∞;
```
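In the same Python style as before, “minimum2” only differs by intersecting each post-image with the restriction set A; here A is computed exactly as pre[t*] F for the demonstration, whereas in the paper it is an over-approximation obtained by abstract interpretation:

```python
import math

def post(t, P):
    return {s2 for (s1, s2) in t if s1 in P}

def pre(t, P):
    return {s1 for (s1, s2) in t if s2 in P}

def pre_star(t, P):
    """pre[t*] P by Kleene iteration."""
    X = set(P)
    while True:
        X2 = X | pre(t, X)
        if X2 == X:
            return X
        X = X2

def minimum2(t, I, F, A):
    R, n = set(I), 0
    stable = bool(R & F)
    while not stable:
        R2 = R | (post(t, R) & A)   # explore only states that may still reach F
        n += 1
        stable = (R == R2) or bool(R2 & F)
        R = R2
    return n if R & F else math.inf

t = {(1, 2), (2, 3), (1, 4), (4, 5)}   # 4 and 5 are dead ends w.r.t. F = {3}
A = pre_star(t, {3})                   # {1, 2, 3}; any over-approximation works too
assert minimum2(t, {1}, {3}, A) == 2
```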
A trace of this algorithm “minimum2” is given in Figure 4(b).
Observe that:
- any upper-approximate solution \( \text{pre}[t^*] F \subseteq A \) can be used in algorithm “minimum2”;
- the upper approximation \( A \) of \( \text{pre}[t^*] F \) which is used in the loop can be different at each iteration; and
- in the worst possible case, when the analysis by abstract interpretation is totally unfruitful, we have \( A = S \) in which case algorithm “minimum2” simply amounts to algorithm “minimum1”.
2.3 Maximum Delay Problem
The maximum delay problem consists in computing the length \( m \) of (i.e. number of edges in) a longest path from an initial state in \( I \) to a final state in \( F \):
\[
m \overset{\Delta}{=} \max\{ n \mid \exists s \in I, s' \in F : (s, s') \in ((\lnot F) \lhd t)^n \} \qquad \max \emptyset \overset{\Delta}{=} \infty
\]
An example of maximum delays is given in Figure 5(a). The following maximum delay algorithm has been proposed by Campos et al. 1995.
```
procedure maximum1(I, F);
  R′ := S;
  R := S − F;
  n := 0;
  while (R ≠ R′ ∧ R ∩ I ≠ ∅) do
    R′ := R;
    n := n + 1;
    R := pre[t] R′ ∩ (S − F);
  od;
  return if (R′ = R) then ∞ else n;
```
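A hypothetical Python transcription of “maximum1”, with the initialization R := S − F written out explicitly (the algorithm iterates pre-images backward from the non-final states):

```python
import math

def pre(t, P):
    return {s1 for (s1, s2) in t if s2 in P}

def maximum1(t, S, I, F):
    R2 = set(S)              # plays the role of R'
    R = S - F
    n = 0
    while R != R2 and R & I:
        R2 = R
        n += 1
        R = pre(t, R2) & (S - F)
    return math.inf if R == R2 else n

assert maximum1({(1, 2), (2, 3)}, {1, 2, 3}, {1}, {3}) == 2
# A cycle avoiding F violates the hypotheses below and yields the upper bound ∞:
assert maximum1({(1, 2), (2, 1), (1, 3)}, {1, 2, 3}, {1}, {3}) == math.inf
```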
Figure 6: Execution trace of maximum algorithm
An example of execution trace of the “maximum1” algorithm is given in Figure 6(a). Although this is left unspecified by Campos et al. 1995, the correctness of this maximum delay algorithm relies on several hypotheses. First, the sets of initial states \( I \) and final states \( F \) must be nonempty and disjoint. Second, there must exist at least one path from some initial state to some final state. Third, there must be no path starting from an initial state, ending in a blocking state (with no successor by the transition relation) and never passing through a final state. Fourth and finally, there must be no infinite or endless cyclic path starting from an initial state and never passing through a final state. If one of these hypotheses is not satisfied, the algorithm maximum1 returns an upper bound of the maximal path length.
Once again abstraction of the transition system would also provide an upper bound of the maximal path length hence would be incorrect. Exact symbolic methods have a potentially serious inefficiency problem because of useless exploration of dead-end states (marked \( \Phi \) in Figure 6(a)) which are not reachable from initial states or cannot lead to a final state. Observe that partial-order methods [Valmari 1993], which are based on the fact that in concurrent systems, the total effect of a set of actions is often independent of the order in which the actions are taken, would locally reduce the number of considered paths, but would not perform a global elimination of the remaining paths that are useless for the verification.
Once again an automatic analysis by abstract interpretation can determine a super-set \( U \) of the descendants of the initial states \( I \) which are ascendants of the final states \( F \) (the principle of determination of \( U \) by abstract interpretation will be precisely defined in Section 4.3):
\[
U \supseteq \text{post}[t^*] I \cap \text{pre}[t^*] F = \{ s \mid \exists s' \in I, s'' \in F : (s', s) \in t^* \wedge (s, s'') \in t^* \}
\]
The set of descendants of the initial states $I$ which are ascendants of the final states $F$ is illustrated by Figure 5(b). This leads to a revisited maximum delay algorithm, as follows:
```
procedure maximum2(I, F);
  R′ := S;
  n := 0;
  R := U − F;
  while (R ≠ R′ ∧ R ∩ I ≠ ∅) do
    R′ := R;
    n := n + 1;
    R := pre[t] R′ ∩ (U − F);
  od;
  return if (R′ = R) then ∞ else n;
```
An example of execution trace of the “maximum2” algorithm is given in Figure 6(b). Observe that any upper-approximation post[t*] I ∩ pre[t*] F ⊆ U of the descendants of the initial states I which are ascendants of the final states F is correct, since in the worst possible case, when U = S, algorithm “maximum2” simply amounts to “maximum1”. Moreover, a different upper approximation U of post[t*] I ∩ pre[t*] F can be used at each iteration in the loop. Notice also that this restriction idea applies both to exhaustive and on-the-fly state space exploration techniques.
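The restriction by U can be sketched in the same Python style, computing U exactly for the demo (the paper obtains an over-approximation of it by abstract interpretation):

```python
import math

def post(t, P):
    return {s2 for (s1, s2) in t if s1 in P}

def pre(t, P):
    return {s1 for (s1, s2) in t if s2 in P}

def star(img, t, P):
    """lfp X. P ∪ img(t, X)."""
    X = set(P)
    while True:
        X2 = X | img(t, X)
        if X2 == X:
            return X
        X = X2

def maximum2(t, S, I, F, U):
    R2, R, n = set(S), U - F, 0
    while R != R2 and R & I:
        R2 = R
        n += 1
        R = pre(t, R2) & (U - F)
    return math.inf if R == R2 else n

t = {(1, 2), (2, 3), (4, 4)}            # the cycle on 4 lies outside every I-to-F path
S, I, F = {1, 2, 3, 4}, {1}, {3}
U = star(post, t, I) & star(pre, t, F)  # post[t*] I ∩ pre[t*] F = {1, 2, 3}
assert maximum2(t, S, I, F, U) == 2     # state 4 is never explored
```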
In the case of symbolic model-checking, say with BDDs (or polyhedra), the intersection pre[t] R′ ∩ (U − F) may be a BDD (or polyhedron) of much greater size than pre[t] R′, although it describes a smaller set of states. In this case, the computation of the intersection is not mandatory, the information being still useful for simplifying the BDD (or polyhedron), e.g. by pruning, in order to reduce its size. Several such operators have been suggested, such as the cofactor [Touati et al. 1990], constrain [Coudert, Berthet, and Madre 1990] or restrict [Coudert, Madre, and Berthet 1990] operators on BDDs and the polyhedron simplification of [Halbwachs and Raymond 1996].
3 Classical Abstract Interpretation Problems
Finding upper (or dually lower) approximations of the sets of:
- descendants post[t*] I of the initial states I [Cousot 1981];
- ascendants pre[t*] F of the final states F [Cousot 1981]; and
- descendants post[t*] I ∩ pre[t*] F of the initial states I which are ascendants of the final states F [Cousot 1978; Cousot and Cousot 1992a]
are classical problems in abstract interpretation with applications to:
- optimizing compilers;
- parallelization, vectorization, partial evaluation, program transformation;
- abstract debugging, and the like.
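On a small finite system these three sets can be computed exactly with the same fixpoint iteration (a sketch with our own helper names; on infinite systems abstract interpretation over-approximates them):

```python
def post(t, P):
    return {s2 for (s1, s2) in t if s1 in P}

def pre(t, P):
    return {s1 for (s1, s2) in t if s2 in P}

def lfp(f, P):
    """lfp X. P ∪ f(X)."""
    X = set(P)
    while True:
        X2 = X | f(X)
        if X2 == X:
            return X
        X = X2

t = {(1, 2), (2, 3), (2, 4), (5, 2)}
I, F = {1}, {3}
D = lfp(lambda X: post(t, X), I)   # descendants of I
A = lfp(lambda X: pre(t, X), F)    # ascendants of F
assert D == {1, 2, 3, 4}
assert A == {1, 2, 3, 5}
assert D & A == {1, 2, 3}          # states on some path from I to F
```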
The following example of PASCAL program analysis of the descendants of the initial states using an interval approximation of set of possible values of variables [Cousot and Cousot 1977; Cousot 1981] has been given by Bourdoncle 1993. All comments \{ ... \} have been generated automatically by the analyzer. They clearly show that non-trivial information about the infinite state space (in this case run-time values of variables at each program points) can be determined automatically by abstract interpretation:
```pascal
program Variant_of_function_91_of_McCarthy;
var X, Y: integer;
function F(X: integer): integer;
begin
  if X > 100 then F := X - 10
  else
    F := F(F(F(F(X + 33))));
end;
begin
  readln(X);
  Y := F(X);
end.
```
It has been proved automatically that the result of $F$ is necessarily greater than or equal to 91, if the call ever terminates.
The following example of approximation of the descendants of the initial states which are ascendants of the final states using interval analysis has also been given by Bourdoncle 1993. The comment { true ? } has been included by the programmer. It is an intermittent assertion specifying the final states, in that it stipulates that program execution should definitely terminate:
```pascal
program Variant_of_function_91_of_McCarthy;
var X, Y: integer;
function F(X: integer): integer;
begin
  if X > 100 then F := X - 10
  else
    F := F(F(F(F(X + 90))));
end;
begin
  readln(X);
  { X > 100 }
  Y := F(X);
  { true ? }
end.
```
The other comment \{ X > 100 \} has been automatically generated by the abstract debugger (for short, the other automatically generated comments are not shown). If not satisfied, the program must necessarily go wrong either because of an inevitable run-time error (such as out of memory) or because of certain nontermination. This is precisely the case because of cycles such as F(100) \rightarrow F(190) \rightarrow F(180) \rightarrow F(170) \rightarrow F(160) \rightarrow F(150) \rightarrow F(140) \rightarrow F(130) \rightarrow F(120) \rightarrow F(110) \rightarrow F(100) \rightarrow \ldots and so on. The intended constant was not 90 but 91! This shows that the restriction of the set of states of a transition system to those that lie on a path from an initial state to a final state by a preliminary abstract interpretation can cut down infinite paths.
4 Parallel Combination of Abstract Interpretation and Model Checking
Abstract interpretation is a theory of semantic approximation [Cousot 1996]. Here approximation means logical implication i.e. inclusion of subsets of states. Moreover the semantics to be approximated is the forward collecting semantics post[t*] I, the backward collecting semantics pre[t*] F [Cousot 1978; Cousot and Cousot 1979] or the descendants post[t*] I ∩ pre[t*] F of the initial states I which are ascendants of the final states F [Cousot 1978; Cousot and Cousot 1992a]. We briefly recall how the upper-approximations D of post[t*] I, A of pre[t*] F and U of post[t*] I ∩ pre[t*] F can be automatically computed by abstract interpretation. This is necessary to show how intermediate abstract interpretation results can be used, as they become available, to reduce the size of the state space to be explored during parallel model-checking.
4.1 Forward Program Analysis by Abstract Interpretation
In order to obtain an upper approximation \( D \) of \( \text{post}[t^*] I = \text{lfp}\, \lambda X.\, I \cup \text{post}[t] X \), one considers a Galois connection
\[
\langle \wp(S), \subseteq \rangle \overset{\gamma}{\underset{\alpha}{\leftrightarrows}} \langle L, \sqsubseteq \rangle
\]
that is, by definition, a pair of maps \( \alpha \in \wp(S) \mapsto L \) and \( \gamma \in L \mapsto \wp(S) \) from the powerset \( \wp(S) \) ordered by subset inclusion \( \subseteq \) into the poset \( \langle L, \sqsubseteq \rangle \) of abstract properties partially ordered by \( \sqsubseteq \), such that:
\[
\forall P \in \wp(S) : \forall Q \in L : \alpha(P) \sqsubseteq Q \iff P \subseteq \gamma(Q).
\]
(Weaker models of abstract interpretation can be considered [Cousot and Cousot 1992b], which are mandatory when considering abstract properties with no best approximation, e.g. Cousot and Halbwachs 1978.) It follows that \( \alpha \) and \( \gamma \) are necessarily monotonic. Moreover any concrete property \( P \in \wp(S) \) has a best (i.e. most precise) upper approximation \( \alpha(P) \) in \( L \), in that \( P \subseteq \gamma(\alpha(P)) \). We write \( \langle \wp(S), \subseteq \rangle \twoheadrightarrow \langle L, \sqsubseteq \rangle \) when \( \alpha \) is surjective (or equivalently \( \gamma \) is injective, or \( \alpha \circ \gamma = 1_L \) is the identity on \( L \)). In this case, the poset \( \langle L, \sqsubseteq \rangle \) is necessarily a complete lattice \( \langle L, \sqsubseteq, \bot, \top, \sqcup, \sqcap \rangle \) with \( \alpha(\wp(S)) = L \). \( \alpha(P) \) should be machine-representable which, in general, may not be the case of \( P \).
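The Galois connection condition can be checked exhaustively on a tiny instance; the sketch below (our construction, not from the paper) uses the interval abstraction of sets of small integers, with bottom represented as None:

```python
import itertools

def alpha(P):
    """Best interval over-approximation of a set of integers."""
    return None if not P else (min(P), max(P))

def gamma(Q, universe):
    """Concretization of an interval within a finite universe."""
    if Q is None:
        return set()
    lo, hi = Q
    return {x for x in universe if lo <= x <= hi}

def leq(Q1, Q2):
    """Interval ordering: Q1 is included in Q2."""
    if Q1 is None:
        return True
    if Q2 is None:
        return False
    return Q2[0] <= Q1[0] and Q1[1] <= Q2[1]

universe = range(5)
sets = [set(c) for r in range(6) for c in itertools.combinations(universe, r)]
intervals = [None] + [(a, b) for a in universe for b in universe if a <= b]
# alpha(P) ⊑ Q  <=>  P ⊆ gamma(Q), checked for every P and Q:
for P in sets:
    for Q in intervals:
        assert leq(alpha(P), Q) == (P <= gamma(Q, universe))
```

In particular every P satisfies the best-approximation property P ⊆ gamma(alpha(P)).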
The appropriate choice of the abstract domain \( L \) is problem dependent. The design and composition of abstract domains has been extensively developed in the abstract interpretation literature and will not be further considered here. For example, Clarke et al. 1992, Cleaveland et al. 1993, Dams et al. 1994 and others implicitly consider a Galois connection \( \langle \wp(S), \subseteq \rangle \leftrightarrows \langle \wp(A), \subseteq \rangle \), where \( S \) is the set of concrete states and \( A \) is the set of abstract states, which is necessarily of the form \( \alpha(X) = \{ h(x) \mid x \in X \} \) and \( \gamma(Y) = \{ x \mid h(x) \in Y \} \) where \( h \in S \mapsto A \) is the approximation mapping. If \( h \) is surjective (as assumed e.g. in Jackson 1994), then so is \( \alpha \), whence the connection is surjective as well.
We then use the fact that if \( \langle L, \sqsubseteq, \bot, \sqcup \rangle \) is a cpo, the pair \( \langle \alpha, \gamma \rangle \) is a Galois connection \( \langle M, \leq \rangle \leftrightarrows \langle L, \sqsubseteq \rangle \), \( T \in M \mapsto M \) and \( T^\sharp \in L \mapsto L \) are monotonic and \( \forall y \in L : \alpha(T(\gamma(y))) \sqsubseteq T^\sharp(y) \), then \( \alpha(\text{lfp}\, T) \sqsubseteq \text{lfp}\, T^\sharp \) and equivalently \( \text{lfp}\, T \leq \gamma(\text{lfp}\, T^\sharp) \), see Cousot and Cousot 1979. So let \( F \in L \mapsto L \) be monotonic and such that \( \alpha \circ (\lambda X.\, I \cup \text{post}[t] X) \circ \gamma \sqsubseteq F \) pointwise. The transfinite iteration sequence \( F^0 \overset{\Delta}{=} \bot \), \( F^{\lambda+1} \overset{\Delta}{=} F(F^\lambda) \) for successor ordinals and \( F^\lambda \overset{\Delta}{=} \bigsqcup_{\beta<\lambda} F^\beta \) for limit ordinals \( \lambda \) is ultimately stationary and converges to \( \text{lfp}\, F \). This directly leads to an iterative algorithm which is finitely convergent when \( L \) satisfies the ascending chain condition (any strictly ascending chain \( x^0 \sqsubset x^1 \sqsubset \cdots \) of elements of \( L \) is necessarily finite).
In general however, the iterates \( F^\lambda \), \( \lambda \geq 0 \), do not converge to \( \text{lfp}\, F \) in finitely many steps, so that one must resort to a widening operator \( \triangledown \), which can be used both to upper-approximate missing lubs (as in e.g. Cousot and Halbwachs 1978) and to enforce finite convergence of increasing iterations [Cousot and Cousot 1977]. The widening operator \( \triangledown \in L \times L \mapsto L \) should be an upper bound (that is \( \forall x, y \in L : x \sqsubseteq x \mathbin{\triangledown} y \) and \( y \sqsubseteq x \mathbin{\triangledown} y \)) and enforce finite convergence (for all increasing chains \( x^0 \sqsubseteq x^1 \sqsubseteq \cdots \), the increasing chain defined by \( y^0 = x^0 \), \( y^{i+1} = y^i \mathbin{\triangledown} x^{i+1} \), \ldots{} is not strictly increasing). The upward iteration sequence with widening is \( \hat{F}^0 \overset{\Delta}{=} \bot \), \( \hat{F}^{n+1} \overset{\Delta}{=} \hat{F}^n \) if \( F(\hat{F}^n) \sqsubseteq \hat{F}^n \) and \( \hat{F}^{n+1} \overset{\Delta}{=} \hat{F}^n \mathbin{\triangledown} F(\hat{F}^n) \) otherwise. It is ultimately stationary and its limit \( \hat{F} \) is a sound upper approximation of \( \text{lfp}\, F \), in that \( \text{lfp}\, F \sqsubseteq \hat{F} \).
If \( F(\hat{F}) \sqsubset \hat{F} \), the iterates \( \check{F}^0 \overset{\Delta}{=} \hat{F} \), \( \check{F}^{n+1} \overset{\Delta}{=} F(\check{F}^n) \) may not finitely converge, so we use a narrowing operator \( \vartriangle \) to speed up the convergence. A narrowing operator \( \vartriangle \in L \times L \mapsto L \) is such that \( \forall x, y \in L : y \sqsubseteq x \Rightarrow y \sqsubseteq (x \mathbin{\vartriangle} y) \sqsubseteq x \), and for all decreasing chains \( x^0 \sqsupseteq x^1 \sqsupseteq \cdots \), the decreasing chain defined by \( y^0 = x^0 \), \( y^{i+1} = y^i \mathbin{\vartriangle} x^{i+1} \), \ldots{} is not strictly decreasing. The downward iteration sequence with narrowing, \( \check{F}^0 \overset{\Delta}{=} \hat{F} \), \( \check{F}^{n+1} \overset{\Delta}{=} \check{F}^n \) if \( F(\check{F}^n) = \check{F}^n \) and \( \check{F}^{n+1} \overset{\Delta}{=} \check{F}^n \mathbin{\vartriangle} F(\check{F}^n) \) otherwise, is ultimately stationary and its limit \( \check{F} \) is a sound upper approximation of \( \text{lfp}\, F \) which is better than the one \( \hat{F} \) obtained by widening. In conclusion \( \text{lfp}\, F \sqsubseteq \check{F} \sqsubseteq \hat{F} \), so that by monotony \( \text{post}[t^*] I = \text{lfp}\, \lambda X.\, I \cup \text{post}[t] X \subseteq \gamma(\check{F}) \subseteq \gamma(\hat{F}) \). It follows that we can choose the upper approximation \( D \) of \( \text{post}[t^*] I \) to be \( D \overset{\Delta}{=} \gamma(\check{F}) \).
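The widening/narrowing iteration can be illustrated on the interval domain with the classical loop `x := 0; while x < 100 do x := x + 1` (a standard textbook instance, not taken from the paper; the transformer f below is our encoding of the loop head):

```python
NEG, POS = float("-inf"), float("inf")   # interval bounds; bottom is None

def join(a, b):
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """a ∇ b: keep stable bounds, push unstable ones to infinity."""
    if a is None: return b
    if b is None: return a
    return (a[0] if a[0] <= b[0] else NEG,
            a[1] if a[1] >= b[1] else POS)

def narrow(a, b):
    """a Δ b (with b ⊑ a): refine only the infinite bounds of a."""
    if a is None or b is None: return None
    return (b[0] if a[0] == NEG else a[0],
            b[1] if a[1] == POS else a[1])

def f(x):
    """Abstract transformer at the loop head: {0} ∪ ((x ∩ (-∞, 99]) + 1)."""
    init = (0, 0)
    if x is None:
        return init
    body = None if x[0] > 99 else (x[0] + 1, min(x[1], 99) + 1)
    return join(init, body)

x = None                       # upward iteration with widening
while True:
    x2 = widen(x, f(x))
    if x2 == x:
        break
    x = x2
assert x == (0, POS)           # widening overshoots the upper bound

while True:                    # downward iteration with narrowing
    x2 = narrow(x, f(x))
    if x2 == x:
        break
    x = x2
assert x == (0, 100)           # narrowing recovers the exact invariant [0, 100]
```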
As already mentioned, the design of the abstract algebra \( \langle L, \sqsubseteq, \bot, \top, \sqcup, \sqcap, \triangledown, \vartriangle \rangle \) and of the transformer \( F \) (usually composed out of primitives \( f_1, \ldots, f_n \)) are problem dependent and will not be further considered here.
4.2 Backward Program Analysis by Abstract Interpretation

The situation is similar for computing an upper approximation \( A \) of \( \text{pre}[t^*] F = \text{lfp}\, \lambda X.\, F \cup \text{pre}[t] X \), using a monotonic \( B \in L \mapsto L \) such that \( \alpha \circ (\lambda X.\, F \cup \text{pre}[t] X) \circ \gamma \sqsubseteq B \) pointwise. (More generally one could consider a different abstract domain for the backward analysis, the generalization being immediate.)
One first uses an upward iteration sequence with widening converging to \( \hat{B} \), followed by a downward iteration sequence with narrowing converging to \( \check{B} \), such that \( \text{lfp}\, B \sqsubseteq \check{B} \sqsubseteq \hat{B} \), whence by monotony \( \text{pre}[t^*] F = \text{lfp}\, \lambda X.\, F \cup \text{pre}[t] X \subseteq \gamma(\check{B}) \subseteq \gamma(\hat{B}) \). It follows that we can choose the upper approximation \( A \) of \( \text{pre}[t^*] F \) to be \( A \overset{\Delta}{=} \gamma(\check{B}) \).
4.3 Combining Forward and Backward Program Analysis by Abstract Interpretation
In order to upper approximate \( \text{post}[t^*] I \cap \text{pre}[t^*] F \), we use \( F \), which is \( \lambda X.\, I \cup \text{post}[t] X \) up to abstraction, and \( B \), which is \( \lambda X.\, F \cup \text{pre}[t] X \) up to abstraction. The limit of the following approximation sequence is always more precise than or equal to the mere intersection of the forward and backward analyses [Cousot 1978; Cousot and Cousot 1992a]:
- \( \hat{U}^0 \) is the limit of the upward iteration sequence with widening for \( F \), and \( \check{U}^0 \) is the limit of the corresponding downward iteration sequence with narrowing;
- \( \hat{U}^{2n+1} \) is the limit of the upward iteration sequence with widening for \( \lambda X.\, \check{U}^{2n} \sqcap B(X) \), and \( \check{U}^{2n+1} \) is the limit of the corresponding downward iteration sequence with narrowing;
- \( \hat{U}^{2n+2} \) is the limit of the upward iteration sequence with widening for \( \lambda X.\, \check{U}^{2n+1} \sqcap F(X) \), and \( \check{U}^{2n+2} \) is the limit of the corresponding downward iteration sequence with narrowing.
Observe that all iterates \( \check{U}^n \) of the downward iteration sequences with narrowing are (after concretization by \( \gamma \)) upper approximations of \( \text{post}[t^*] I \cap \text{pre}[t^*] F \), so that each of them can be used as the set \( U \) in algorithm “maximum2” as soon as it becomes available.
In the case of algorithm “minimum2”, the first iterates \( \hat{B}^0 = \bot, \hat{B}^1, \ldots \) of the upward iteration sequence with widening for \( B \) are not upper approximations of \( \text{pre}[t^*] F \). It follows that one has to choose \( A = S \) while waiting for their limit \( \hat{B} \) to be computed. Once it is available, one can use the concretizations of the iterates \( \check{B}^0 = \hat{B}, \check{B}^1, \ldots \) of the corresponding downward iteration sequence with narrowing as successive values of \( A \) in “minimum2”. Alternatively, while waiting for \( \hat{B} \) to be available, the successive values of \( A \) can be chosen as the iterates of the downward iteration for the greatest fixpoint \( \text{gfp}\, B \), since they are all upper approximations of \( \text{gfp}\, B \sqsupseteq \text{lfp}\, B \) and better than \( S \).
5 Conclusion
Existing combinations of model-checking and abstract interpretation have been concerned with the symbolic representation of abstract properties and the approximation of the state transition relation, in both cases with or without loss of information. Building upon [Cousot 1995], we have proposed another form of combination in which, by a preliminary analysis of the system (in forward, backward or combined direction) or, better, by an analysis performed in parallel with the verification, one can reduce the size of the part of the state graph that has to be explored (in the other direction) for verification by exhaustive or on-the-fly model-checking. The combination comes at almost no cost since the parallel execution of the abstract interpreter and the model checker is asynchronous, abstract properties being used by the model checker as they become available. Other forms of restrictions have been proposed by Halbwachs and Raymond 1996 which are amenable to parallelization in a similar way.
This method, which makes no approximation on the states and transitions of the model, is nevertheless partial since it is not guaranteed that the reduction always leads to a finite state exploration sub-graph. Because of its precision, it should be tried first or in parallel. In case of computational verification costs which remain prohibitive despite the restriction, one can always later resort to the more classical property and transition abstraction.
Remarkably enough, the method then remains applicable to the more abstract model of properties and/or transitions. Indeed, by Cousot and Cousot 1992c, the abstract interpretation of the refined model will always be more precise than the analysis of the abstract model. Consequently the preliminary analysis has not been done for nothing. It follows that the idea can always be applied and, thanks to an abstract interpretation performed in parallel with the model-checking verification, should have a marginal cost only.
Similar restriction ideas apply to bisimulation equivalence checking [see e.g. Bouajjani et al. 1992; Fernandez 1993]. They seem indispensable to cope with infinite state systems, real-time systems [Halbwachs 1994] and hybrid systems [Halbwachs et al. 1994], in particular to take possible values of variables, messages, queues, and the like into account, which would be a significant step in the automated analysis of software.
Acknowledgments
We thank the anonymous AAS '97 referees for their comments.
Postprint
This is the accepted version of a paper presented at *The 27th IEEE International Conference on Software Maintenance*.
Citation for the original published paper:
N.B. When citing this work, cite the original published paper.
Permanent link to this version:
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-147102
Sahara: Guiding the Debugging of Failed Software Upgrades
Rekha Bachwani, Olivier Crameri†, Ricardo Bianchini, Dejan Kostić†, and Willy Zwaenepoel†
Rutgers University
{rbachwan,ricardob}@cs.rutgers.edu
†EPFL
{olivier.crameri,dejan.kostic,willy.zwaenepoel}@epfl.ch
Abstract—Today, debugging failed software upgrades is a long and tedious activity, as developers may have to consider large sections of code to locate the bug. We argue that failed upgrade debugging can be simplified by exploiting the characteristics of upgrade problems to prioritize the set of routines to consider. In particular, previous work has shown that differences between the computing environment in the developer’s and users’ sites cause most upgrade problems. Based on this observation, we design and implement Sahara, a system that identifies the aspects of the environment that are most likely the culprits of the misbehavior, finds the subset of routines that relate to those aspects, and selects an even smaller subset of routines to debug first. To achieve its goals, Sahara leverages feedback from a large number of users, machine learning, and static and dynamic source analyses.
We evaluate Sahara for three real upgrade problems with the OpenSSH suite, one synthetic problem with the SQLite database, and one synthetic problem with the uServer Web server. Our results show that the system produces accurate recommendations comprising only a small number of routines.
I. INTRODUCTION
Modern software systems are complex and comprise many interacting and dependent components. Frequent upgrades are required for some or all components to fix bugs, patch security vulnerabilities, add or remove features, and perform other critical tasks. Unfortunately, many of the upgrades either fail or produce unwanted behavior. A survey conducted by Crameri et al. [9] showed that 90% of system administrators perform upgrades at least once a month, and that 5–10% of the upgrades are problematic. Interestingly, they also found that the most common source of upgrade problems is the difference between the environment (i.e., version of operating system and libraries, configuration settings, environment variables, hardware, etc.) at the developer’s site and the users’ sites. Such problems are difficult (or perhaps impossible) to prevent because the developer cannot foresee, much less test her software for, every possible environment in which the software might be used.
When upgrades misbehave at some user sites, the developers receive bug reports and complaints. In some cases, the developers may also receive logs of failed executions and/or core dumps. Developers often undergo several exchanges with the users to gather all the pertinent information. Thereafter, the developers examine the information to locate the likely causes of the misbehavior. This process is long and tedious, as developers may have to consider large chunks of code to locate the root cause of the misbehavior.
In this paper, we propose Sahara, a system that simplifies the debugging of environment-related upgrade problems by pinpointing the subset of routines and variables that is most likely the source of misbehavior. Sahara’s design was motivated by two observations: (1) since the problem was caused by one or more aspects of the user environment, it is critical to identify these suspect aspects and their effects throughout the code; and (2) since the previous version of the software behaved properly, it is critical to identify the behavioral differences between the previous and upgraded versions.
Given these observations, the root cause of an upgrade problem is most likely to be in the code that is both (1) affected by the suspect aspects of the environment and (2) whose behavior has deviated after the upgrade. To isolate this code, Sahara combines information collected from many users of the software, machine learning techniques, static and dynamic source analyses. The machine learning and the static analysis run at the developer’s site, whereas the data collection and dynamic analysis run at the users’ sites (for those users who are willing to run Sahara). Sahara targets C applications written for Unix-like operating systems.
In more detail, Sahara applies feature selection [35] on the environment and upgrade success/failure information received from users to rank the aspects of the environment that are most likely to be the source of the misbehavior. Then, it uses def-use static analysis [1] to identify the set of variables whose values derive directly or indirectly from the suspect aspects. The routines in which these variables are used become the first set of potential culprits. At this point, Sahara deploys instrumented versions of the current and upgraded version of the code to the user sites that reported misbehaviors. It then runs the instrumented versions automatically (and with the same inputs) to collect information about all routine calls and returns. Using this information, it uses value spectra [36] to identify the set of routines that caused the behavior to deviate from one execution to the other at each misbehaving site. These sets of routines are also considered suspects. Finally, Sahara intersects the sets of suspect routines resulting from the static and dynamic analyses; those in the intersection should be debugged first.
To evaluate Sahara, we study three real upgrade problems with the OpenSSH suite, one synthetic problem in the SQLite database engine, and one synthetic problem with the uServer Web server. Our results demonstrate that Sahara produces recommendations that always include the routines responsible for the bugs. The exact number of recommended routines depends on the characteristics of the information received from users. In experiments where we varied these characteristics widely, Sahara recommends 2–21 suspect routines that should be debugged first. These numbers can be 20x smaller than the number of routines affected by the upgrades. Compared to static and dynamic analyses alone, Sahara reduces the numbers
of suspect routines by 1.4x–6x and 14x–40x, respectively. Given its accuracy and these large reductions, we expect that Sahara can significantly reduce debugging time in practice.
II. SAHARA: PRIORITIZING UPGRADE DEBUGGING
A. A Motivating Example
To make our exposition more concrete, let us look at a simple example in Fig 1. The example reads the value of the `SHELL` environment variable using a call to `getenv()` (line 18). It then checks whether the length of the string is smaller than or equal to 9 (line 4). Depending on the outcome of the comparison, a different output is produced (lines 21–24).
Let us assume that the upgrade simply changes the comparison in line 4 from “<” to “<=”. This upgrade will fail at user sites where the value of the `SHELL` environment variable is exactly 9 characters long. However, the program ran successfully at these sites before the upgrade. This upgrade failure is similar to the `ProxyCommand` bug [28] that we detail in Section III-A.
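Since Fig 1 itself is not reproduced here, the following C sketch is a hypothetical reconstruction consistent with the description above. The routine names (`checklength`, `secondfunction`) and variable names (`uname`, `env2`) come from the text; the exact statements, and the use of a driver routine in place of `main()`, are our own assumptions.

```c
#include <string.h>

char uname[64];   /* receives the value of the SHELL environment variable */
char env2[64];    /* later derives from uname, as described in the text */

/* Upgraded version: the "<" in the length check became "<=". */
int checklength(const char *s) {
    return strlen(s) <= 9;   /* pre-upgrade: strlen(s) < 9 */
}

/* A different output is produced depending on the comparison. */
const char *secondfunction(int short_name) {
    return short_name ? "short shell name" : "long shell name";
}

/* Driver standing in for main(): the call to getenv("SHELL") is modeled by
 * passing the shell name in, so the behavior can be exercised directly. */
const char *run_example(const char *shell) {
    strncpy(uname, shell, sizeof uname - 1);
    uname[sizeof uname - 1] = '\0';
    strcpy(env2, uname);               /* env2 becomes environment-related */
    return secondfunction(checklength(env2));
}
```

A shell name of exactly 9 characters (e.g., `/bin/bash`) takes the opposite branch after the upgrade, which is precisely the subset of user sites where the failure manifests.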
The failure has two interesting characteristics. First, the upgrade fails only at a subset of user sites, which may have been the reason the bug went undetected during development. Second, despite the fact that the two versions of the code are input-compatible, the execution behavior changes with the upgrade both in terms of the path executed and the output.
Given these characteristics, identifying the aspects of the environment that correlate with the failure is a necessary first step for efficiently diagnosing the failure. In this simple example, the name of the shell is the aspect of the environment that triggers the failure. It is also important to identify the variables and routines in the code that are directly or indirectly affected by the environment. Note that the name of the shell is initially assigned to the `uname` array; only later does variable `env2` become related to the environment. Thus, variables `uname` and `env2`, as well as routines `main` and `checklength` are suspect. However, identifying these suspects is not sufficient, because the program behaved correctly before the upgrade was applied in the same environment. We also need to determine how the upgraded version of the program has deviated in behavior from the current version. This analysis would then show that routine `checklength` and `secondfunction` behave differently in the two versions, meaning that they are also suspects. The root cause of the failure is most likely to be contained in the code that is affected by both the suspect environment and whose behavior has changed after the upgrade, i.e. routine `checklength`. This routine is exactly where the bug is.
B. Design and Implementation
Overview. Figure 2 illustrates the steps involved in Sahara. First, Sahara deploys the upgrade to any users that request it (step 1). As the software executes at each user’s site, Sahara collects information about the environment and inputs used (step 2). At the end of the execution, Sahara obscures and then transfers the collected environment information (the inputs are never transferred on the network) to the developer’s site, along with a success/failure flag provided by the user (step 3). (Obviously, some users may decide not to allow any sort of information to be collected or provided to Sahara.) The information about the environment includes the version of the operating system, the version of the libraries, the configuration settings, the name and version of the other software packages installed, and a description of the hardware. A failure flag may mean that (a) the upgrade could not be properly installed or executed, (b) the upgrade caused incorrect behavior or a crash, or (c) the upgrade caused another software to misbehave [9].
Now suppose that the upgrade misbehaved at one user site at least. With the environment and success/failure information at the developer’s site, Sahara runs a machine learning algorithm to determine the aspects of the environment that are most likely to have caused the misbehavior (step 4). Next, based on def-use static analysis, Sahara isolates the variables in the code that derive directly or indirectly from those aspects; the routines that use these variables are considered suspect (step 5).
Sahara then deploys instrumented versions of the current and upgraded code to the user sites that reported failures (step
6). At each of those sites, Sahara executes both versions with the inputs collected in step 2 and collects dynamic routine call/return information (step 7). Sahara then compares the logs from the two executions to determine the routines that exhibited different dynamic behavior (step 8). This step is done at the failed user sites to avoid transferring the potentially large execution logs back to the developer’s site. Sahara then transfers the list of routines that deviated at each failed user site back to the developer’s site (step 9); the routines on these lists are considered suspect as well.
Finally, Sahara intersects the suspects from the static and dynamic analyses (step 10). It reports the intersection to the developer as the routines to debug first. If the problem is not found in this set, other suspect routines should be considered.
Next, we detail the implementation of these steps.
**Upgrade deployment, tracing, and user feedback (steps 1–3).** Upgrade deployment in Sahara is trivial. The upgraded code is available via a Web interface and can be downloaded as a package/patch by any user that wants it.
Sahara uses the Mirage tracing infrastructure, which has been detailed in [3], [9]. Thus, next we only describe the most important aspects of it. The infrastructure identifies the “environmental resources” an application depends on and then fingerprints (i.e., derives a compact representation for) them. The following resources are considered as an application’s environment: a) all files accessed read-only (such as configuration files) by the application; b) all files of a certain type (such as libraries); c) all files in the package being upgraded. Furthermore, Sahara provides an API that allows the developer to include or exclude files or directories. In addition to the data accessed during application execution, Sahara collects information about the hardware and software installed.
Again as in Mirage, Sahara provides parsers to compute a concise representation (fingerprint) for each environmental resource. The parsers know how to extract relevant information from a file based on its type and hash its content at a specific granularity. For instance, the parsers for binary files generate fingerprints at a coarser granularity than the parsers for a configuration file. We use SHA-1 to compute fingerprints of the resources. In each fingerprint, the name of the resource serves as a key and the hash of its contents as the value.
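The fingerprinting step can be sketched as follows. The paper uses SHA-1 for content hashes; to keep this sketch free of external crypto dependencies, we substitute FNV-1a as a stand-in hash. The `struct fingerprint` layout is our own illustration of the key/value pairing described above, not Sahara's actual data structure.

```c
#include <stdint.h>
#include <string.h>

/* Stand-in content hash: the paper uses SHA-1; FNV-1a is used here only to
 * avoid an external crypto library in the sketch. */
uint64_t fnv1a(const unsigned char *data, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* A fingerprint maps a resource name (key) to a hash of its contents (value). */
struct fingerprint {
    char name[128];
    uint64_t hash;
};

void make_fingerprint(struct fingerprint *fp, const char *name,
                      const unsigned char *contents, size_t len) {
    strncpy(fp->name, name, sizeof fp->name - 1);
    fp->name[sizeof fp->name - 1] = '\0';
    fp->hash = fnv1a(contents, len);
}
```

Two user sites whose `sshd_config` files differ in even one byte thus produce different fingerprints for the same resource key, which is all the feature-selection step needs to know.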
For the users who choose to participate, Sahara sends the tracing infrastructure and the parsers to their sites. During the first several executions of the upgraded software (the number of executions can be defined by the developer), Sahara collects the environment resource information and produces the fingerprints. After each of these executions, Sahara also queries the user about whether the upgrade has succeeded or failed. We ask for this success/failure flag, because it may be difficult to determine failure in some cases. For example, a software misbehavior is considered a failure, even if it does not cause a crash or any other OS-visible event. In addition, the upgrade may cause another software to misbehave [9].
When the user provides a succeed/fail flag, Sahara sends this information, along with the environment resource fingerprints, back to the developer’s site. This data represents the profile of the corresponding user site. After the first several executions, Sahara turns its data collection off to minimize overheads. User profiles from all sites serve as the input to the feature selection step. Section III systematically studies the impact of user profiles with various characteristics.
**Feature selection (step 4).** Based on the information received from the user sites, this step selects environment resources (called features) with the strongest correlation to the observed upgrade failures. The fingerprints are never “unhashed” during feature selection (or after it); it is enough for Sahara to know how many different fingerprints there are for each feature.
Sahara uses the decision tree algorithm with feature ranking from the WEKA tool [www.cs.waikato.ac.nz/ml/weka/] for selection. The algorithm builds a decision tree by first selecting a feature to place at the root node, and creating a tree branch for each possible value of the feature. This splits up the dataset into subsets, one for each value of the feature. The choice of the root feature is based on Gain Ratio [30], a measure of a feature’s ability to create subsets with homogeneous classes. In Sahara, there are only two classes: success or failure. The Gain Ratio is higher for the features that create subsets with mostly success or mostly failure user profiles. For instance, in the example of Fig 1, the root feature would be the SHELL environment variable. The subsets that include SHELL strings of length different than 9 are successes, whereas those that have strings of exactly 9 characters are failures.
After selecting the root feature, the process is repeated recursively for each branch, using only those profiles that actually reach the branch. When all the profiles at a node have the same classification, the algorithm has completed that part of the tree. The output of the algorithm is a set of features, their Gain Ratios, and their ranks.
To validate the feature selection, Sahara uses 10-fold cross-validation [16] to compute the standard deviation of the ranks of each feature. When the standard deviations of the top-ranked features are high, Sahara warns the developer that its results are not to be trusted, i.e. the reason for the failures is unlikely to be the environment. When this condition is not met, Sahara considers all the features that have Gain Ratios within 30% of the highest ranked feature as Suspect Environment Resources (SERs). These SERs serve as input to the static analysis step. We assess the impact of the accuracy of the feature selection step in Section III.
**Static analysis and suspect routines (step 5).** Sahara analyzes the upgraded software using the C Intermediate Language (CIL) [24]. Specifically, it implements two CIL modules, the call-graph module and the def-use module. As the name suggests, the call-graph module computes a whole-program static call graph by traversing all the source files, a routine at a time. Every node in the call graph is a routine, and its children nodes are the routines it calls. The root of the call graph is always the main() routine.
The def-use module creates def-use chains [1] for each SER. A def-use chain links all the variables that derive directly or indirectly from one SER. Each array is handled as a single variable, whereas struct and union fields are handled separately. Figure 3 shows the def-use chain (thin arrows) for our example program.
To find suspect routines, Sahara traverses all the routines in the order they appear in the call graph, starting with the root. During the course of the traversal, Sahara maintains three lists: (1) a list of global suspect variables (SuspectVars); (2) a list of per-routine suspect variables (LsuspectVars); and (3) a list of routines that are suspect (SuspectRoutines). SuspectVars is initialized with the variables corresponding to SERs.
Sahara analyzes each routine statement-by-statement, starting with the root routine. For every variable access, it checks whether the variable is a suspect or depends on any suspect, either directly or indirectly. If so, the accessed variable becomes a suspect. If it is a local variable, it is added to LsuspectVars of the routine where the access appears; otherwise, it is added to SuspectVars. The routine containing the access is added to SuspectRoutines. In addition, if a routine calls another with a suspect variable as a parameter, the caller is added to SuspectRoutines and the corresponding formal parameter is added to the LsuspectVars of the callee. The callee becomes a suspect if the suspect parameter is used in the function, and not otherwise. Furthermore, a routine becomes suspect if the return value of any of its callees is suspect, and it is used in the routine. Similarly, a routine becomes suspect if any parameter passed by reference to one of its callees becomes suspect, and it is used in the routine. This step outputs SuspectRoutines (SRs), after the entire graph has been traversed.
This step produces a set of routines that are highly correlated with the failures. For the example in Fig 1, main and checklength are the two suspect routines. The block arrows in Figure 3 show why these routines were included as suspects.
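The traversal described above is, at its core, a transitive closure over def-use edges starting from the SER variables. A minimal worklist sketch follows; the variable names are taken from the example, but the dependency-matrix encoding and fixed sizes are our own simplification, not Sahara's actual data structures.

```c
/* dep[i][j] != 0 means variable j is defined (directly) from variable i.
 * Variables reachable from a SER variable become suspects; any routine that
 * uses a suspect variable would then join SuspectRoutines. */
#define NVARS 4
enum { UNAME, ENV2, TMP, OTHER };

void propagate(int dep[NVARS][NVARS], int suspect[NVARS]) {
    int changed = 1;
    while (changed) {                   /* iterate to a fixed point */
        changed = 0;
        for (int i = 0; i < NVARS; i++)
            if (suspect[i])
                for (int j = 0; j < NVARS; j++)
                    if (dep[i][j] && !suspect[j]) {
                        suspect[j] = 1;
                        changed = 1;
                    }
    }
}
```

Seeding `uname` as the SER variable and adding the edges `uname → env2 → tmp` marks `env2` and `tmp` suspect while leaving an unrelated variable untouched, mirroring the thin arrows of Figure 3.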
Creating and distributing instrumented versions (step 6).
After the SRs are identified, Sahara generates the instrumented versions of the current and upgraded versions of the software.
Sahara uses CIL to automatically instrument the application. The instrumentation is introduced by two new CIL modules, instrument-calls and ptr-analysis. The instrument-calls module inserts calls to our C runtime library to log routine signatures for all the routines executed in a particular run. A routine’s signature consists of the number, name, and values of its parameters, its return value, and any global state that is accessed by the routine. The global state comprises the number, name, and values of all the global variables accessed by the routine. This module works well for logging parameters of basic data types. However, in order to correctly log pointer variables and variables of complex data types, we have implemented the ptr-analysis module. This module inserts additional calls to our C library to track all heap allocations and deallocations.
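As an illustration of what the inserted logging calls record, the sketch below renders one routine signature as a log line. The function name `format_signature` and the integer-only arguments are our own simplifications; the real runtime library also records accessed globals and handles pointers and complex types via the ptr-analysis module.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the calls inserted by the instrument-calls CIL
 * module: render one routine signature (name, argument values, return
 * value) into buf as a single log line. */
int format_signature(char *buf, size_t n, const char *routine,
                     int nargs, const int *args, int ret) {
    size_t off = (size_t)snprintf(buf, n, "%s(", routine);
    for (int i = 0; i < nargs && off < n; i++)
        off += (size_t)snprintf(buf + off, n - off,
                                i ? ",%d" : "%d", args[i]);
    if (off < n)
        off += (size_t)snprintf(buf + off, n - off, ")=%d", ret);
    return (int)off;
}
```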
Re-execution, value spectra analysis, and deviated routines (steps 7-9).
As we do not want to transfer inputs or large logs across the network, these steps are performed at the failed users’ sites themselves. To do so, Sahara first deploys infrastructure to those sites that is responsible for re-execution and dynamic analysis. It then transfers the instrumented binaries of the current and upgraded versions.
Sahara leverages Mirage’s re-execution infrastructure, which has been detailed in [9]. This infrastructure executes the instrumented binaries of both versions at the failed user sites, feeding them the same inputs that had caused the upgrade to fail. These inputs were collected in the logs recorded during step 2. To allow for some level of non-determinism during re-execution, Sahara maps the recorded inputs to the appropriate input operations (identified by their system calls and thread ids), even if they are executed in a different order in the log.
As the instrumented versions execute, their dynamic routine call/return information is collected. Fig 4 shows the log for the two versions. Since the logs of the two versions are mostly the same (except for lines 8 and 15), only the lines that differ between the two versions are duplicated.
With these logs, Sahara determines the set of routines, called DeviatedRoutines (DRs), whose behavior has deviated after the upgrade. Specifically, we implement fDiff, a tool that converts each log into a sequence of routine signatures and uses the longest common subsequence algorithm to compute the difference between the sequences. fDiff is similar to Unix’s diff, but produces more concise output as it understands the call/return structure of our logs. A routine has deviated if any of the following differs between the two versions: (1) its number of arguments; (2) the value of any of its arguments; (3) its return value; (4) the number of global variables it accesses; or (5) the value of one or more global variables it accesses. This notion of deviation is similar to that of value spectra [36]. Wilde and Scully [34] also compare execution logs.
In Fig 4, two routines have deviated: checklength has deviated in its return value (line 8), whereas secondfunction has deviated in its argument (line 13).
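The five deviation criteria above can be written down directly. The `struct signature` layout below is a simplified stand-in (integer-valued arguments and globals only); the criterion numbers in the comments refer to the list in the text.

```c
#include <string.h>

#define MAXV 8
struct signature {
    char name[32];
    int nargs,    args[MAXV];
    int ret;
    int nglobals, globals[MAXV];
};

/* A routine has deviated if any of the five criteria from the text differs
 * between the signatures recorded for the two versions. */
int deviated(const struct signature *a, const struct signature *b) {
    if (a->nargs != b->nargs) return 1;                              /* (1) */
    if (memcmp(a->args, b->args, a->nargs * sizeof(int))) return 1;  /* (2) */
    if (a->ret != b->ret) return 1;                                  /* (3) */
    if (a->nglobals != b->nglobals) return 1;                        /* (4) */
    if (memcmp(a->globals, b->globals,
               a->nglobals * sizeof(int))) return 1;                 /* (5) */
    return 0;
}
```

Under this check, `checklength` from the example deviates as soon as its return value flips between the two versions, even though its arguments are identical.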
Sahara transfers the DRs list to the developer’s site.
**Intersection and list of primary suspects (step 10).** Finally, Sahara computes the union of the DRs from the failed user sites. It then intersects this larger set with the SRs, thereby eliminating benign deviations that have nothing to do with the failure. The intersection forms the set of *Prime Suspect Routines (PSRs)*, i.e. the routines most likely to contain the root cause of the failure. For the example, `checklength` is the prime suspect, despite the fact that all 3 routines have some relationship to the users’ environment. The root cause is indeed `checklength`.
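The final set operation is straightforward; a sketch over routine ids (the array encoding is ours, for illustration):

```c
/* Intersect the static suspects (SRs) with the union of deviated routines
 * (DRs) from all failed sites; the result is the prime suspects (PSRs).
 * Routines are identified by integer ids here. Returns the PSR count. */
int intersect(const int *srs, int nsrs,
              const int *drs, int ndrs, int *out) {
    int n = 0;
    for (int i = 0; i < nsrs; i++)
        for (int j = 0; j < ndrs; j++)
            if (srs[i] == drs[j]) { out[n++] = srs[i]; break; }
    return n;
}
```

For the running example, with SRs = {`main`, `checklength`} and DRs = {`checklength`, `secondfunction`}, the intersection contains only `checklength`, the actual location of the bug.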
**C. Discussion**
**Sahara and other systems.** Sahara simplifies the debugging of upgrades that fail due to the user environment. As such, Sahara is less comprehensive than systems that seek to identify more classes of software bugs (e.g., [32]). However, Sahara takes advantage of its narrower scope to guide failed upgrade debugging more directly towards environment-related bugs (which are the most common in practice [9]).
In essence, we see Sahara as complementary to other systems. In fact, an example combination of systems is the following. Steps 1–4 of Sahara would be executed first. If the user environment is likely the culprit (as determined by the output of step 4), the other steps are executed. Otherwise, another system is activated.
**Dealing with multiple bugs.** The feature selection algorithm is the only part of Sahara that could be negatively affected by an upgrade with multiple bugs. The other components of Sahara are unaffected because (1) information about each execution (the resource fingerprints and a success/failure flag) represents at most one bug, (2) static analysis is independent of the number of bugs, (3) each dynamic analysis finds deviations associated with a single bug, and (4) the union-intersection step is independent of the number of bugs.
Sahara is effective when faced with multiple bugs, even when feature selection does not produce the ideal results. To understand this, consider the two possible scenarios: (1) all bugs are environment-related; and (2) one or more bugs are unrelated to the environment.
When all bugs are environment-related and involve the same environment resources, feature selection works correctly and Sahara easily produces the prime suspects for all bugs. If different bugs relate to different sets of environment resources, feature selection could misbehave. In particular, if there is not enough information about all bugs, feature selection could misrank the environment resources that are relevant to the less frequent bugs to the point that they do not become SERs. This would cause the remaining steps to eventually produce the prime suspects for the more frequent bugs only. After those bugs are removed, Sahara can be run again to tackle the less frequent bugs. This second time, feature selection would rank the environment resources of the remaining bugs more highly. Other systems rely on similar multi-round approaches for dealing with multiple bugs, e.g. [12].
When one or more bugs are not related to the environment, feature selection could again misbehave if there is not enough information about the bugs that are environment-related. This scenario would most likely cause feature selection to low-rank all environment resources. In this case, the best approach is to resort to a different system, as discussed above. In contrast, if there is enough information about the environment-related bugs, feature selection would select the proper SERs. Despite this good behavior, the dynamic analysis at some failed sites would identify DRs corresponding to bugs that are not related to the environment. However, those routines would not intersect with those from the static analysis, leading to the proper prime suspect results.
**Limitations of Sahara’s current implementation.** Sahara currently implements simple versions of its components. As a proof-of-concept, the goal of this initial implementation is simply to demonstrate how to combine different techniques in a useful and novel way. However, as we discuss below, more sophisticated components can easily replace the existing ones.
Sahara limits the user information transferred to the developer’s site to the resource fingerprints. In our current implementation, the fingerprints are transferred in hashed form (SHA-1), which does not provide foolproof privacy guarantees. However, Sahara can easily use more sophisticated schemes for these transfers. Regardless of the privacy scheme, the bandwidth required by these transfers (and that of the DRs) should be negligible. Sahara requires more bandwidth for transferring the re-execution and value spectra infrastructures, but only for failed user sites.
Sahara employs static and dynamic analyses to narrow the set of routines that are likely to contain the root cause of the failure. However, under certain conditions, these analyses may be unable to do so. In the worst case, all routines may be affected by the SERs, making static analysis ineffective. Similarly, all routines could be found to deviate from their original behaviors. Fortunately, these worst-case scenarios are extremely unlikely in a single upgrade.
Execution replay at the failed sites is currently performed without virtualization. Using virtual machines would enable us to automatically handle applications that have side-effects, but at the cost of becoming more intrusive and transferring more data to the failed sites. Sahara can be extended to use replay virtualization. On the positive side, Sahara performs a single replay at a failed site, which is significantly more efficient than the many replays of techniques such as delta debugging [39].
Our current approach for handling replay non-determinism is very simple: Sahara tries to match the recorded inputs to their original system calls when re-executing each version of the application. Internal non-determinism (e.g., due to random numbers or race conditions) is currently not handled and may mislead the dynamic analysis if it changes: the number or value of the arguments passed to any routines, the number or value of the global variables they touch, or their return values. Sahara can be combined with existing deterministic replay systems to eliminate these problems.
Finally, Sahara guides the debugging process by pinpointing a set of routines to debug first. Pinpointing a single routine or a single line causing the failure may not even be possible, since the root cause of the failure may span multiple lines and routines. Moreover, the systems that attempt such pinpointing
III. EVALUATION
In this section, we describe our methodology and evaluate Sahara by analyzing three real bugs in OpenSSH, a synthetic bug in SQLite, and a synthetic bug in uServer.
We chose OpenSSH because it is widely deployed in diverse user environments. Its upgrades are fairly frequent, typically once every 3–6 months [26]. OpenSSH comprises many components: (1) sshd, the daemon that listens for connections coming from clients; (2) ssh, the client that logs and executes commands on a remote machine; (3) scp, the program to copy files between hosts; (4) sftp, an interactive file transfer program atop the SSH transport; and (5) utilities such as ssh-add, ssh-agent, ssh-keysign, ssh-keyscan, ssh-keygen, and sftp-server. In all, OpenSSH has around 400 distinct files and 50–70K lines of code (LOC).
SQLite is the most widely deployed SQL database [31]. It implements a serverless, transactional SQL engine. SQLite has 67K LOC spread across 4 files. uServer [7] is an open-source, event-driven Web server sometimes used for performance studies. It has 37K LOC spread across 161 files.
A. Methodology
OpenSSH: Port forwarding bug. Port forwarding is commonly used to create a SSH tunnel. To setup a tunnel, one forwards a specified local port to a port on the remote machine. SSH tunnels provide a means to bypass firewalls, so long as the site allows outgoing connections. The bug [5] was a regression bug in OpenSSH version 4.7. When using SSH port forwarding for large transfers, the transfer aborts. Some users observed the following buffer error:
```
buffer_get_string: bad string length 557056
buffer_get_string: buffer error
```
These transfers executed successfully until version 4.6, but the behavior changed after upgrading to version 4.7. The failure was observed at a small subset of user sites. The abort was not reproducible at the developer’s site, so the developer needed volunteer users to reproduce the bug and test its fix.
A correct and complete fix was submitted and tested by the users on the second attempt, almost three months after the bug was reported [5].
The failure was caused by the following issues: (a) the users had enabled port forwarding in the ssh configuration file; (b) change in default window size from 128KB to 2MB in the ssh client code in version 4.7; (c) port forwarding code advertising the default window size as the default packet size; and (d) the maximum packet size set to 256KB in sshd. Given these characteristics, when users issued large transfers through the ssh tunnel, some of the packets had size larger than the daemon’s maximum, resulting in the buffer error after the upgrade. The port forwarding code using the default window size as the default packet size was not an issue before the upgrade, as the size was always below the maximum.
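The arithmetic behind the failure can be stated directly. Using the sizes from the bug description (the function name below is illustrative, not OpenSSH code), the advertised packet size exceeds the daemon's maximum only after the upgrade:

```python
# Constants from the bug description; the check itself is illustrative.
KB = 1024
SSHD_MAX_PACKET = 256 * KB  # maximum packet size enforced by sshd

def advertised_packet_size(default_window):
    # The buggy port-forwarding code advertised the default *window* size
    # as the default *packet* size.
    return default_window

assert advertised_packet_size(128 * KB) <= SSHD_MAX_PACKET    # v4.6: OK
assert advertised_packet_size(2048 * KB) > SSHD_MAX_PACKET    # v4.7: buffer error
```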
**OpenSSH: X11 forwarding bug.** This bug [4] manifested when users upgraded to OpenSSH version 4.2p1 from 4.1p1 and tried to start X11 forwarding. The following error was observed at sites that had X11 forwarding support enabled and where the command was executed in the background:
```
xterm Xt error: Can't open display: localhost:10.0
```
In version 4.2p1, the developers modified the X11 forwarding code to fix some X11 channel leaks, including destroying X11 channels whose session had ended. As a result, when the X11 forwarding process was started in the background, the child process (and the channel) that started it would exit immediately. It took the developers more than two weeks to fix this bug [4].
OpenSSH: ProxyCommand bug. The ProxyCommand option specifies the command that will be used by the SSH client to connect to the remote server. The bug [28] was a regression in OpenSSH version 4.9; ssh with ProxyCommand would fail for some users with a "No such file" error.
Until version 4.7, ProxyCommand would use /bin/sh to execute the command. However, in version 4.9, the code changed to use the $SHELL environment variable, causing the command to fail at user sites where $SHELL was set to an empty string. The developers fixed this bug in one week, after one user had already done a large amount of debugging [28].
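The fix pattern implied by this bug is a defensive fallback. A hypothetical sketch in Python (the actual OpenSSH fix is in C, and `proxy_shell` is our own name for the logic):

```python
# Illustrative fallback: use $SHELL only when it is set and non-empty,
# otherwise default to /bin/sh. The buggy v4.9 code used $SHELL directly
# and failed at sites where it was set to an empty string.
def proxy_shell(env):
    shell = env.get("SHELL", "")
    return shell if shell else "/bin/sh"

assert proxy_shell({"SHELL": "/bin/bash"}) == "/bin/bash"
assert proxy_shell({"SHELL": ""}) == "/bin/sh"   # empty $SHELL: where the bug bit
assert proxy_shell({}) == "/bin/sh"              # $SHELL unset
```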
SQLite and uServer bugs. To demonstrate Sahara’s generality, we synthetically created one buggy upgrade for SQLite version 3.6.14.2 and one for uServer version 0.6.0. Note that these two bugs are trivial and could be identified by simpler tools than Sahara. However, our goal is simply to demonstrate that Sahara works without modification for a variety of applications.
Before the upgrade of SQLite, the option echo on caused its shell to output each command before executing it. After our synthetic upgrade, it does not output the command when executing in interactive mode. The bug we inject into the upgrade of uServer is not environment-related. The bug is a typo in the function that parses user input causing dropped requests and occasional crashes.
We do not present complete results for the ProxyCommand, SQLite, or uServer bugs due to space limitations. However, we do include a summary of their results in the next subsection.
Upgrade deployment. To simulate a real-world deployment of a software upgrade to users with varied environment settings, we collected environment data from 87 machines at our site across two clusters. The settings of the machines within a cluster are similar, but differ across clusters.
We used the methodology described in Section II-B to identify the environmental resources in OpenSSH, SQLite, and uServer. Sahara uses the following parsers to parse and fingerprint the environmental resources: CHUNKS and CHUNKS2 chunk and fingerprint the binary files, such as the kernel symbols; KEYVAL parses and chunks any file in the key-delimiter-value format, such as shell environment or cpu data; LIBS chunks and fingerprints all the libraries; LINES parses and fingerprints a file one line at a time, such as the file containing the list of kernel modules; and SSH and SSHD are application-specific parsers to parse and fingerprint the ssh_config and sshd_config configuration files, respectively.
It took us only 8 person-hours to implement these parsers. SQLite and uServer did not require any application-specific parsers. The environmental resources of a single machine, parsed/chunked and fingerprinted, along with the success/failure flag constitute a single user profile.
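To illustrate the KEYVAL-style parsing and fingerprinting (the function name and hashing choice below are our assumptions, not Sahara's exact format), each key's value can be hashed so that two user profiles can be compared feature by feature:

```python
# Sketch of a KEYVAL-style parser: split a key-delimiter-value file into
# per-key features and fingerprint each value, so profiles from different
# machines can be diffed feature by feature.
import hashlib

def keyval_features(text, delimiter="="):
    features = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or delimiter not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition(delimiter)
        digest = hashlib.sha1(value.strip().encode()).hexdigest()
        features[key.strip()] = digest
    return features

a = keyval_features("Tunnel=yes\nBatchMode=no\n")
b = keyval_features("Tunnel=no\nBatchMode=no\n")
differing = [k for k in a if a[k] != b.get(k)]
assert differing == ["Tunnel"]
```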
By default, our experiments assume that 20 profiles include environment settings that can activate a bug, whereas 67 of them do not. We study the impact of this parameter below.
**User site environments.** To evaluate Sahara’s behavior in the face of the uncertainties that may occur in practice, we perform six types of experiments: random perfect (rand_p), two random imperfect (rand_i60 and rand_i20), real configuration perfect (real_p), and two real configuration imperfect (real_i60 and real_i20). In the rand_p experiment, the values of all the environment resources related to the application are chosen at random, except for the resources that relate directly to the bug. Moreover, the 20 profiles with environment settings that can activate the bug are classified as failed profiles, whereas the other 67 are classified as successful ones. As a result, there is 100% correlation between those resources and the failure. This is the best case for feature selection in Sahara, as it finds the minimum set of SERs.
In the two rand_i cases, the environment settings are the same as in the rand_p case. However, not all profiles with environment settings that cause the failure are labeled as failures. In particular, only 60% of these profiles are labeled failures in the rand_i60 case, and only 20% in the rand_i20 case. These imperfect experiments mimic the situation where some users simply have not activated the bug yet, possibly because they have not exercised the part of the code that uses the problematic settings. These scenarios may lead feature selection to pick more SERs than in the rand_p case.
In the three types of experiments above, the application-related environment includes random values. For more realistic scenarios, we downloaded eight different complete OpenSSH configuration files from the Web. For each of the bugs, we modify three of these files to include the settings that activate the bug. One of these eight configuration files (three with problematic settings and five with only good settings) is assigned to each of the 87 user profiles randomly, but in the same proportion as before: 20 users should get problematic settings and 67 should not. In the real_p case, all the 20 profiles with problematic settings are labeled as failures, whereas the 67 others are labeled as successful. In the real_i60 and real_i20 experiments, only 60% and 20% of the profiles with these settings are labeled as failures, respectively. The real configurations are likely to lead to more SERs than the random ones. We do not study real configurations for SQLite and uServer because we inject synthetic bugs into them.
In all of our experiments, we consider the features ranked within 30% of the highest ranked feature as suspects. In addition, we use inputs that we know will activate the bugs.
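As a stand-in for Sahara's feature selection (the paper does not spell out its ranking algorithm in this section, so the scoring below is our own simplification), one can score each feature by how well its value distribution separates failed from successful profiles, then keep every feature scoring within 30% of the top:

```python
# Simplified feature ranking: score each feature by the total-variation
# distance between its value distribution in failed vs. successful
# profiles (1.0 = perfectly separating), then keep features within 30%
# of the top score, mirroring the paper's suspect threshold.
from collections import Counter

def tv_distance(values_fail, values_ok):
    cf, co = Counter(values_fail), Counter(values_ok)
    nf, no = sum(cf.values()), sum(co.values())
    keys = set(cf) | set(co)
    return 0.5 * sum(abs(cf[k] / nf - co[k] / no) for k in keys)

def select_suspects(profiles, labels, slack=0.30):
    fail = [p for p, l in zip(profiles, labels) if l == "fail"]
    ok = [p for p, l in zip(profiles, labels) if l == "ok"]
    scores = {f: tv_distance([p[f] for p in fail], [p[f] for p in ok])
              for f in profiles[0]}
    top = max(scores.values())
    return sorted(f for f, s in scores.items() if s >= (1 - slack) * top)

profiles = [{"Tunnel": "yes", "BatchMode": "no"},
            {"Tunnel": "yes", "BatchMode": "yes"},
            {"Tunnel": "no",  "BatchMode": "no"},
            {"Tunnel": "no",  "BatchMode": "yes"}]
labels = ["fail", "fail", "ok", "ok"]
assert select_suspects(profiles, labels) == ["Tunnel"]
```

With imperfect labels (some failure-inducing profiles marked "ok"), the Tunnel score drops below 1.0 and weaker features can clear the 30% slack, which is exactly the SER inflation the experiments measure.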
**B. Results**
**OpenSSH: Port forwarding bug.** Recall that this bug was introduced in the ssh code by version 4.7. This version has 58K LOC and 1529 routines (729 routines in ssh). The diff between versions 4.6 and 4.7 comprises approximately 400 LOC and 65 routines. Sahara identified 101 environmental resources, including the parameters in the configuration files, the operating system and library dependencies, hardware data, and other relevant files. Many of these resources, such as library files, are split into smaller chunks; for others, such as configuration files, each parameter is considered a separate feature. Overall, there are 325 features, forming the input to the feature selection step.
Table I shows the results for each of the analyses in Sahara and all techniques combined for every experiment. The feature selection step results in merely 1 feature chosen as suspect in the rand_p, rand_i60, and rand_i20 cases. In these experiments, the environment resource that is actually determinant in the failures, configuration parameter Tunnel, was the only suspect because the other environmental resources were assigned random values in all user profiles. This resulted in a very high correlation between the failure and this resource, even in the rand_i (imperfect) cases. Tunnel corresponds to 4 suspect variables in ssh.
In contrast, in the real_p, real_i60 and real_i20 experiments, 3 features are selected: configuration parameters Tunnel, BatchMode, and RSAAuthentication. Features BatchMode and RSAAuthentication have 3 possible values: yes, no, or missing. In the real configurations we collected, it so happened that RSAAuthentication was set to yes, and BatchMode to no in two of the three failed profiles, causing them to be highly correlated with the failure. Recall that we did not assign these values; we retrieved the configurations from the Web and changed only the setting of the Tunnel parameter. These three parameters correspond to 8 suspect variables in ssh.
The static analysis results in 12 suspect routines in the random cases, and 22 in the real cases. The 12 routines comprise those that (1) read the configuration file and initialize the environment of the ssh client; (2) create, enable, or disable a tunnel; (3) place the tunnel data into a buffer or a packet; and (4) enable the port forwarding over this tunnel and create a channel for it. Routine channel_open from the latter group contains the root cause of this failure.
In the real cases, the same 12 routines are suspect, in addition to those affected by RSAAuthentication. BatchMode is used only during the initialization in ssh, so it does not produce other suspects.
### Table I

| Bug  | Experiment | diff | SERs | SRs | DRs | PSRs |
|------|------------|------|------|-----|-----|------|
| Port | rand_p     | 65   | 1    | 12  | 124 | 6    |
| Port | rand_i60   | 65   | 1    | 12  | 124 | 6    |
| Port | rand_i20   | 65   | 1    | 12  | 124 | 6    |
| Port | real_p     | 65   | 3    | 22  | 124 | 7    |
| Port | real_i60   | 65   | 3    | 22  | 124 | 7    |
| Port | real_i20   | 65   | 3    | 22  | 124 | 7    |
| X11  | rand_p     | 137  | 1    | 18  | 157 | 6    |
| X11  | rand_i60   | 137  | 1    | 18  | 157 | 6    |
| X11  | rand_i20   | 137  | 1    | 18  | 157 | 6    |
| X11  | real_p     | 137  | 3    | 22  | 157 | 7    |
| X11  | real_i60   | 137  | 3    | 20  | 157 | 7    |
| X11  | real_i20   | 137  | 3    | 20  | 157 | 7    |
The dynamic analysis identifies 124 routines whose behavior has deviated when going from version 4.6 to 4.7. Note that the number of deviations is higher than the number of routines that actually changed. The reason is that the command succeeds before the upgrade and many more routines are invoked, as compared to after the upgrade when the command fails. In our fDiff implementation, the routines that were not called after the upgrade are considered deviations.
The intersection of SRs and DRs is only 6 routines in the random cases and 7 routines in the real cases. In the random cases, the four routines pertaining to reading the configuration file and setting up the environment, and two routines pertaining to enabling or disabling the tunnel, were pruned out after intersection; their behavior did not change after the upgrade. In the real perfect case, confirm was the additional routine identified as primary suspect. The 6 or 7 primary suspects reported by Sahara include the actual culprit (routine channel_new).
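The pruning step described above reduces to a set intersection. A minimal sketch with illustrative routine names (only channel_new and channel_open come from the text):

```python
# Sahara's final pruning: primary suspect routines (PSRs) are the static
# suspects (SRs) whose dynamic behavior also deviated across the upgrade
# (DRs). Routines suspect under only one analysis are pruned out.
static_suspects = {"read_config", "tunnel_enable", "channel_new", "channel_open"}
deviated = {"channel_new", "channel_open", "packet_send", "buffer_get_string"}

primary_suspects = sorted(static_suspects & deviated)
assert primary_suspects == ["channel_new", "channel_open"]
```

This is why routines such as the configuration readers disappear from the final list: they are statically suspect but behave identically in both runs.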
From the top six rows in Table I, we can see that the number of primary suspects output by Sahara is 2x–3x lower than that by static analysis, 17x–20x lower than that by dynamic analysis, and 9x–10x lower than the number of routines that were modified in the upgrade. Furthermore, we can see that Sahara is resilient to users that do not report their upgrades to have failed despite having problematic settings for the environment resources that cause the failure.
**OpenSSH: X11 forwarding bug.** Recall that the X11 forwarding bug affected the sshd program of OpenSSH version 4.2. This version has 52K LOC and 1439 routines (856 routines in sshd). The diff between versions 4.1 and 4.2 is approximately 900 LOC and 137 routines. Sahara identified 123 environmental resources, resulting in 354 features.
The bottom half of Table I presents the results. The feature selection step again results in 1 feature chosen as suspect in the rand_p, rand_i60, and rand_i20 cases. This feature is exactly the environment resource that is directly related to the bug: configuration parameter X11Forwarding. It corresponds to 3 variables in the sshd code.
In the real_p experiment, Sahara selects 3 features: configuration parameters X11Forwarding, AuthorizedKeysFile, and ChallengeResponseAuthentication. In the real_i60 and real_i20 cases, Sahara also selects 3 features: configuration parameters X11Forwarding, AuthorizedKeysFile, and PidFile. AuthorizedKeysFile and PidFile were assigned the default value in two out of the three failed real user profiles, whereas ChallengeResponseAuthentication was set to no value in two of them. These 4 features correspond to 7 actual variables in sshd.
The static analysis results in 18 suspect routines in the rand_p and rand_i cases, 21 in real_p, and 20 in the real_i cases. The 18 routines comprise those that: (1) read the configuration file and initialize the environment of sshd; (2) authenticate the incoming client connection with the options specified and set up the connection; (3) start a packet for X11 forwarding; and (4) set up X11 forwarding, create the channel, process X11 requests, and do the cleanup. Routine session_setup_x11fwd from the latter group is the culprit.
In the real configuration cases, all the 18 routines mentioned above are suspect, in addition to those affected by AuthorizedKeysFile and ChallengeResponseAuthentication. PidFile did not result in additional suspect routines, because it is used once in the initialization to store the pid of sshd, and never again. As a result, the real_p case has 1 more routine reported as suspect than the two real_i cases.
The dynamic analysis identifies 157 routines whose behavior has deviated when going from version 4.1 to 4.2. Again, the number of deviations is higher than the number of modified routines, because the upgraded code fails much earlier than the original one.
The intersection of the two analyses results in only 6 routines in the random case, and 7 in the real configuration cases. 3 of the 6 (or 7) primary suspect routines are key to understanding the failure. However, the single modification in the upgrade that directly causes the failure is in the session_setup_x11fwd routine.
From these results, we can see that the number of primary suspects found by Sahara is at least 3x lower than when using static analysis alone, at least 20x lower than when using dynamic analysis alone, and 15x lower than the number of routines that were actually modified. Again, these results illustrate Sahara’s ability to focus the debugging of failed upgrades on a small number of routines, even when many users do not experience failures despite having environment resources that could trigger bugs in the upgrade.
**Impact of number of profiles with failure-inducing settings.** So far, we have studied the impact of imperfections in the categorization of success/failure of the upgrades on the behavior of Sahara. Another key factor for the effectiveness of feature selection is the percentage of user profiles that actually include the environment resource settings that cause the upgrade failures. On one hand, the lower this percentage, the less information we have about the failures and, thus, the worse the feature selection results should be. On the other hand, lowering this percentage reduces noise (i.e., supporting evidence for resources that are not related to the failures) in the dataset and may lead to better selection results. To confirm these observations, we performed some experiments in which we varied the number of such profiles. In particular, we considered cases in which 30 or 10 profiles (out of 87) had the failure-inducing settings. Recall that our default results above assumed 20 such profiles.
Table II presents the “perfect” results from these experiments. The default results (rand_p and real_p) and the dynamic analysis results are included for clarity. As expected, the number of SERs (as well as suspect routines and primary suspects) tends to increase when we lower the number of profiles with failure-inducing settings. Interestingly, the real configuration results for the X11 forwarding bug show that lowering noise (going from real_p to real_10) can indeed improve results as well.
**Impact of feature selection accuracy.** Our longer technical report [3] also includes a study of the impact of feature selection accuracy on Sahara. In short, these results illustrate the behavior we expected: the less accurate feature selection is, the more primary suspects Sahara finds. Defining a few more SERs than necessary does not increase the number of primary suspects excessively (roughly by 2x at most, compared to our default results). However, adding too many unnecessary SERs can increase the number of PSRs by 6x–7x.
**OpenSSH: ProxyCommand bug.** This bug affected ssh in version 4.9, which comprises 58K LOC and 1535 routines (712 routines in ssh). The upgrade to this version modified 122 routines. We performed the same 10 experiments with this upgrade as above. Depending on the type of experiment, feature selection produces 2–5 SERs and static analysis produces 10–29 suspect routines. Dynamic analysis produces 284 deviated routines. In contrast, Sahara outputs 7 or 11 PSRs in all but one experiment (real_10, for which it recommends 21 routines). Overall, Sahara improves on static analysis by 1.4x and on dynamic analysis by 14x–40x for this bug.
**SQLite bug.** We injected this bug in SQLite version 3.6.14.2, which comprises 67K LOC and 1338 routines. The upgrade modified two routines. We ran only the random family of experiments, since this was not a real upgrade bug. In these experiments, feature selection identified 2–3 SERs.
**uServer bug.** We injected this bug in uServer version 0.6.0, which comprises 37K LOC and 404 routines. The upgrade modified 10 routines. Again, we ran only the random family of experiments, since this was not a real upgrade bug. The experiments stopped at the feature selection step, since the ranks of the top-ranked features consistently exhibit high standard deviations. Thus, feature selection properly flags this bug as unrelated to the environment.
**Summary.** The Sahara results for the five bugs and the different imperfections we studied suggest that our system may significantly reduce the time and effort required to diagnose the root cause of upgrade failures.
IV. RELATED WORK
**A. Upgrade Deployment and Testing**
A few studies [9], [21], [22] have proposed automated upgrade deployment and testing techniques. McCamant and Ernst [21], [22] automatically identify incompatibilities when upgrading a component in a multi-component system. However, they did not attempt to isolate the root cause of the incompatibilities. Similarly, Crameri *et al.* [9] did not seek to determine the root cause of upgrade failures.
**B. Automated Debugging**
**Troubleshooting misconfigurations.** PeerPressure [33], Snitch [23], and ConfAid [2] seek to identify the root cause of software misconfigurations. These systems assume that the software is correct, but was misconfigured by users. Sahara is fundamentally different; it helps find upgrade bugs triggered by proper configurations and environments. Moreover, Sahara goes well beyond finding the environment resources most likely to be related to a bug (i.e., feature selection).
Qin *et al.* [29] observe that many bugs are correlated with the “execution environment” (which they define to include configurations and the behavior of the operating and runtime systems). Based on this observation, they propose Rx, a system that tries to survive bugs at run time by dynamically changing the execution environment. A follow-up to Rx, Triage [32] goes further by dynamically changing the execution environment while attempting to diagnose failures at users’ sites.
Sahara focuses on upgrade bugs or misbehavior, rather than software bugs in general as Rx and Triage do. For this reason, Sahara can be much more specific about which variables and routines should be considered first during debugging. Moreover, Sahara can handle bugs due to aspects of the environment that would be difficult (or impossible) to change without semantic knowledge of the application. Finally, Rx and Triage do not leverage data from many users, machine learning, or static analysis. Using any of these features could speed up Triage’s diagnosis. In fact, as we argue in Section II-C, Sahara is complementary to systems like Triage.
**Statistical debugging with user site feedback.** Several previous papers [8], [12], [18], [19], [20], [27], [39] rely on low-overhead, privacy-preserving instrumentation infrastructures to provide user execution data back to developers. These works do not consider the users’ environment, and require users to constantly run instrumented code and send feedback back to the developers, both of which have overheads.
Sahara also relies on information gathered at user sites, but the data collection only lasts temporarily to lower overheads. In addition, Sahara restricts its statistical analysis (feature selection) to the aspects of the environment that may have caused an upgrade to misbehave. Finally, Sahara goes further by relating the results of the analysis to the variables and routines that most likely caused the misbehavior.
**Delta debugging.** Delta debugging aims to resolve regression faults automatically and effectively. Several studies [8], [15], [39] have focused on comparing program states of failed and successful runs to identify the space of variables or rank program statements that are correlated with the failure.
Sahara’s dynamic analysis also considers the differences between two runs of a program. However, our approach is driven by environment resources and combines information from a collection of users, machine learning, static analysis, and dynamic analysis. Furthermore, unlike delta debugging, Sahara requires neither instrumenting the production code nor replaying the execution multiple times at the users’ sites.
**Dynamic behavior deviations.** Xie and Notkin [36] proposed program spectra to compare versions and get insights into their internal behavior. Harrold et al. [14] found that the deviations between spectra of two versions frequently correlate with regression faults.
Sahara uses value spectra to compare the execution call traces from before and after the upgrade is applied. However, merely identifying the deviations in the upgraded version leads to a large number of candidates for exploration, as our experiments demonstrate. The same is likely to occur for most large applications or major upgrades. Sahara further narrows down the deviation sources by cross-referencing them with suspect routines found through information from users, machine learning, and static analysis.
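A simplified value-spectra comparison (our own sketch, not fDiff's actual implementation) flags a routine as deviated if its argument/return summary changes between the two runs, or if it is no longer called after the upgrade, matching the fDiff convention described earlier:

```python
# Compare per-routine value spectra from before and after the upgrade.
# A routine deviates if its (args, return) summary differs, or if it was
# exercised before the upgrade but never called after it.
def deviated_routines(trace_old, trace_new):
    devs = set()
    for routine, spectrum in trace_old.items():
        if routine not in trace_new:          # not called after upgrade
            devs.add(routine)
        elif trace_new[routine] != spectrum:  # args/return values changed
            devs.add(routine)
    return devs

old = {"channel_new": ("win=131072", "ok"), "confirm": ("y", True)}
new = {"channel_new": ("win=2097152", "ok")}
assert deviated_routines(old, new) == {"channel_new", "confirm"}
```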
In [25], [38], the authors propose a search algorithm to isolate the fault-inducing change after a regression test fails at the developer’s site. In contrast, Sahara assumes that the upgrade has been tested thoroughly at the developer’s site and is deployed after all tests have passed. Sahara helps isolate the fault-inducing code that is affected by specific user environments. These failures are not easily reproducible at the developer’s site because of environmental differences.
**Other approaches.** Researchers have actively been considering other approaches to automated debugging, e.g. [6], [10], [11], [13], [37]. Sahara is not closely related to any of these approaches, except peripherally for its use of static (def-use) or dynamic analysis. However, Sahara’s use of static and dynamic analyses differs in a major way from most other approaches: it does not use them to find the bugs themselves; rather, it uses them to constrain the set of routines of interest.
V. CONCLUSION
In this paper, we sought to reduce the effort developers must spend to debug failed upgrades. We proposed Sahara, a system that prioritizes the set of routines to consider when debugging. Driven by the fact that most upgrade failures result from differences between the developers’ and users’ environments, Sahara combines information from user site executions and environments, machine learning, and static and dynamic analyses. We evaluated our system for five bugs in three widely used applications. Our results showed that Sahara produces accurate recommendations with only a small set of routines. Importantly, the set of recommended routines remains small and accurate, even when the user site information is misleading or limited.
**REFERENCES**
[9] Crameri, O., Knežević, N., Kostić, D., Bianchini, R., and Zwaenepoel, W. Staged Deployment in Mirage, an Integrated Software Upgrade Testing and Distribution System. In Proc. SOSP (2007).
Automatic Detection of Performance Bugs in Database Systems using Equivalent Queries
Xinyu Liu†, Qi Zhou‡, Joy Arulraj†, Alessandro Orso†
†Georgia Institute of Technology, Atlanta, GA, USA; liuxy@gatech.edu, arulraj@gatech.edu, orso@cc.gatech.edu
‡Meta, Seattle, WA, USA; zhouqi@fb.com
Abstract
Because modern data-intensive applications rely heavily on database management systems (DBMSs), developers extensively test these systems to eliminate bugs that negatively affect functionality. Besides functional bugs, however, there is another important class of faults that negatively affect the response time of a DBMS, known as performance bugs. Despite their potential impact on end-user experience, performance bugs have received considerably less attention than functional bugs. To fill this gap, we present AMOEBA, a technique and tool for automatically detecting performance bugs in DBMSs. The core idea behind AMOEBA is to construct semantically equivalent query pairs, run both queries on the DBMS under test, and compare their response times. If the queries exhibit significantly different response times, that indicates the possible presence of a performance bug in the DBMS. To construct equivalent queries, we propose to use a set of structure and expression mutation rules especially targeted at uncovering performance bugs. We also introduce feedback mechanisms for improving the effectiveness and efficiency of the approach. We evaluate AMOEBA on two widely-used DBMSs, namely PostgreSQL and CockroachDB, with promising results: AMOEBA has so far discovered 39 potential performance bugs, among which developers have already confirmed 6 and fixed 5.
CCS Concepts
• Software and its engineering → Maintaining software; Software verification and validation; • Information systems → Query optimization.
Keywords
Differential testing, database testing, query optimization.
1 Introduction
Database management systems (DBMSs) play a critical role in modern data-intensive applications [17, 33]. For this reason, developers extensively test these systems to improve their reliability and accuracy. For instance, they leverage tools such as SQLSMITH [4] and SQLancer [37–39] to discover crash-inducing or logic bugs in DBMSs. However, the same level of scrutiny has not been applied to performance bugs—bugs that affect the time taken by the DBMS to process certain queries. Detecting performance bugs is just as crucial as detecting functional bugs, as delayed responses from the DBMS can dramatically affect the user experience [32, 44].
Challenges. To retrieve the results for a given SQL query, the DBMS invokes a pipeline of complex components (e.g., query optimizer, execution engine) [22, 34]. The overall performance of the DBMS may be reduced due to sub-optimal decisions taken by any of these components and the complex interactions among them [8, 9]. Therefore, performance testing on individual components of the DBMS is in general insufficient to detect performance bugs [21, 24, 29, 30]. Another key challenge for detecting performance bugs in DBMSs is defining a test oracle that specifies the correct behavior (i.e., response time) of a performant DBMS for a given SQL query. There are two lines of research that attempt to address this challenge, both focusing on performance regressions. One approach uses a pre-determined performance baseline as the oracle [35, 45, 46] and reports a performance bug if there is a significant deviation. While potentially effective in detecting some performance bugs, this approach is human-intensive and error prone, as it is challenging to construct an accurate performance baseline and to account for variability in DBMS performance (to reduce false positives) [28]. Furthermore, this approach relies on a fixed, limited set of queries from standard benchmarks that only cover a subset of the SQL input domain [42].
The second approach leverages differential testing to discover performance regressions [26] by using an oracle to compare the execution time of the same query on two versions of the DBMS. While this technique does not require a developer-provided, pre-determined baseline, it is only able to detect regressions, as (1) it requires two versions of the DBMS, with and without the performance bugs, and (2) focuses on structurally simple queries specially tailored for uncovering regressions.
Our Approach. To address the limitations of existing techniques, we present AMOEBA, a new approach for discovering performance bugs in DBMSs. AMOEBA addresses the challenges discussed above along three dimensions. First, it constructs a performance oracle by comparing the execution time of semantically equivalent queries (i.e., queries that always return the same result) [19, 48]. When the target DBMS exhibits a significant difference in execution time on a pair of semantically equivalent queries, this may indicate the presence of a performance bug. Second, it constructs queries tailored to the discovery of performance bugs, by supporting complex structures and computationally expensive SQL operators. Further, because of the large space of SQL queries that AMOEBA can explore, we introduce a feedback mechanism that lets it focus on the subset of the query space that is more likely to uncover performance bugs. Third, it introduces two types of semantics-preserving query mutation rules that are also tailored to performance bug detection: (1) structural mutations, which transform an input query using a set of query rewrite rules derived from the query optimization literature [23], and (2) expression mutations, which modify expressions within an input query without changing their semantics.
To evaluate our technique, we implemented it and applied it to two widely-used DBMSs: CockroachDB and PostgreSQL. Our results are promising, in that AMOEBA found 39 potential performance bugs, among which developers have confirmed 6 bugs and fixed 5 bugs. We also compared AMOEBA against two other sources of equivalent queries that could be used for detecting performance bugs: a manually-written test suite in a widely-used query optimization framework, and the Ternary Logic Partitioning (TLP) approach [38].
Our results show that the equivalent queries generated by AMOEBA are more likely to detect performance bugs.
**Contributions.** This paper makes the following contributions:
- A performance bug detection technique with three new aspects:
- The use of query equivalence to generate performance oracles.
- Two types of query mutations that preserve the semantics of queries: structural mutations and expression mutations.
- A feedback mechanism that improves the effectiveness and efficiency of the approach.
- An implementation of the technique that is publicly available [12].
- An evaluation of the technique that shows that it can detect real and relevant previously-unknown performance bugs in two widely-used DBMSs.
2 Motivating Example
Figure 1 shows a motivating example that we use to illustrate how semantically equivalent queries can be leveraged for detecting performance bugs in DBMSs and to show the significant impact performance bugs can have on the end-user experience.
The example consists of a pair of equivalent queries, Q1 (Figure 1a) and Q2 (Figure 1b), that our technique actually generated based on the SCOTT schema [11] and that detected a real performance bug. We also show, in Figure 2, the logical query plans for Q1 and Q2 (i.e., the sequence of logical operations performed when executing the two queries). Although Q1 and Q2 are equivalent, Q1 runs 1,444x slower than Q2 on the same database in CockroachDB [6] (v20.2.0-alpha).
The difference in performance in the two cases is caused by how the emp table, which contains 10 million rows, is processed. For Q1, the DBMS ignores that the maximum number of result tuples is 13 and processes the entire emp table anyway. For Q2, conversely, the DBMS considers the LIMIT directive, processes the table row by row, and stops after fetching the first 13 qualifying entries. As a result, while Q1 takes 13 seconds to execute, Q2 only takes 9 milliseconds.
**SELECT** job, depno **FROM** emp **WHERE** job = 'Technical' **GROUP BY** job, depno **LIMIT** 13;
(a) Original query Q1, execution time = 13 s
**SELECT** CAST('Technical' AS VARCHAR(10)) AS "job", depno **FROM** emp **WHERE** "job" = 'Technical' **GROUP BY** job, depno **LIMIT** 13;
(b) Mutated query Q2, execution time = 9 ms
**Figure 1:** Query Rewriting – Example of application of the projection column mutation rule.
The developers have acknowledged that this is a previously-unknown performance bug and have produced a fix for it.
AMOEBA can detect this performance bug because it uses the execution time of equivalent queries as performance oracles. Furthermore, by doing so, AMOEBA can provide a concrete performance bug report, which allows developers to reproduce and investigate the potential performance bug.
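To make the oracle concrete, the sketch below replays the idea of Section 2 on SQLite as a stand-in DBMS (not the CockroachDB setup from the paper, and SQLite's LIMIT handling differs): two semantically equivalent queries must return the same rows, so a large gap in their execution time would point at the optimizer rather than at the data. The schema and row counts here are illustrative, not the 10-million-row table from the example.

```python
# Minimal equivalence check for Q1/Q2 from Figure 1, using an in-memory
# SQLite database as a stand-in DBMS (illustrative schema and data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (job TEXT, depno INTEGER)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?)",
    [("Technical", d % 5) for d in range(1000)] + [("Sales", 1)] * 100,
)

q1 = """SELECT job, depno FROM emp
        WHERE job = 'Technical' GROUP BY job, depno LIMIT 13"""
q2 = """SELECT CAST('Technical' AS VARCHAR(10)) AS "job", depno FROM emp
        WHERE "job" = 'Technical' GROUP BY job, depno LIMIT 13"""

rows1 = sorted(conn.execute(q1).fetchall())
rows2 = sorted(conn.execute(q2).fetchall())
assert rows1 == rows2  # equivalent queries agree on their results
```

On a real target DBMS, AMOEBA additionally times both queries; here only result equivalence is demonstrated.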
3 Background and Terminology
This section provides some relevant background information and introduces the terminology used in the paper.
**Performance Bugs and Related Concepts.** A performance bug is a bug that affects the time taken by the DBMS to process certain queries. Before reporting it to the DBMS developers, we refer to a performance discrepancy identified by AMOEBA as a potential performance bug (PPB), which can be unique or not, depending on whether its root cause differs from that of other bugs discovered by AMOEBA (based on manual analysis). After we report a PPB, depending on the feedback we receive from the developers, we classify the PPB according to the following taxonomy:
- **Confirmed:** The developers acknowledge that the issue reported indicates an actual performance bug. Confirmed performance bugs can further be classified as either previously unknown, if the developers do not refer us to a previous/duplicate bug report or an already planned fix, or previously known, otherwise. A confirmed performance bug, whether previously known or not, can also be classified as fixed, if the developers plan to fix it, already fixed it, or have a fix in progress, or not fixed, otherwise.
- **Backlogged:** If the developers respond to the report and state that they will analyze the PPB at a later time.
- **Unconfirmed:** If the developers do not acknowledge that the issue reported is a performance bug. This can happen for two reasons. Either the developers disagree that two reported queries are equivalent, in which case we refer to the issue as a false positive, or they acknowledge the performance discrepancy but consider it a future/missing optimization, rather than a bug.
- **Unknown:** If the developers ignore the report.
**Figure 2:** Logical Query Plans – sequence of logical operations performed when executing the queries in Figure 1.
**Mutation Rule Example.** To illustrate the kind of semantics-preserving transformations AMOEBA performs, consider the projection column mutation rule, which rewrites a query's projection columns based on its filter clause (i.e., the predicate that specifies which rows are to be returned). We illustrate how this rule transforms projection columns while preserving semantic equivalence using query Q1 (Figure 1a) and its logical query plan (Figure 2a). Since the filter clause ($\sigma$) in Q1 selects tuples such that the job attribute is equal to a specific value, the rule replaces the final projection column job with a literal column that takes the same value. Figure 1b and Figure 2b show the transformation result, that is, Q2 and its logical query plan.
4 The AMOEBA Technique
4.1 System Overview
AMOEBA helps developers uncover performance bugs in a DBMS. The key idea is to compare the runtime performance of the DBMS on two semantically equivalent queries, which we would normally expect the DBMS to execute in a similar amount of time. If that is not the case, and the difference in the query execution time exceeds a developer-specified threshold (e.g., 2x), then AMOEBA has found a PPB. Such a performance oracle allows us to detect performance bugs in a single DBMS (i.e., without resorting to comparative analysis against another DBMS).
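The threshold check described above can be stated as a one-line predicate. The sketch below is not AMOEBA's actual code, only the oracle condition it describes: flag a potential performance bug (PPB) when the slowest of a set of equivalent queries exceeds the fastest by a developer-specified factor.

```python
# Differential performance oracle: equivalent queries should run in a
# similar amount of time; a large max/min ratio signals a PPB.
def is_ppb(exec_times, threshold=2.0):
    """exec_times: execution times (seconds) of semantically equivalent queries."""
    return max(exec_times) > threshold * min(exec_times)

assert is_ppb([13.0, 0.009])       # the motivating example: 13 s vs 9 ms
assert not is_ppb([0.010, 0.012])  # ordinary jitter stays below the 2x threshold
```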
Figure 3 illustrates the architecture of AMOEBA, which contains three main components: (1) GENERATOR, (2) MUTATOR, and (3) VALIDATOR.
1. GENERATOR leverages a domain-specific fuzzing technique to generate SQL queries from scratch based on a database schema. We refer to these queries as base queries. GENERATOR is tailored to generate queries that are more likely to trigger performance bugs in DBMSs. In particular, it receives feedback from the latter components of AMOEBA to guide the query generation process.
2. MUTATOR takes a base query as input and seeks to generate equivalent queries by applying a set of semantics-preserving query-rewriting rules to the query. We refer to the resulting set of equivalent queries as mutant queries. The output of this component is the base query and a set of equivalent mutant queries.
3. VALIDATOR takes a set of equivalent queries as input and generates a list of performance bug reports. It runs each pair of equivalent queries on the target DBMS and observes whether any pair exhibits a significant difference in runtime performance. If so, it first verifies whether this behavior is reproducible across multiple runs. If it can confirm the discrepancy, it generates a report that consists of: (1) the pair of equivalent queries that exhibit the performance discrepancy, and (2) their query execution plans.
We next present the three components of AMOEBA in detail.
4.2 Query Generator
AMOEBA uses a grammar-aware GENERATOR that randomly constructs, given a database schema, a set of base queries from scratch (i.e., without using any seed query). As shown in Algorithm 1, GENERATOR takes a target database as input, generates base queries as output, and uses two key procedures for generating queries that are more likely to trigger performance bugs: (1) GENERATE_QUERY (§4.2.1) uses a top-down, grammar-aware approach to generate queries that are compatible with the schema of the input database, and (2) UPDATE_PROB_TABLE_WITH_FEEDBACK (§4.2.2) leverages feedback from prior runs of MUTATOR (§4.3) and VALIDATOR (§4.4) to guide the GENERATE_QUERY procedure. AMOEBA relies on this feedback mechanism to improve the probability of generating queries that trigger performance bugs. Next, we provide more details about these two procedures.
4.2.1 Grammar-Aware Query Generation. Researchers have extensively explored techniques for grammar-aware query generation [4, 13, 14, 47]. AMOEBA’s query generation approach differs from prior work in that it is geared towards generating queries that are more likely to trigger performance bugs in DBMS. This part of the approach is based on two main components: (1) a grammar for generating queries with different structures and operators, and (2) a probability table defined with respect to the grammar to guide the query generation process.
GRAMMAR. AMOEBA uses a grammar based on the SQL-92 standard [1]. The grammar is expressed in Backus–Naur Form (BNF), which consists of both terminal and non-terminal symbols. We show a subset of the grammar in Table 1. For instance, the non-terminal symbol for a table reference may either be a base table (i.e., table_simple) from the target database or a derived table (i.e., table_joined) resulting from a JOIN operator.
Table 1: SQL Grammar – A subset of the SQL grammar that allows AMOEBA to generate queries with a variety of structures and operators.
<table>
<thead>
<tr>
<th>Symbol</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>QUERY_SPEC</td>
<td>SELECT column-expression FROM table-expression [GROUP BY expression] [LIMIT expression]</td>
</tr>
<tr>
<td>TABLE_NAME</td>
<td>table-expression</td>
</tr>
<tr>
<td>JOIN_TYPE</td>
<td>LEFT</td>
</tr>
<tr>
<td>BOOLEAN_EXPR</td>
<td>TRUE</td>
</tr>
</tbody>
</table>
Table 2: Probability Table – Probability values that AMOEBA uses to generate table references and join conditions.
<table>
<thead>
<tr>
<th>Table References</th>
<th>Join Conditions</th>
</tr>
</thead>
<tbody>
<tr>
<td>table_simple</td>
<td>table_joined</td>
</tr>
<tr>
<td>0.5</td>
<td>0.16</td>
</tr>
</tbody>
</table>
PROBABILITY TABLE. As shown in Table 2, AMOEBA maintains a table that contains the probability of each non-terminal and terminal symbol when generating queries following the grammar. This table determines the likelihood of a SQL structure or clause appearing in the generated query. For all symbols that stem from a given non-terminal symbol, the probabilities sum up to one. For instance, Table 2b specifies that there is an equal chance of generating the JOIN condition (a non-terminal symbol) using either a boolean expression (e.g., t1.k = t2.k) or the keyword TRUE.
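The probability-table mechanism can be sketched as weighted sampling over grammar alternatives. The names (`table_ref`, `join_cond`) and weights below are illustrative assumptions, not AMOEBA's actual table.

```python
# Hypothetical probability table: each non-terminal maps to alternatives
# whose weights sum to one; expansion samples an alternative accordingly.
import random

prob_table = {
    "table_ref": {"table_simple": 0.5, "table_joined": 0.5},
    "join_cond": {"boolean_expr": 0.5, "TRUE": 0.5},
}

def expand(symbol, rng):
    alternatives = prob_table[symbol]
    return rng.choices(list(alternatives), weights=list(alternatives.values()), k=1)[0]

rng = random.Random(0)
picks = {expand("join_cond", rng) for _ in range(100)}
assert picks == {"boolean_expr", "TRUE"}  # both alternatives get exercised
```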
Next, we present the algorithm of GENERATEQUERY, which is shown in Algorithm 1. First, the algorithm acquires information that it needs to generate queries (line 1). Specifically, it performs the following steps: (1) it randomly samples a small dataset from the target database, so as to be able to generate queries with meaningful predicates and a variety of selectivity. (2) it collects table schemas of the target database (i.e., table names, column names, and column types), which it needs to create valid expressions, such as SQL function calls and column comparisons. Second, the algorithm initializes with default values the probability table it uses for guiding the query generation procedure (line 2). Third, it invokes the BUILDSPECIFICATION function to construct a query specification based on (1) the SQL grammar and (2) the collected meta-data (line 6). Finally, the algorithm translates the specification into a well-formed query for the target DBMS (line 7).
4.2.2 Feedback from Mutator and Validator. We now discuss how GENERATOR updates the probability table based on the feedback from MUTATOR and VALIDATOR (line 9).
FEEDBACK FROM MUTATOR. GENERATOR uses the feedback from MUTATOR to improve the likelihood of generating base queries that can be successfully mutated. Since AMOEBA relies on the generation of semantically equivalent queries, this feedback mechanism indirectly increases the likelihood of discovering PPBs. In procedure UPDATEPROBTABLEWITHMUTATORFEEDBACK (line 13), GENERATOR updates the probability table when a base query that it generates is successfully transformed by MUTATOR (line 12). First, it extracts SQL entities from the base query. For example, since Q1 in our motivating example (Section 2) can be successfully mutated into Q2, GENERATOR extracts the following entities from Q1: table_simple, GROUP BY, and LIMIT. The rationale for this part of the approach is that these entities are correlated with successful mutations. Then, it increases the probability values associated with these entities in the probability table.
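One plausible way to realize this update, sketched under assumed names (the function and the bonus value are not from the paper): bump the weights of the entities that appeared in a successfully mutated query, then renormalize among sibling alternatives so the probabilities still sum to one.

```python
# Hypothetical Mutator-feedback step: reward SQL entities seen in
# successfully mutated base queries, keeping each group normalized.
def reward_entities(prob_table, entities, bonus=0.1):
    for symbol, alternatives in prob_table.items():
        hits = [a for a in alternatives if a in entities]
        if not hits:
            continue
        for a in hits:
            alternatives[a] += bonus
        total = sum(alternatives.values())
        for a in alternatives:
            alternatives[a] /= total  # renormalize to sum to one

table = {"table_ref": {"table_simple": 0.5, "table_joined": 0.5}}
reward_entities(table, {"table_simple"})
assert table["table_ref"]["table_simple"] > table["table_ref"]["table_joined"]
```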
4.3 Query Mutator
MUTATOR takes a base query and the meta-data of the target database as input and returns the base query along with a set of semantically equivalent mutant queries. It first preprocesses the base query and generates its logical query plan tree, $R_{\text{origin}}$. We outline the algorithm of MUTATOR in Algorithm 2.
\begin{algorithm}
\caption{Procedure for mutating SQL queries}
\begin{algorithmic}[1]
\Procedure{MutateQuery}{base\_query, meta\_data}
  \State $R_{\text{origin}} \leftarrow \textsc{Preprocess}(\text{base\_query})$
  \State transformed\_trees $\leftarrow$ \textsc{EmptySet}()
  \State mutant\_queries $\leftarrow$ \textsc{EmptySet}()
  \For{$k \leftarrow 1$ \textbf{to} number\_of\_attempts}
    \State mutate\_rules $\leftarrow$ \textsc{RulesInitialization}() \Comment{randomly ordered list of mutation rules}
    \State $R_{\text{new}} \leftarrow \textsc{MutateTree}(R_{\text{origin}}, \text{mutate\_rules}, \text{meta\_data})$
    \If{$R_{\text{new}} \notin$ transformed\_trees}
      \State new\_query $\leftarrow$ \textsc{TranslateToQuery}($R_{\text{new}}$, dialect)
      \State \textsc{Update}(transformed\_trees, mutant\_queries)
    \EndIf
  \EndFor
  \State \Return base\_query, mutant\_queries
\EndProcedure
\Procedure{MutateTree}{$R_{\text{origin}}$, mutate\_rules, meta\_data}
  \State target\_expr $\leftarrow R_{\text{origin}}$
  \For{rule $\in$ mutate\_rules}
    \State target\_expr $\leftarrow$ \textsc{ApplyRule}(target\_expr, rule, meta\_data)
  \EndFor
  \If{target\_expr $\neq R_{\text{origin}}$}
    \State \Return target\_expr
  \EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
MUTATOR attempts to mutate $R_{\text{origin}}$ for a total of number_of_attempts times (line 5). In each iteration, MUTATOR performs the following three steps:
1. It randomly initializes an ordered list of mutation rules (mutate_rules), which it then applies to $R_{\text{origin}}$ sequentially (line 6). In doing so, MUTATOR increases the likelihood of uncovering different compositional effects of mutation rules on the input query.
2. It invokes procedure MutateTree to transform $R_{\text{origin}}$ using mutate_rules (line 7). Within this procedure, MUTATOR uses the database meta-data to check whether the rule condition for performing the transformation is met (line 15). Procedure MutateTree returns the resulting plan tree, $R_{\text{new}}$, only if $R_{\text{new}}$ is different from $R_{\text{origin}}$ (line 17).
3. After getting $R_{\text{new}}$ from procedure MutateTree, MUTATOR checks whether it is different from trees constructed in prior mutation attempts (line 8). If so, MUTATOR translates $R_{\text{new}}$ into a well-formed SQL query, new_query, based on the target DBMS's dialect and appends it to mutant_queries (lines 9–10). Finally, the algorithm returns the base query and mutant_queries as its output.
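The steps above can be sketched with toy rewrite "rules" over a tiny tuple-based plan representation, in place of Calcite's rule engine; the rules and plan encoding here are invented for illustration only.

```python
# Sketch of the mutation loop: apply a randomly ordered rule list to a
# plan, and keep only genuinely new (previously unseen) mutant plans.
import random

def add_limit_pushdown(plan):   # toy rule: tag plans that carry a LIMIT
    return ("pushed",) + plan if "limit" in plan else plan

def drop_true_filter(plan):     # toy rule: remove a tautological filter
    return tuple(op for op in plan if op != "filter_true")

RULES = [add_limit_pushdown, drop_true_filter]

def mutate(plan, attempts=10, rng=random.Random(0)):
    seen, mutants = {plan}, []
    for _ in range(attempts):
        rules = RULES[:]
        rng.shuffle(rules)      # random order: exercise rule compositions
        new = plan
        for rule in rules:
            new = rule(new)
        if new not in seen:     # only emit distinct mutant plans
            seen.add(new)
            mutants.append(new)
    return mutants

mutants = mutate(("scan", "filter_true", "limit"))
assert mutants == [("pushed", "scan", "limit")]
```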
4.4 Validator
VALIDATOR takes a set of pairs of semantically-equivalent queries as input and generates performance bug reports as output. To do so, it compares the execution time of the queries within each pair. If a pair of equivalent queries consistently exhibit significant difference in their runtime performance, VALIDATOR generates a performance bug report that consists of: (1) the pair of queries, and (2) their execution plans. Before presenting the algorithm of VALIDATOR, we discuss two challenges associated with discovering performance
bugs based on query equivalence and how we address them in the algorithm.
**Equivalent Execution Plans.** A pair of semantically equivalent queries with different syntax (i.e., structural difference or predicate difference) may reduce to the same query execution plan. In this case, the DBMS will execute these queries in the same way and there will not be any difference in runtime performance. Such queries are not useful for discovering performance bugs. Therefore, to improve the computational efficiency of AMOEBA, VALIDATOR focuses on equivalent queries that have different execution plans. In particular, before executing a set of equivalent queries and comparing their runtime performances, it first compares their plans and skips a query pair if both queries have the same plan.
**False Positives.** The execution time of a query may be affected by system-level factors (e.g., caching behavior of concurrent queries) [24]. To avoid false positives due to these factors, before reporting a PPB to the developers, VALIDATOR verifies that the difference is consistently reproducible by re-executing the same query pair multiple times in isolation and in different execution orders.
**VALIDATOR Algorithm.** Algorithm 3 presents the algorithm of VALIDATOR. The algorithm first invokes procedure CHECKPLANDIFF to filter out equivalent queries that lead to equivalent execution plans (line 1). VALIDATOR assumes that two queries have the same execution plan if their estimated costs are the same. Specifically, given two equivalent queries, it utilizes the EXPLAIN feature of DBMSs (line 8) [41] to compute the estimated cost of each query in the pair. If the estimated plan costs are the same, VALIDATOR considers the two query plans to be equivalent and skips them (line 9).
After discarding pairs of equivalent queries deemed to have identical execution plans, VALIDATOR runs the remaining pairs on the DBMS and records their execution time (line 2). Then, within the resulting set of execution times, VALIDATOR checks whether the ratio of the longest to the shortest query execution time exceeds a developer-specified threshold (line 3). If the ratio exceeds this threshold, VALIDATOR invokes procedure CONFIRM to check whether the runtime performance difference is consistently reproducible (line 4). Procedure CONFIRM re-executes these queries on the DBMS for multiple runs in random orders and monitors whether the execution time difference still holds (line 14–19). If so, VALIDATOR automatically generates a performance bug report (line 5).
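The two guards just described (skip same-plan pairs, confirm over re-runs) can be condensed into one function. This is a sketch under assumed names, with plan cost and execution time supplied as callables rather than via a live DBMS's EXPLAIN and timing machinery.

```python
# Sketch of VALIDATOR: compare estimated plan costs first, then demand
# that a runtime discrepancy reproduces on every confirmation attempt.
def validate(queries, estimate_cost, run, threshold=2.0, attempts=3):
    costs = {estimate_cost(q) for q in queries}
    if len(costs) <= 1:                  # identical plan costs: skip the pair
        return None
    times = []
    for _ in range(attempts):            # must reproduce on every attempt
        times = [run(q) for q in queries]
        if max(times) <= threshold * min(times):
            return None
    return {"queries": queries, "times": times}  # minimal "bug report"

report = validate(
    ["Q1", "Q2"],
    estimate_cost=lambda q: {"Q1": 100, "Q2": 3}[q],
    run=lambda q: {"Q1": 13.0, "Q2": 0.009}[q],
)
assert report is not None and report["queries"] == ["Q1", "Q2"]
```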
5 Evaluation
To evaluate the effectiveness and generality of AMOEBA, we investigated the following questions:
**RQ1.** Can AMOEBA find performance bugs in DBMSs? (§5.3)
**RQ2.** How efficient is AMOEBA? (§5.4)
**RQ3.** Are all mutation rules created equal with respect to discovering performance bugs? (§5.5)
**RQ4.** How does AMOEBA compare against other techniques for finding performance bugs? (§5.6)
**RQ5.** How do the base queries in AMOEBA compare against those in Calcite? (§5.7)
5.1 Implementation
**QUERY GENERATION.** GENERATOR aims to construct queries that (1) are likely to be syntactically correct and (2) cover a widely-supported subset of SQL constructs [1]. To this end, we implement GENERATOR based on SQLALCHEMY [7], a SQL toolkit and Object Relational Mapper (ORM).
**QUERY MUTATION.** We build MUTATOR on top of the Calcite [5] query optimization framework. Calcite transforms queries by iteratively applying a set of query rewrite rules [23] and works well with SQLALCHEMY, in that they both cover a widely-supported subset of SQL constructs and dialects. As a consequence, most of the semantically-valid queries constructed by GENERATOR can be processed by MUTATOR. Furthermore, by leveraging the SQLALCHEMY and Calcite frameworks, AMOEBA can be easily extended.
**IMPLEMENTATION SCOPE.** AMOEBA supports queries with four data types: integer, double, datetime, and string. The queries may use several SQL constructs (e.g., GROUP BY, DISTINCT, ORDER BY, and UNION) and functions (e.g., AVG and SUM). We present a detailed list of supported SQL constructs in our supplementary materials [12].
**SCHEMA.** AMOEBA currently generates queries using the SCOTT schema [11] and runs them on a database based on the same schema. (We made this decision because we seek to compare AMOEBA against the manually-crafted Calcite test suite, which is based on this schema.) It is worth noting that the SCOTT schema is comparatively simple: it only contains three tables, with two primary keys and one foreign key. We configured the size of the database to 30 MB. Because the query execution time is proportional to the size of the database, we wanted to achieve a balance between discovering reproducible bugs (which requires a larger database) and limiting the computational cost of AMOEBA (which requires a smaller database).
---
**Algorithm 3: Procedure for detecting performance bugs**
Input: a base query and its equivalent mutant queries
Output: a performance bug report
1. if CheckPlanDiff(base_query, mutant_queries) then
       // run equivalent queries only if they have different execution plans
2.     time_list ← RunQuery(base_query, mutant_queries);
3.     if Max(time_list) > threshold × Min(time_list) then
           // re-run the queries validation_attempts times to confirm the difference is consistent
4.         if Confirm(base_query, mutant_queries, validation_attempts) then
5.             GenBugReport(base_query, mutant_queries);
6. Procedure CheckPlanDiff(base_query, mutant_queries)
7.     cost_list ← EstimateCost(base_query, mutant_queries);
8.     if Count(Set(cost_list)) > 1 then
9.         return True;
10. Procedure Confirm(base_query, mutant_queries, validation_attempts)
11.     difference_count ← 0;
12.     for k ← 1 to validation_attempts do
13.         time_list ← RunQuery(base_query, mutant_queries);
14.         if Max(time_list) > threshold × Min(time_list) then
15.             difference_count ← difference_count + 1;
16.     if difference_count = validation_attempts then
17.         return True;
5.2 Evaluation Setup
Our evaluation focused on two DBMSs: (1) CockroachDB (v20.2.0-alpha), and (2) PostgreSQL (v12.3). We ran all experiments on a server with two Intel(R) Xeon(R) E5649 CPUs (24 processors) and 236 GB RAM. We manually examined the bug reports generated by AMOEBA and reported them to the developers for feedback.
5.3 RQ1 — Performance Bugs Detection
AMOEBA found 25 and 14 PPBs in CockroachDB and PostgreSQL, respectively. Figure 5 summarizes the impact of the discovered performance discrepancies (i.e., the performance gap between equivalent query pairs in the bug report).
**Runtime Impact.** While the PPBs found in CockroachDB exhibit a slow-down ranging from 1.9× to 669.1× between equivalent queries, those found in PostgreSQL exhibit a slow-down ranging from 1.9× to 555.6×.
**Developers’ Feedback.** Overall, developers have confirmed 6 bugs and fixed 5 among those that we reported. However, the reaction was different between the developers of PostgreSQL and CockroachDB. Of the 7 PPBs we reported to them, the PostgreSQL developers considered 4 to be future/missing optimizations that they did not plan to support at the moment, and 1 to be a false positive. Of the remaining 2, they did not respond to 1, and the other 1 matched a planned fix, which should at least indicate that it was considered a critical optimization worth addressing. Notably, the PostgreSQL developer who responded to our reports recommended that we list the future/missing optimizations that we identified and reported on their official wiki page, which serves as a collaboration area for PostgreSQL developers and users [10]. At the time of this writing, we are in the process of creating and submitting this page.
The CockroachDB developers reacted more positively to our reports, confirming 6 as performance bugs, assigning 18 reports to their backlog, and classifying 1 report as a false positive. Among the confirmed reports, the CockroachDB developers have fixed 5 (acknowledging 2 that matched planned fixes).
In summary, and according to the terminology we introduced in Section 3, AMOEBA identified 1 unconfirmed, previously known, and fixed PPB, 5 further unconfirmed PPBs, 1 of which is a false positive, and 1 unknown PPB for PostgreSQL; for CockroachDB, AMOEBA identified 6 confirmed performance bugs (among which 3 were previously unknown and fixed, 2 were previously known and fixed, and 1 was previously unknown and not fixed), 18 backlogged PPBs, and 1 unconfirmed PPB, which is a false positive. We provide details on all the reports submitted and on the developers' reactions to each of them in our supplementary materials [12].
**Description of Bugs.** We now discuss a subset of the PPBs found by AMOEBA to illustrate the types of bugs it can find.
**Example 1: Expression Simplification.** The pair of equivalent queries below exhibit a 3.2x slow-down in CockroachDB.
```
/* [First query, 75 milliseconds] */
SELECT Max(emp.sal)
FROM dept INNER JOIN emp ON ename NOT LIKE name
WHERE name = 'ACCT';
/* [Second query, 238 milliseconds] */
SELECT Max(emp.sal)
FROM dept INNER JOIN emp ON ename NOT LIKE name
WHERE name = 'ACCT' IS TRUE;
```
The performance difference is caused by the way the filter predicate is processed. For the first query, the DBMS leverages information from the predicate to simplify the JOIN condition, by replacing the variable `name` with the value ‘ACCT’. Because of the simplified JOIN condition, the DBMS only needs a partial scan of the table `emp`. Conversely, with the second query, the DBMS decides that it cannot leverage the predicate information to simplify the JOIN condition and scans the entire table `emp`. After analyzing this bug report, the CockroachDB developers realized that a critical predicate normalization rule was missing in their query optimizer. In particular, if an expression within the predicate guarantees to yield a non-null result (e.g., in our example, the non-nullable column `name` compares with a string value), it is safe to reduce operations on top of it that still take null value into consideration [36]. With the second query, this rule would remove the `IS TRUE` check on top of the comparison clause, which would lead to a more efficient query execution plan. The developers quickly fixed this performance bug due to its broad impact on query performance.
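The missing normalization rule can be illustrated on a toy expression AST. The encoding below (nested tuples, a hard-coded set of non-nullable columns) is an illustrative assumption, not CockroachDB's actual optimizer representation: when a comparison over a non-nullable column cannot yield NULL, the redundant IS TRUE wrapper on top of it can be dropped.

```python
# Toy predicate normalization: IS TRUE over a never-null boolean
# expression is a no-op and can be removed from the predicate tree.
NON_NULLABLE = {"name"}  # assumed schema knowledge

def never_null(expr):
    op = expr[0]
    if op == "eq":                      # shape: ("eq", column, literal)
        return expr[1] in NON_NULLABLE  # non-null column vs. literal
    return False

def simplify(expr):
    if expr[0] == "is_true" and never_null(expr[1]):
        return expr[1]                  # drop the redundant IS TRUE
    return expr

pred = ("is_true", ("eq", "name", "ACCT"))
assert simplify(pred) == ("eq", "name", "ACCT")
```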
**Example 2: Sub-Queries Returning a Scalar.** The pair of equivalent queries below contain predicates that rely on results of the same subquery.
```
/* [First query, 7 milliseconds] */
SELECT sal FROM emp LEFT OUTER JOIN (SELECT job FROM bonus LIMIT 1) AS t
WHERE t.job IS NOT DISTINCT FROM 'ACCT';
/* [Second query, 211 milliseconds] */
SELECT sal FROM emp WHERE (SELECT job FROM bonus LIMIT 1) IS NOT DISTINCT FROM 'ACCT';
```
However, when the predicate is false (i.e., the first tuple of job is not 'ACCT'), CockroachDB spends 30× more time to execute the second query compared to the first one. The performance difference stems from how the predicate is processed. For the first query, the DBMS realizes that the predicate in the JOIN operator evaluates to false, and thus skips scanning the `emp` table and executing the JOIN operation. For the second query, however, the DBMS ignores the predicate result and scans the entire `emp` table anyway. The developers quickly confirmed that this performance bug lies in the query execution engine. They also acknowledged that the bug belongs to a more important limitation in the query optimizer, in that it cannot re-optimize the main query based on the results of the sub-query. They plan to fix this issue in the near future.
**Example 3: Handling Aggregate Operators.** The pair of equivalent queries below triggers a 2.9× execution time difference on PostgreSQL, which exposes suboptimal behavior when handling an unnecessary GROUP BY operator.
```
/* [First query, 25 milliseconds] */
SELECT emp_pk FROM emp WHERE emp_pk > 100;
/* [Second query, 72 milliseconds] */
SELECT emp_pk FROM emp WHERE emp_pk > 100 GROUP BY emp_pk;
```
Specifically, both queries request the DBMS to fetch the `emp_pk` column based on the same predicate. The second query also appends a GROUP BY operation before returning the final table. Since `emp_pk` is the primary key of the `emp` table, the `emp_pk` column takes unique values, thereby rendering the GROUP BY operation unnecessary. For the second query, however, the DBMS performs the GROUP BY operation anyway, which leads to a slower execution time than in the case of the first query. While the developers classified this performance discrepancy as a missing optimization, rather than an actual performance bug, they also mentioned that they were in the process of producing a fix for it. As we mentioned above, this seems to indicate that this performance issue was considered relevant enough to be addressed.
**DISCUSSION.** The empirical results of applying AMOEBA to test CockroachDB and PostgreSQL show that AMOEBA can effectively detect PPBs. Specifically, the semantically equivalent queries generated by AMOEBA do trigger different runtime behaviors in the DBMSs considered, thereby allowing us to use them as a differential performance oracle for finding PPBs. In addition, we found that the format of our bug reports (i.e., a pair of equivalent queries and their execution plans) seems to provide sufficient information for the DBMS developers to reproduce and investigate the PPB.
By using the runtime performance of semantically equivalent queries as a performance oracle, AMOEBA discovered 39 PPBs spread across different components of two DBMSs.
### 5.4 RQ2 — Efficiency
To answer RQ2, we examined the computational efficiency of AMOEBA, as well as whether AMOEBA’s feedback mechanisms increase the probability of generating queries that discover PPBs.
In this part of the evaluation, we ran AMOEBA on CockroachDB and PostgreSQL in four different configurations and with a timeout of five hours per configuration:
1. `amoeba_none`: both feedback mechanisms disabled,
2. `amoeba_validator`: feedback from VALIDATOR enabled,
3. `amoeba_mutator`: feedback from MUTATOR enabled, and
4. `amoeba_both`: both feedback mechanisms enabled.
Figure 6 presents the results of this study, which consist of the number of total and unique PPBs that AMOEBA discovered in each configuration. We manually mapped each performance bug report to a corresponding unique bug based on the developers’ feedback.
**BUG REPORTS, UNIQUE BUGS, AND FALSE POSITIVES.** We examine the overall efficiency of AMOEBA by averaging the bug-finding results across the four runs. As shown in Figure 6, AMOEBA generated an average of 19 performance bug reports (corresponding to 9 unique PPBs) for CockroachDB and an average of 46 performance bug reports for PostgreSQL.
**IMPACT OF FEEDBACK MECHANISMS.** To understand the impact of the feedback mechanisms (i.e., feedback from validator and feedback from mutator) on the effectiveness of AMOEBA, we compared the total and unique number of PPBs that AMOEBA discovered in the four different configurations considered. As Figure 6 shows, for both DBMSs, the feedback from validator and mutator increased both the total and unique performance discrepancies that AMOEBA discovered. On CockroachDB, while `amoeba_none` only discovered 10 total performance discrepancies and 6 unique performance discrepancies, the other configurations (i.e., `amoeba_validator`, `amoeba_mutator`, and `amoeba_both`) discovered 25, 22, and 19 total performance discrepancies, respectively, which correspond to 12, 11, and 8 unique performance discrepancies. On PostgreSQL, while `amoeba_none` only discovered 20 total performance discrepancies and 4 unique performance discrepancies, the other configurations discovered 82, 40, and 40 total performance discrepancies, respectively, which correspond to 9, 7, and 8 unique performance discrepancies.
Based on these results, validator-only feedback seems to be more effective than a combination of validator-and-mutator feedback (especially for CockroachDB). One possible explanation is that leveraging both feedback mechanisms tends to reward a larger number of features and result in more complex queries, which can make it challenging for the DBMS to significantly optimize either query in a pair.
**RUNTIME PERFORMANCE.** We also counted the number of semantically equivalent queries that AMOEBA examined across the four runs. On average, AMOEBA generated and examined one pair of semantically equivalent queries every 1.5 seconds for CockroachDB and every 0.8 seconds for PostgreSQL. The overall runtime of AMOEBA was dominated by the runtime of the tested DBMS. To further improve the runtime performance of AMOEBA, it would be possible to deploy it on multiple servers and parallelize its execution.
AMOEBA detects a large number of PPBs in both CockroachDB and PostgreSQL within the given time limit of 5 hours. On both DBMSs, feedback from validator and mutator both had a significant impact on the number of PPBs discovered.
### 5.5 RQ3 — Effect of Individual Mutation Rules
To answer RQ3, we performed an in-depth analysis of mutation rules used by AMOEBA and their importance in discovering PPBs using the performance bug reports presented in §5.4. Specifically, we examined a dataset of 76 and 182 query pairs that triggered PPBs in CockroachDB and PostgreSQL, respectively. We investigate each mutation rule along two dimensions:
**Impact on query performance:** If a rule generated a query pair that exhibited a significant runtime performance difference, we measured the speed-up (ratio > 1) or slow-down (ratio < 1).
**Frequency of generating bug-revealing query pairs:** We counted the number of times each rule generated a query pair that triggered a PPB, normalized by the total number of bug-revealing pairs.
Figure 7 presents the results for both measures for CockroachDB. The impact on query performance is shown using a box plot and the y-axis on the left. The frequency of bug-revealing query pairs is shown using a bar chart and the y-axis on the right. The results for PostgreSQL are analogous.
As the figure shows, not all mutation rules are equally useful in discovering PPBs. Among 75 mutation rules that AMOEBA uses for generating query pairs, only 39 and 44 rules can generate query pairs that trigger PPBs in CockroachDB and PostgreSQL, respectively. These subsets include both structure and expression mutation rules.
With respect to impact on performance, we found that while some rules always had the same effect on query performance (i.e., either speed-up or slow-down), others exhibited different effects (i.e., both speed-up and slow-down) in different cases. Since AMOEBA seeks to mutate each query by applying a sequence of rules, these patterns result from the compositional effects of mutation rules (§4.3.2): (1) *contention* between mutation rules, that is, the performance penalty caused by one rule is dampened by the gains from another rule, and vice versa; (2) *enabling effect* of mutation rules, that is, a rule may not affect query performance by itself but may transform the query into a form that makes it suitable for mutation by other rules (with an effect on performance).
We further studied our results to identify the characteristics of the mutation rules that generate query pairs that can reveal PPBs, as these may indicate query characteristics that make queries challenging for a DBMS to optimize and execute. For both DBMSs, we found that these “effective” rules mostly re-arrange or eliminate expensive operators (UNION, GROUP BY, and JOIN) in a given query. Conversely, other rules that manipulate the projection and sorting operators (SELECT and ORDER BY) are less likely to generate query pairs that trigger PPBs. Improving how DBMSs handle these expensive operators would therefore enhance their robustness.
### 5.6 RQ4 — Comparative Analysis
Query equivalence has been leveraged before in related work [16, 27, 40], albeit not to uncover performance bugs. To answer RQ4, we compare AMOEBA to three baselines based on two of these existing approaches: Calcite [19, 48] and TLP [38].
**BASELINE 1: QUERY PAIRS FROM TLP.** TLP [38] constructs equivalent queries to discover *logic bugs* in DBMSs. It is based on the observation that any predicate in SQL evaluates to TRUE, FALSE, or NULL [38]. Accordingly, TLP constructs a mutant query that is equivalent to the base query by (1) dividing the base query into three partition queries, wherein each predicate is constructed based on the value of the overall base query’s predicate and (2) concatenating these partition queries using the UNION operator. Our first baseline consists of 2000 pairs of equivalent queries generated using TLP.
**BASELINE 2: QUERY PAIRS FROM CALCITE.** The Calcite test suite [19, 48] consists of tests manually crafted to ensure the correctness of Calcite’s query transformation rules; each test transforms an input query using a set of transformation rules and examines whether the resulting query is correct. Our second baseline consists of 373 pairs of equivalent queries from the Calcite test suite, in which we use the input query as the base query and the transformed query as the mutant query. This is an appropriate and challenging baseline because (1) the input queries are manually created to cover a wide range of SQL operators, and (2) the mutant queries are generated using the same set of transformation rules as AMOEBA.
**BASELINE 3: QUERY PAIRS FROM CALCITE USING AMOEBA MUTATOR.** Our final baseline consists of 373 pairs of equivalent queries generated by applying AMOEBA’S MUTATOR on the base queries from the Calcite test suite.
**COMPARISON.** After selecting these three baselines, we ran each set of baseline query pairs on our SCOTT-based database and compared the number of PPBs discovered by each baseline to the number discovered by AMOEBA. Our results are shown in Table 4. As the table shows, AMOEBA discovered significantly more PPBs than the baselines in both DBMSs. We next analyze the factors that contribute to the efficacy of AMOEBA.
**Table 4: Comparative Analysis of AMOEBA — Number of PPBs discovered by the set of query pairs in each baseline and by AMOEBA.**
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>CockroachDB</th>
<th>PostgreSQL</th>
</tr>
</thead>
<tbody>
<tr>
<td>Benchmark 1 (TLP)</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>Benchmark 2 (Calcite)</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>Benchmark 3 (Calcite+AMOEBA)</td>
<td>4</td>
<td>6</td>
</tr>
<tr>
<td>AMOEBA</td>
<td>25</td>
<td>14</td>
</tr>
</tbody>
</table>
Both structural and expression mutation rules generate queries that can reveal PPBs. Rules differ in their impact on query performance, and some rules that do not directly affect query performance enable the application of other rules. Rules that transform expensive operators seem to be effective in generating queries that trigger PPBs, which highlights opportunities for improving future versions of the tested DBMSs.
**Mutation Rules and Algorithm.** TLP uses a different set of mutation rules than AMOEBA for generating equivalent queries, and these rules were not designed to detect performance bugs. The queries mutated by TLP consistently take longer to execute than their corresponding base query, with an average slow-down of 17×. This happens because the mutated queries force the DBMS to perform additional operations (i.e., fetching the tuples for each partition query and combining the partial results). Given this inherent overhead, TLP is limited in the kinds and number of PPBs it can find.
While Calcite and AMOEBA use the same set of mutation rules, they differ in the way they leverage these rules. Calcite’s tests transform the base queries using a small set of mutation rules applied in a specific order. Conversely, AMOEBA’s automated mutation strategy exploits all available mutation rules and their compositional effects to generate equivalent query pairs, which increases the chances of generating queries that can discover PPBs (§4.3.2). The effect of this different approach can be observed in the second and third rows of Table 4, for PostgreSQL, where AMOEBA’s mutation strategy discovers two more PPBs than the manual mutation in Calcite.
For both DBMSs considered, AMOEBA discovered considerably more PPBs than the baselines based on Calcite and TLP. Two reasons for this better performance are that AMOEBA (1) uses a rule mutation approach tailored to discovering PPBs and (2) performs a broad exploration of the equivalent query space by leveraging all available rules and their compositional effects.
### 5.7 RQ5 — Analysis of Base Queries
As shown in Table 4, the base queries derived from the Calcite test suite allowed for discovering fewer PPBs than those generated by AMOEBA. To understand why, we compared a set of 2000 base queries generated by AMOEBA to the 373 base queries in Calcite.
**Single-Clause Analysis.** We first examined the SQL single-clause and type coverage in the two sets. We found that the base queries for AMOEBA and Calcite cover almost the same set of SQL types and operators (e.g., join operators and common keywords). Since Calcite’s base queries are nevertheless less effective than AMOEBA’s at revealing PPBs, we inferred that targeting high single-clause coverage when generating queries is not sufficient for detecting PPBs. We then investigated the importance of considering combinations of different SQL clauses.
**Two-Clause Combination Analysis.** We examined two-clause combination coverage for both sets of base queries (e.g., existence of base queries with both GROUP BY and LEFT JOIN). To do this, we constructed a co-occurrence matrix using the base query dataset and SQL clauses supported by both AMOEBA and Calcite. For each base query dataset, we counted the frequency of queries with two SQL clauses, normalized the count by the number of total queries, and plotted the result in a heatmap, shown in Figure 8. Additionally, we identified clause-pair combinations within base queries that triggered performance bugs (§5.4), and highlighted those combinations that are only found by AMOEBA using a * marker in the heatmap. We only present the heatmap for CockroachDB, as the one for PostgreSQL is similar. Because AMOEBA-generated base queries lead to the discovery of more performance discrepancies, we believe these highlighted combinations may represent interesting two-clause combinations that are more likely to trigger PPBs.
The results in the heatmap show that, for both DBMSs considered, the base queries generated by AMOEBA explore a significantly larger space of two-clause combinations than Calcite’s base queries. While the base queries from Calcite’s tests cover 103 and 106 clause-pair combinations for CockroachDB and PostgreSQL, respectively, the base queries from AMOEBA cover 208 and 200 combinations for the two DBMSs.
**Possible Implications for DBMS Developers.** Our results also show that the base queries from Calcite missed a significant number of interesting clause-pair combinations that lead to the discovery of PPBs. Specifically, they missed 58 and 52 clause-pair combinations for CockroachDB and PostgreSQL, respectively. We summarize the characteristics of these clause pairs, as they may reflect query patterns that are challenging for DBMSs to optimize but are neglected by manual testing efforts: (1) filter clauses (i.e., WHERE and HAVING), when combined with expensive operators such as JOIN, GROUP BY, and UNION, can have a significant effect on query execution time; (2) the LIMIT clause can also affect query execution time. Because LIMIT requests a smaller set of results, an optimal plan should either scan only part of a table or terminate expensive operations early. Improving how DBMSs handle these operations and their combinations may enhance the robustness of their performance.
A further reason why AMOEBA outperforms the baselines considered, beyond those discussed in §5.6, is that it covers a wider range of clause-pair combinations that may be challenging for the DBMS to optimize.
### 6 Limitations
In this section, we discuss the main limitations of AMOEBA and present possible ways to address and mitigate them in future work.
**Reliance on an Existing Optimization Framework.** Because AMOEBA is based on the widely-used query optimization framework Calcite, it inherits Calcite’s limitations. First, Calcite only focuses on a widely-supported subset of SQL operators and functions, so AMOEBA focuses on the same subset. However, since Calcite is an extensible framework, it is feasible to add support for additional SQL features. Second, AMOEBA leverages rewrite rules from Calcite. Although these rules are designed to preserve query semantics [23], AMOEBA could generate false positives if, for some reason, Calcite’s rules failed to satisfy this property. To mitigate this threat, AMOEBA generates a bug report only if the two (supposedly) equivalent queries return the same set of output tables, as it is unlikely that non-equivalent queries would produce identical results.
**Performance Bugs vs. Missing Optimizations.** While AMOEBA can identify relevant cases that developers should consider, it is ultimately the developers’ decision whether to support a given optimization. In fact, as discussed in §5.3, the PostgreSQL developers considered most of the reported issues to be missing optimizations and decided to ignore them. In some cases, this happens because finding the optimal execution plan is an NP-hard problem [25], so a DBMS may select a sub-optimal plan to reduce computational cost. In other cases, it is simply a design decision. Nevertheless, we believe it is useful to have an automated tool that can report to developers cases that may need consideration and assist them in pinpointing the root cause of a performance discrepancy.
Related to this point, further analysis of the responses from the PostgreSQL developers made us realize that a possible limitation of this approach is that the automated query generation and mutation may result in queries that look artificial and may therefore alienate the developers rather than compel them to investigate and fix the corresponding issues. To address this problem, in future work, we will investigate ways to simplify the automatically generated query pairs and make them more representative of queries that can be encountered in practice.
Finally, it is also worth noting that a byproduct of this work is that it shows that SQL is not fulfilling one of its key promises—that developers can write queries in any form and leave optimization to the DBMS.
### 7 Related Work
In this section, we present prior work on testing DBMSs with an emphasis on DBMS performance.
**Fuzzing DBMSs.** Given the large state space of possible SQL queries, fuzzing has been applied to find crash bugs and security vulnerabilities in DBMSs [2–4]. Researchers have improved the efficacy of the fuzzing loop by taking the feedback from the tested DBMS into consideration [13, 47]. While AMOEBA is also a fuzzing tool equipped with a feedback mechanism, it differs from prior work in that it focuses on generating semantically equivalent query pairs that trigger different runtime performance.
**Differential and Metamorphic Testing.** To circumvent the oracle problem associated with automated testing, researchers have applied differential and metamorphic testing techniques for discovering logic bugs in DBMSs [18, 31]. RAGS discovers logic bugs by executing the same query on different DBMSs and comparing the results [42]. Waas et al. propose a framework for validating the query optimizer by executing alternative execution plans for the input query and comparing their results [43]. TLP is the state-of-the-art tool for discovering logic bugs in DBMSs using metamorphic testing [38]. However, as discussed in §5.6, TLP is not suitable for discovering performance bugs. Unlike this previous work, AMOEBA is a metamorphic testing technique tailored for discovering performance bugs in DBMSs.
**Performance Testing.** Researchers have presented techniques for finding performance bugs by executing the DBMS on pre-defined workloads and comparing their behavior against performance baselines [26, 35, 45, 46]. These techniques detect performance regressions caused by DBMS upgrades and configuration changes. AMOEBA differs from these approaches in that it does not require a pre-defined baseline for finding performance bugs. Instead, it leverages the tested DBMS’s runtime behaviors on equivalent queries as a performance oracle.
**Optimizer Testing.** Researchers have proposed techniques for testing the query optimizer’s ability to find the best execution plan [21, 24]. Li et al. propose a benchmark for assessing the efficiency of a query optimizer (i.e., optimization time) [30]. Leis et al. investigate the impact of the components of the query optimizer on runtime performance [29]. These efforts are geared towards quantifying the quality of an optimizer. Another line of research focuses on developing frameworks for testing the correctness of query transformation rules in the query optimizer [20, 43]. This work requires in-depth knowledge of the tested query optimizer. AMOEBA complements these efforts by taking a black-box approach, facilitating more extensive testing of optimizers.
### Acknowledgments
This work was partially supported by NSF, under grants CCF-1563991, CCF-0725202, IIS-1850342, and IIS-1908984, DARPA, under contract N66001-21-C-4024, ONR, under contract N00014-18-1-2662, DOE, under contract DE-FOA-0002460, Adobe, the Alibaba Innovative Research Program, Cisco, Facebook, Google, IBM Research, Intel, and Microsoft Research. We thank the developers at CockroachDB and PostgreSQL for their useful feedback on our bug reports.
### 8 Conclusion
We presented AMOEBA, a new approach for detecting performance bugs in DBMSs. The key idea behind AMOEBA is to construct two semantically equivalent queries and then compare the time it takes the DBMS under test to execute the two queries. If the execution time for the two queries is significantly different, that indicates a potential performance bug in the DBMS. In order to boost the effectiveness and efficiency of AMOEBA, we also defined a query generation strategy and two feedback mechanisms that allow it to focus on the subset of the query space that is more likely to uncover performance bugs. To assess our approach, we implemented AMOEBA and evaluated it on two widely-used DBMSs with encouraging results. AMOEBA was able to discover 39 potential performance bugs. Developers already confirmed 6 of these bugs and fixed 5 of them.
In future work, we plan to apply AMOEBA to additional DBMSs and to improve our approach based on our current and future findings. Our current results, for instance, highlight relevant query patterns that DBMSs may have difficulty processing efficiently. We can use this information to improve AMOEBA by adding more rules that focus on such patterns. We will also investigate debugging techniques that can help DBMS developers investigate the root cause of a performance bug after it has been reported.
### References
GENERAL. This Agreement shall be governed by and interpreted in accordance with the laws, other than choice of laws rules, of the State of California, United States of America.
By opening the license agreement folder, you acknowledge that you have read this Agreement, agree to be bound by its terms and conditions, and agree that it is the complete and exclusive statement of the agreement between you and Uniplex, which supersedes any previous proposal or agreement, whether oral or written, relating to the subject matter of this Agreement.
Any representations, modifications, or amendments to this Agreement shall be of no force or effect unless in writing and signed by an authorized manager of Uniplex.
Either party’s failure or delay in enforcing any provision hereof will not waive that party’s rights.
The remainder of this Agreement shall remain valid and enforceable according to its terms if any provision of this Agreement is found invalid or unenforceable pursuant to any judicial decree or otherwise.
Uniplex may assign or transfer its rights and obligations under this Agreement without your prior consent. You may not transfer your rights under this Agreement to another party without prior consent in writing and signed by an authorized manager of Uniplex.
The Informix products contained in this Uniplex product are licensed for use only with the Uniplex product.
U.S. Government Restricted Rights Notice
Use, duplication, or disclosure by the Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013. Uniplex Software, Inc., 715 Sutter St., Folsom, California 95630.
Copyright Notices
Copyright © 1981-2000 Uniplex Software, Inc. Unpublished. All rights reserved. Software provided pursuant to license. Use, copy, and disclosure restricted by license agreement.
IXI Deskterm copyright © 1988-1993 The Santa Cruz Operation, Inc. Word for Word copyright © 1986-1998 Inso Corporation. All rights reserved. Multilingual spelling verification and correction program and dictionaries copyright © 1984-1997 Soft-Art, Inc. All rights reserved. Portions derived from the mimelite library written by Gisle Hannmyr (gisle@oslonett.no) and used with permission. Portion copyright © 1981-1993 Informix Software, Inc.
Restricted Rights Legend
Use, duplication, or disclosure by the U.S. Government or other government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the rights in Technical Data and Computer Software clause at DFARS 252.227-7013. Uniplex Software, Inc., 715 Sutter Street, Folsom, California 95630. Computer software and related documentation shall not be delivered to any branch, office, department, agency, or other component of the U.S. Government unless accompanied by this Restricted Rights Legend or alternatively, unless licensed expressly to the U.S. Government pursuant to FAR 52.227-19, unpublished—rights reserved under U.S. copyright laws.
Notice
The information in this document is subject to change without notice. Uniplex Software, Inc. makes no warranty of any kind in regard to the contents of this document, including, but not limited to, any implied warranties of merchantability or fitness for a particular purpose. Uniplex Software, Inc. shall not be liable for errors in this document or for incidental or consequential damages in connection with the furnishing, performance, or use of it.
**Use This Information When Reordering**
Software : Uniplex Business Software V9.10
Language Version : American/British English
Operating System : Unix
Product Name : Spreadsheet Converter Guide
Document Revision : 1.1 (mxw)
Printing Date : 01 Nov 2000
**Additional Information**
This file is supplied in Adobe PDF format on the CD-ROM distribution of Uniplex Business Software Version 9.10 in the /DOC directory. It is also available from our web site at:
http://www.uniplex.com/ubs/download.html
Check the Uniplex web site for updates to Uniplex documentation and software.
Please e-mail us if you have any comments or corrections regarding Uniplex documentation:
documentation@uniplex.com
stating the Product Code and the Document Revision shown above.
This document was produced using Uniplex Business Software Version 9.10, printed to a PostScript file using the 'docscript' printer definition and then converted to PDF via Adobe Distiller v4.0.
**Licensing Notice**
An end-user license and unique license key must accompany each copy of Uniplex software. The Uniplex software you are using may be pirated if you have not received an end-user license and an official Uniplex license key package. Uniplex Software will prosecute any company or individual found to be improperly using Uniplex software.
Spreadsheet Converter Guide
Table of Contents
Introduction
Conversion Principles
File Formats
Supported Formats
Input Format Recognition
Changes in Source or Target Formats
What Gets Converted
Worksheet Extent
Column Widths
Named Ranges
Row/Col Titles
Protection
Formats
Values
Text
Formulae
Mathematical Operators
Logical Operators
Stats Functions
If Function
Financial Functions
Error Handling Functions
Mathematical Functions
Trigonometrical Functions
String Functions
Date Functions
Logical Functions
Reference Functions
External Functions
Unsupported Functions
What Doesn’t Get Converted
Unsupported Functions
Headers
Links
Database
Graphs
Printing
Macros
Introduction
This document provides details of the conversion performed by the Uniplex Spreadsheet Converter. This is a one-way converter from Uniplex ucalc worksheets to Lotus 123 .WK3 or .WK4 format. Details of the command line options are given elsewhere (in the Technical Guide).
Conversion Principles
The objective of the converter is to minimise the effort required to migrate Uniplex worksheets to other spreadsheet packages. The bulk of the effort in converting a worksheet relates to the conversion of the data, formulae and formatting. Global settings such as cursor position or window layout are one-off jobs and require very much less effort. Accordingly, less emphasis is placed on converting these kinds of features.
Particular care is taken when converting formulae to preserve mathematical integrity. For example, the @if function may be modified during conversion in order to preserve the correct logical test results. Where Uniplex-specific functions cannot be converted, a warning message is logged and an @ERR result appears in the converted worksheet.
Some features do not map well from Uniplex to Lotus 123, for example graphs or printing details. In these cases no conversion is done. Since the user would almost certainly need to modify the resulting worksheet it makes little difference if they start from default settings or inappropriately converted settings.
There are many advanced features in Lotus 123 and other PC spreadsheets that cannot be converted to Uniplex worksheets. Only simple worksheets can be imported successfully and Uniplex already supplies a converter to do this job. This converter does not convert from Lotus 123 to Uniplex ucalc.
File Formats
The conversion is one way, from Uniplex spreadsheet save format to Lotus 123 worksheet format.
Supported Formats
Input formats: Uniplex V7, V8 binary save format and PSF.
Output formats: Lotus 123 .WK3 or .WK4.
Notes:
1. Some clients have successfully converted Uniplex V6 binary save format but this has not been fully tested and is not officially supported.
2. There is an assumption for binary save format that native C int is 32 bits - byte ordering and structure alignments are handled automatically. Early Uniplex DOS versions have 16 bit integers - these files will not convert and generate a corrupt input file error code.
3. The .WK3 format is supported by most popular PC spreadsheet packages and provides a conversion path from Uniplex to Excel, Quatro Pro or SuperCalc.
Input Format Recognition
The input format is automatically recognised and conversion set for binary or PSF format.
Changes in Source or Target Formats
The conversion of binary save format relies on exact details of the binary file layout. Any modification to the binary save format, except the introduction of new functions, will require a modification to the converter.
Addition of new functions requires modification to the converter if they are to be converted correctly. Without modification the function is logged as unsupported and converts to @unknown, producing an ERR result that requires a manual correction.
PSF format is more adaptable - the addition of completely new record types is not a problem, they are simply ignored by the converter. Modification of existing record formats will require changes to the converter, however since this would render PSF non backward compatible it should not occur.
Lotus 123 .WK3 and .WK4 formats are industry standard and will not be changed. Should Lotus 123 be upgraded and a .WK5 format produced it is certain that the new product would be able to load .WK3 or .WK4 worksheets. In this case there would be little point in upgrading the converter.
What Gets Converted
This section details what information gets successfully converted. During conversion a log file is produced containing any errors and warnings about unconverted information.
Worksheet Extent
The extent of cells in the worksheet is converted as part of the header information. Uniplex can support more columns or rows than are permitted in a Lotus 123 worksheet. In this case an error is generated and the worksheet will not convert until it has been modified to fit into 256 columns.
Since Uniplex worksheets are only a single sheet all information is placed in sheet A of the .WK3 or .WK4 file.
Column Widths
Default and individual column widths are converted. The Uniplex spreadsheet has an inter-column spacing of two characters which is not present in Lotus 123. To compensate, the column widths are increased by 2 during conversion.
Zero width columns are converted to hidden columns and have the default column width when revealed.
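The width rules above can be sketched as follows (Python for illustration only; the function name and return shape are assumptions, not the converter's actual interface):

```python
def convert_column_width(uniplex_width):
    """Map a Uniplex column width to a Lotus 123 column setting (sketch).

    Uniplex adds two characters of inter-column spacing that Lotus 123
    does not have, so visible widths grow by 2. Zero-width columns become
    hidden columns, which reappear at the default width when revealed.
    """
    if uniplex_width == 0:
        return {"hidden": True}                      # width reverts to default
    return {"hidden": False, "width": uniplex_width + 2}

print(convert_column_width(10))   # {'hidden': False, 'width': 12}
print(convert_column_width(0))    # {'hidden': True}
```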
Named Ranges
Uniplex refers to cell range naming as range labels. These are converted to Lotus 123 named ranges. Some information is lost in the conversion: named ranges have a maximum length of 16 characters, so longer labels are truncated and a warning is logged. Uniplex allows labels to refer to absolute or relative addresses; in Lotus 123 a named range is always relative and may be designated absolute when used. After conversion all ranges are relative.
Uniplex uses labels for input and display only. The actual range reference is stored in a formula and converted to a label for display. Lotus 123 stores the named range reference in the formula, not the range it refers to. During formula conversion, named ranges are substituted with explicit ranges where appropriate.
**Row/Col Titles**
Uniplex spreadsheet supports an off sheet set of row and column titles. These are awkward to use and many worksheets do not contain any. However if they are present it is useful to convert them since there could be a lot of work involved in re-entering them afterwards.
Optionally row and column titles can be converted into text in column A and row 1 respectively. This causes a displacement of the whole worksheet by one row and one column. During conversion all ranges and range references are adjusted to allow for this displacement.
By default this conversion does not occur since in many cases it would not be considered good practice to modify the cell locations and would cause problems for linked worksheets.
**Protection**
Uniplex refers to cell protection as locking. By default cells are created unlocked and the global protection flag is set false. In Lotus 123 cells are created locked and the global protection flag is false.
By default the conversion process protects all cells, as a consequence explicitly unprotected cells get protected. There is an option to preserve protection exactly as defined in the source worksheet.
**Formats**
Global and cell formats including blank, or empty formatted cells, are converted. However this conversion is not perfect and is done following the rules given here.
**Decimal places**: both Uniplex and Lotus 123 permit decimal formats of 0-15 decimal places. Explicit setting of decimal places is converted without problems. In addition, Uniplex displays an unspecified decimal format as up to 6 decimal places with trailing zeros stripped; Lotus 123 shows these as whole numbers. So if the value is not a whole number then a format of 2 decimal places is specified.
**Scientific:** Both Uniplex and Lotus 123 support scientific or exponential format and conversion is done without problems.
**Comma:** The use of a comma in numbers - for example 1,000.00 - is supported by both Uniplex and Lotus 123. This converts without problems. Details of numeric display are configured in Lotus 123 by use of the Windows International settings.
**Alignment:** Lotus 123 does not support left, right or centre justification via formats. Only text cells may have alignment, and this is specified by the prefix character '*' or '^'. During conversion text cells have the appropriate alignment prefix added. Any alignment of numeric cells is lost.
**Currency:** Uniplex uses both hard coded format flags and user defined formats for currency symbols. By contrast Lotus 123 has a single currency format which is determined by setup information. By default the Pound format bit or user format 6 (Sterling) is converted to the Lotus 123 currency format. The currency symbol displayed by Lotus 123 is configured by Windows International settings.
This can be changed by a run time option to select Dollars or other user defined currency formats. Any other Uniplex currency formats are not converted.
**Dates:** Uniplex date formats are configurable, the conversion assumes default definitions are in use and converts as follows:
<table>
<thead>
<tr>
<th>datefmt</th>
<th>Conversion</th>
</tr>
</thead>
<tbody>
<tr>
<td>8</td>
<td>day-month-year</td>
</tr>
<tr>
<td>9</td>
<td>day-month</td>
</tr>
<tr>
<td>10</td>
<td>month-year</td>
</tr>
<tr>
<td>11</td>
<td>International long</td>
</tr>
<tr>
<td>12</td>
<td>International short</td>
</tr>
<tr>
<td>other</td>
<td>International long</td>
</tr>
</tbody>
</table>
**Effects:** Uniplex spreadsheet allows a restricted range of Uniplex effects to be assigned to cells; by default these are A, H, C, D, E and I, converted as follows:
- **A** bold
- **H** large & bold
- **C** underline
- **D** underline
- **E** underline & bold
- **I** italic
Notes: this information is not stored in the .WK3 but in a .FM3 file which is also produced during conversion. When loading a worksheet the .FM3 file must be in the same directory as the .WK3 - otherwise the effects information will be lost.
**Percent:** Both Uniplex and Lotus 123 support a percentage format and conversion is done without problems.
**User formats:** Uniplex has user definable formats which are mostly used for currency definitions. In practice these are very rarely changed, and if they are changed then it is probably to something that cannot be converted. The conversion assumes the default specification and converts 3 to highlight negative numbers and 5 to hidden cells.
**Values**
Lotus 123 uses floating point numbers with a range greater than or equal to 64 bit double precision C floating point numbers. So there are no range problems during conversion.
All numeric values are converted unmodified except when formatted as a date. Lotus 123 date values are 1 greater than Uniplex. For example:
- Uniplex 1-Aug-95 is 34911
- Lotus 123 1-Aug-95 is 34912
The difference results from Uniplex using 28 days for February 1900, whilst Lotus uses 29 days.
To compensate for this difference any constant value - that is not the result of a calculation - that has a date format is incremented by 1 during conversion.
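The adjustment above can be sketched as follows (Python for illustration; the helper name and flags are assumptions, not the converter's actual interface):

```python
def convert_value(value, is_date_format, is_constant):
    """Adjust a cell value for Lotus 123 date serials (sketch).

    Uniplex treats February 1900 as 28 days, Lotus 123 as 29, so date
    serials differ by one. Only constant values carrying a date format
    are incremented; calculation results and non-dates pass through.
    """
    if is_date_format and is_constant:
        return value + 1
    return value

# 1-Aug-95 is serial 34911 in Uniplex and 34912 in Lotus 123:
print(convert_value(34911, True, True))    # 34912
print(convert_value(34911, True, False))   # 34911 (formula result, unchanged)
```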
Text
Text strings are converted from Uniplex to Lotus 123 without problems. Uniplex has a maximum string length of 256 characters which can be supported by Lotus 123.
Uniplex uses the ISO 8859/1 extended character set. Lotus uses its own internal encoding system LMBCS. During conversion 8 bit characters are converted between the two character sets and all ISO 8859/1 characters are supported.
Uniplex worksheets may contain embedded Uniplex effects in strings. These effects, except for graphics, are stripped during conversion.
Line draw and graphics characters are formed in Uniplex by using a special effect '['. During conversion line draw characters are converted as - | and + characters because standard Windows fonts do not have line draw characters.
Formulae
Conversion of formulae is the main work of the converter.
Mathematical Operators
All maths operators (+ - * /) are converted except for %. Natural operator precedence and the use of brackets are preserved.

Uniplex has a special % operator which is not present in Lotus 123. The percent operator is converted into division by 100, for example:
\[ B44 + B45\% \rightarrow B44 + B45 / 100 \]
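As a minimal sketch of this rewrite (Python for illustration; the real converter works on parsed formulae, whereas this regex version only handles a simple cell reference or number followed by %):

```python
import re

def rewrite_percent(formula):
    """Rewrite Uniplex's postfix % operator as division by 100 (sketch).

    Matches a cell reference (e.g. B45) or a numeric literal followed
    by '%' and replaces it with 'operand / 100'.
    """
    return re.sub(r'([A-Z]+\d+|\d+(?:\.\d+)?)%', r'\1 / 100', formula)

print(rewrite_percent("B44 + B45%"))   # B44 + B45 / 100
```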
Logical Operators
All logical operators (== <> > < >= <=) are converted. In addition Uniplex logical functions NOT(), AND() & OR() are converted to Lotus 123 logical operators #NOT#, #AND# and #OR# respectively.
Stats Functions
Uniplex statistical functions sum(), avg(), min(), max(), count() and stdev() are all converted to Lotus 123 equivalents.
If Function
Uniplex supports 2 and 3 result if functions as follows:
\[
\begin{align*}
@if ( \text{expression}, \text{result if expression > 0}, \text{result if expression <= 0} ) \\
@if ( \text{expression}, \text{result if expression > 0}, \text{result if expression = 0}, \text{result if expression < 0} )
\end{align*}
\]
Lotus 123 has a 2 result if function as follows:
\[
@if ( \text{expression}, \text{result if expression <> 0}, \text{result if expression = 0} )
\]
So conversion to maintain the correct logic is a little tricky as follows:
\[@\text{if} (A1 > B1, 1, 2)\] converts as expected to \[@\text{IF} (A1 > B1, 1, 2)\]
However if the expression is not a logical operation then the conversion adds in the comparison implicit in Uniplex:
\[@\text{if} (A1, 1, 2)\] converts to \[@\text{IF} (A1 > 0, 1, 2)\]
Three result if statements are converted as follows:
\[@\text{if} (A1, 1, 2, 3)\] converts to \[@\text{IF} (A1 > 0, 1, @\text{if} (A1 = 0, 2, 3))\]
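The three rules above can be sketched as follows (Python for illustration; the string-based representation and the separate "is the test already a comparison?" flag are simplifying assumptions):

```python
def convert_if(cond, args, cond_is_logical):
    """Rewrite a Uniplex @if into Lotus 123 form (sketch).

    cond            -- the test expression, e.g. "A1" or "A1 > B1"
    args            -- two or three result expressions
    cond_is_logical -- True if cond already uses a comparison operator
    """
    # Uniplex tests cond > 0 while Lotus tests cond <> 0, so a bare
    # expression needs the implicit comparison made explicit.
    test = cond if cond_is_logical else f"{cond} > 0"
    if len(args) == 2:
        return f"@IF({test}, {args[0]}, {args[1]})"
    # Three-result form: nest a second @IF for the zero/negative split.
    return f"@IF({test}, {args[0]}, @IF({cond} = 0, {args[1]}, {args[2]}))"

print(convert_if("A1 > B1", ["1", "2"], True))   # @IF(A1 > B1, 1, 2)
print(convert_if("A1", ["1", "2"], False))       # @IF(A1 > 0, 1, 2)
print(convert_if("A1", ["1", "2", "3"], False))  # @IF(A1 > 0, 1, @IF(A1 = 0, 2, 3))
```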
**Financial Functions**
Where possible Uniplex financial functions are converted to Lotus 123 equivalents - in some cases it is necessary to substitute the equivalent formulae since no equivalent function exists:
\[@\text{rate} (A, B, C)\] converts to \[@\text{RATE} (B, A, C)\] - note args A & B swapped
\[@\text{fv} (A, B, C)\] converts to \[@\text{FV}(A, B, C)\]
\[@\text{pv} (A, B, C)\] converts to \[@\text{PV}(A, B, C)\]
\[@\text{npv} (A, \text{range})\] converts to \[@\text{NPV}(A, \text{range})\]
\[@\text{irr} (A, \text{range})\] converts to \[@\text{IRR}(A, \text{range})\]
\[@\text{pmt} (A, B, C)\] converts to \[@\text{PMT}(A, B, C)\]
\[@\text{spv} (A, B, C)\] converts to \(A / (1 + B)^C\)
\[@\text{sfv} (A, B, C)\] converts to \(A * (1 + B)^C\)
\[@\text{sink} (A, B, C)\] converts to \(A * B / ((1 + B)^C - 1)\)
\[@\text{period} (A, B, C)\] converts to \(\log_{10}(B / A) / \log_{10}(1 + C)\)
Optional multiple arg forms of @npv and @irr cannot be converted:
\[@\text{npv} (A, B, C, D, \ldots)\] warning \[@\text{npv}(A, B, C, D, \ldots)\] - result ERR ...
\[@\text{irr} (A, B, C, D, \ldots)\] warning \[@\text{irr}(A, B, C, D, \ldots)\] - result ERR ...
Error Handling Functions
Error handling functions @err, @na, @iserr() and @isna() convert to Lotus 123 equivalents @ERR, @NA, @ISERR() and @ISNA() respectively.
Mathematical Functions
All the simple maths functions convert except for @div, which is replaced by an equivalent formula. The following list summarises the mathematical functions handled:

- @int (A)
- @abs (A)
- @div (A, B)
- @mod (A, B)
- @root (A)
- @exp (A)
- @log (A)
- @log10 (A)
Trigonometrical Functions

Lotus 123 does not support all Uniplex trigonometrical functions, so some are replaced by equivalent formulae:
<table>
<thead>
<tr>
<th>Function</th>
<th>Conversion</th>
</tr>
</thead>
<tbody>
<tr>
<td>@PI</td>
<td>@PI</td>
</tr>
<tr>
<td>@deg (A)</td>
<td>A * 180 / @PI</td>
</tr>
<tr>
<td>@rad (A)</td>
<td>A * @PI / 180</td>
</tr>
<tr>
<td>@sin (A)</td>
<td>@SIN (A)</td>
</tr>
<tr>
<td>@cos (A)</td>
<td>@COS (A)</td>
</tr>
<tr>
<td>@tan (A)</td>
<td>@TAN (A)</td>
</tr>
<tr>
<td>@asin (A)</td>
<td>@ASIN (A)</td>
</tr>
<tr>
<td>@acos (A)</td>
<td>@ACOS (A)</td>
</tr>
<tr>
<td>@atan (A)</td>
<td>@ATAN (A)</td>
</tr>
<tr>
<td>@atan2 (A, B)</td>
<td>@ATAN2 (A, B)</td>
</tr>
</tbody>
</table>
String Functions
All Uniplex string functions can be converted although the Lotus 123 equivalents do not usually share the same function name:
- `@fix ( A , B )` converts to `@STRING ( A , B )`
- `@str ( A )` converts to `@STRING ( A , 2 )`
- `@cmp ( A , B )` converts to `@EXACT ( A , B )`
- `@rpt ( A , B )` converts to `@REPEAT ( A , @SUM ( B ) )`
- `@lit ( A )` converts to `@CELL ( "address", A )`
- `@len ( A )` converts to `@LENGTH ( A )`
- `@mid ( A , B , C )` converts to `@MID ( A , B - 1 , C )`
- `@val ( A )` converts to `@VALUE ( A )`
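The B - 1 in the @mid conversion reflects 1-based versus 0-based start positions. A minimal sketch of why the adjustment preserves results (Python for illustration; the semantics beyond the start-position offset are assumed):

```python
def uniplex_mid(s, start, length):
    # Uniplex @mid counts character positions from 1 (assumed).
    return s[start - 1:start - 1 + length]

def lotus_mid(s, start, length):
    # Lotus 123 @MID counts character positions from 0 (assumed).
    return s[start:start + length]

# The conversion @mid(A, B, C) -> @MID(A, B - 1, C) preserves the result:
print(uniplex_mid("spreadsheet", 7, 5))    # sheet
print(lotus_mid("spreadsheet", 7 - 1, 5))  # sheet
```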
Date Functions
All simple date functions `@TODAY`, `@day`, `@month`, `@year` and `@date` convert to Lotus 123 functions of the same name but `@day_mon` and `@date_math` have no equivalent functions - see unsupported functions.
Logical Functions
Except for the simplest logical functions `@TRUE` and `@FALSE` which convert unchanged, all other logical functions are more complex to convert. Uniplex `@AND` `@OR` and `@NOT` functions are converted to operators in Lotus 123 as shown below:
- `@AND ( A , B )` converts to `A #AND# B`
- `@OR ( A , B )` converts to `A #OR# B`
- `@NOT ( A )` converts to `#NOT# A`
- `@empty ( A )` converts to `@CELL ( "type", A ) = "b"`
- `@defcell ( A )` converts to `@CELL ( "type", A ) = "v"`
- `@datacell ( A )` converts to `@ISNUMBER ( A )`
- `@textcell ( A )` converts to `@ISSTRING ( A )`
Reference Functions
Reference functions @choose and @index convert but there is no general support for @lookup in Lotus 123. Some @lookup functions will convert to @HLOOKUP or @VLOOKUP but it depends on the ranges used for the lookup.
@ROW converts to @CELL ("row", FC)
@COL converts to @CELL ("col", FC)
@choose (A, B, C, ...) converts to @CHOOSE (A, B, C, ...)
@index (A, B, C) converts to @INDEX (A..IV8912, C, B)
@lookup (A, R1, R2) converts to @HLOOKUP (A, R, O)
or @lookup (A, R1, R2) converts to @VLOOKUP (A, R, O)
For conversion, ranges R1 and R2 must have specific properties. For conversion to @VLOOKUP, the Uniplex @lookup R1 must be within a single column and R2 must be a matching range in a column to the right. So for example:
@lookup("Yes", A5..A24, H5..H24) converts to @VLOOKUP("Yes", A5..H24, 7)
but @lookup("Yes", H5..H24, A5..A24) will not convert since range2 is left of range1
and @lookup("Yes", A3..A10, F7..F14) will not convert since range2 is lower than range1
For conversion to @HLOOKUP ranges R1 and R2 must be in matching rows of cells where R1 is above R2.
Often @lookup can be made to convert by rearranging the rows and columns used in the lookup tables.
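The @VLOOKUP convertibility rules above can be sketched as follows (Python for illustration; modelling a range as a `(col, row_top, col, row_bottom)` tuple with 0-based column numbers, A = 0, is an assumption made here, not the converter's representation):

```python
def vlookup_conversion(r1, r2):
    """Return the @VLOOKUP column offset if @lookup(key, r1, r2) is
    convertible, else None (sketch of the rules in the text)."""
    c1a, r1top, c1b, r1bot = r1
    c2a, r2top, c2b, r2bot = r2
    single_columns = c1a == c1b and c2a == c2b       # each range one column
    matching_rows = (r1top, r1bot) == (r2top, r2bot)  # same rows
    to_the_right = c2a > c1a                          # R2 right of R1
    if single_columns and matching_rows and to_the_right:
        return c2a - c1a                              # column offset
    return None

# @lookup("Yes", A5..A24, H5..H24) -> @VLOOKUP("Yes", A5..H24, 7)
print(vlookup_conversion((0, 5, 0, 24), (7, 5, 7, 24)))   # 7
# R2 left of R1: not convertible
print(vlookup_conversion((7, 5, 7, 24), (0, 5, 0, 24)))   # None
# R2 shifted down relative to R1: not convertible
print(vlookup_conversion((0, 3, 0, 10), (5, 7, 5, 14)))   # None
```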
External Functions
Some simple off sheet links will convert but in general it will be necessary to replace links that retrieve ranges of cells with Lotus 123 macros that perform equivalent functions. Other external functions and other links, such as database access, are not converted.
An off sheet reference to a single cell will convert, but a range will not, since Uniplex and Lotus 123 use off sheet ranges in completely different ways.
@link ("get A1 from other.ss") converts to <<other.WK3>>A:A1..A:A1
@link ("get r1c1 from other.ss") converts to <<other.WK3>>A:A1..A:A1
@link ("get name from other.ss") converts to <<other.WK3>>A:B7..A:B7
where name is a label for cell B7
Unsupported Functions
These are covered in the next section on "What doesn't get converted".
What Doesn’t Get Converted
This section lists the functions and features that do not get converted and explains why. Also see the first section on “conversion principles”, which explains the assumptions behind what is and what is not worthwhile converting. Sometimes it is better to do no conversion at all than to convert in an unreliable manner, especially when it may affect the calculations in the resulting worksheet.
Unsupported Functions
Some functions have no equivalent in Lotus 123. When these are encountered during a conversion they are converted to display the original function but generate an ERR value. A warning is placed in the conversion log file. The following functions will not convert:
- @pipe ("SQL statement")
- @link ("paste db SQL statement")
- @link ("get range from external.ss")
- @sh ("command")
- @rsh ("command")
- @irr (A, B, C, D, ...)
- @npv (A, B, C, D, ...)
Note: an alternative form of these functions @irr (A, range) and @npv (A, range) convert OK.
- @lookup (A, R1, R2)
- @where (range, X)
- @eval (X)
- @day_mon (A) no equivalent function
- @date_math (A) no equivalent function
Headers
Uniplex supports two lines of worksheet header - there is no equivalent of this in Lotus 123 and these are ignored during conversion.
Links
As explained earlier, only links to a single cell will convert. Off sheet range references have a semantically different meaning and cannot be translated. This is because Lotus 123 treats an off sheet reference range as an ordinary range reference. For example @SUM(<<other.wk3>>A:A1..A:A4) returns the sum of A1..A4 in other.wk3, whereas Uniplex @link("get A1..A4 from other.ss") retrieves a range of cells from other.ss and places the values into an equivalent range in the current worksheet.
Database
The Uniplex database interface uses SQL to retrieve information from Uniplex DataLink databases. Lotus 123 uses DataLens and ODBC technology to access data. There is no simple way to convert between the two methods, and it is likely that a worksheet will need to be redesigned to some extent to use the Windows database access methods.
Graphs
The Uniplex graphics facilities are very limited and do not operate in the same manner as Lotus 123. There is no simple way to translate between the two, and graphs need to be redrawn manually in Lotus 123. Fortunately, Uniplex graphics are rarely used.
Printing
Uniplex print setup is limited and operates in a different manner to Lotus 123, and Unix and Windows printing are very different. Beta testing of print setup conversion resulted in inappropriate settings that had to be reset manually. As a result, any attempt to convert print setup was removed, since it is easier to set print information manually from default values than to correct inappropriate translations.
Macros
The Uniplex macro language is not used extensively. Conversion is very complex and unlikely to be 100% reliable. Even detecting which cells contain macros is difficult, since a macro is just text in a cell. No attempt is made to translate macros, since the translation would cause as many problems as it solved.
Automated Compliance
Gaia-X Institute position paper
07 December 2022
Executive summary
Compliance related to digital sovereignty is a central concern of Gaia-X. The purpose of this paper is to contribute towards a conceptual and terminological framework for developing automated compliance in the context of Gaia-X. Compliance here refers to conformance of a system to a set of rules and regulations, or to conformance with agreements among parties. Automated compliance, as understood here, refers to technologies (algorithms, software, hardware) which can assist in achieving, checking, or enforcing compliance of a system. Compliance automation is subject to inherent limitations, both for technological reasons and for reasons of jurisprudence and fundamental legal principles. Still, raising the level of compliance automation as far as possible is an essential tool for reaching the monumental goals of Gaia-X, for reasons of efficiency, scalability, and reliability. Compliance automation technology should provide information and artefacts (for example: facts, logs, proofs, certificates, evidence) of legal relevance. In addition to the development of compliance automation technology, contributing towards a better understanding of the interface between technological and legal notions of compliance is a central area of concern to automated compliance. There is a need for increased R&D devoted to the area of compliance and its automation, both in order to raise the level of automation and in order to understand possible gaps between legal and regulatory systems on the one hand, and means of achieving and enforcing compliance on the other. The notion of Labels provides an essential instrument for extending compliance from a standard core.
Compliance
The purpose of this paper is to contribute towards a conceptual and terminological framework for developing automated compliance in the context of Gaia-X.
Compliance, as understood in this document, refers either (in a narrower sense) to regulatory compliance, that is, conforming to a rule, such as a specification, policy, standard or law\(^1\), or (in a more general sense) to compliance with agreements between parties, for example, service level agreements between stakeholders in the market. Of special concern to Gaia-X is compliance with respect to regulations and agreements related to digital sovereignty\(^2\), which is at the core of the mission of Gaia-X in the context of the European Data Strategy\(^3\).
Automated compliance as understood here refers to technologies (algorithms, software, hardware) which can assist either in achieving compliance in a system, in checking compliance of a system, or in enforcing compliance of a system. Automated compliance may be regarded as a form of regtech, regulatory technology\(^4\), directed at data sovereignty and the areas of concern to Gaia-X.
Compliance automation involves dealing with some of the most challenging problems in computation, and understanding the design space of compliance automation is complicated, requiring a systematic and scientifically informed approach. As will be explained in more detail in this paper, there are limitations to what can be automated in this area, both for inherent (ultimately, mathematical) reasons and for reasons of law. On one hand, it is a consequence of basic results of computer science that not all properties of programs or systems can be automatically verified. On the other hand, no level of automation of compliance can replace jurisprudence or the human factor essentially involved in the legal dimension of compliance. Still, developing compliance automation as far as possible is an essential tool for implementing compliance in practice.
The need for compliance automation
Regulation with respect to digital sovereignty, including the EU Data Governance Act\(^5\) and the EU Data Act\(^6\), is increasing at a rapid pace in response to societal concerns that are central to European values and to Gaia-X. Increasing levels of regulation lead to the need for corresponding procedures for implementing (achieving, checking, enforcing) regulatory compliance. In order to implement compliance realistically, it is increasingly necessary to develop tools that automate compliance implementation, for the benefit of all stakeholders.
Legal and technical notions of compliance
In addition to inherent technical challenges for automating compliance, a further fundamental challenge arises from the necessity to consistently understand the notion of compliance both from a legal perspective and from a technical perspective: Automated compliance is a tool to help achieve, check, or enforce compliance properties of technical systems. These properties ultimately are defined by or follow from legal and regulatory systems. Already at the terminological level, it can be challenging to talk about both aspects at the same time without risk of misunderstanding. For example, the term “procedure” means something different in law and in computer science (although the meanings might be related, which may only increase risk of misunderstanding). This document is written from a mostly technical (computer science) perspective. Further work is necessary to clarify the interface between legal and technical notions of compliance.
Compliance by design, ex-ante, ex-post
Compliance by design refers to modes of construction of systems or components towards achieving compliance. For example, a smart metering system may use only sensor technology that has been approved *a priori* for the purpose. Or, a software system may use cryptographic components that have been certified for certain security and privacy levels. The distinction between *ex-ante* and *ex-post* refers to different modes of regulation\(^7\). For example, in the area of regulation of digital markets\(^8\), ex-ante regulation may refer to policies adopted to prevent anti-competitive behaviour, whereas ex-post may refer to policies for enforcement or punitive action once anti-competitive behaviour has occurred. In the context of compliance, broadly speaking, *ex-ante compliance* stipulates policy and behaviour necessary to achieve compliance, whereas *ex-post compliance* refers to regulations dealing with cases of non-compliance.

\(^{5}\) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020PC0767
\(^{6}\) https://ec.europa.eu/commission/presscorner/detail/en/ip_22_1113
An example: smart metering
A smart meter\textsuperscript{9} is an electronic device that records information about consumption of resources (for example, gas, water, or electric energy). For example, the meter could record voltage levels, current, and power factor, and it could record dates and time intervals of such measurements, which typically happen in near real-time. Smart meters may communicate information to consumers (e.g., for understanding consumption patterns), and to suppliers (e.g., for system monitoring and customer billing). Smart meters enable two-way communication between the consumer and the supplier, in an automated manner provided by the smart meter.
The deployment of smart metering systems is currently receiving renewed attention due to large-scale societal factors and goals, including transition of energy away from fossil resources and towards reduction of CO\textsubscript{2} emissions, digitalisation towards intelligent energy systems, reducing resource (energy) consumption to cope with crises of shortage, reducing economic or political dependence on certain countries exporting energy resources.
From a computing perspective, smart metering systems are data-intensive, distributed (and in part cloud-based) multi-stakeholder systems, which are subject to regulation. Smart metering systems illustrate the need to regulate complex, data-intensive systems in order to mediate possibly conflicting interests among stakeholders, for example privacy concerns of consumers, (cost-)efficiency of providers, and policy goals of public bodies. Thus, in 2012 data protection issues were the subject of regulatory concern within the European Commission in preparation for the roll-out of smart metering systems\(^{10}\), and smart metering systems have been subjected to specific regulatory rules, for example\(^{11}\):
- Without explicit approval by the consumer, all data-gathering and use is restricted to the bare minimum required for the energy system to work.
- The intervals at which the meter is read have been designed to be long enough to prevent any conclusions being drawn about user habits.

\(^{11}\) This example is from the German Federal Ministry of Economic Affairs and Energy: *Smart Metering – Subject to Stringent Data Protection and Security Rules*.
- No data will be transmitted unless it has been anonymised, pseudonymised, or aggregated.
- Data will be processed in situ, right on the consumer’s premises.
- Energy data will be passed on to as few parties as possible.
- It will be mandatory for data to be deleted within specified time periods.
- Consumers will be able to monitor and verify all communications and processing steps at all times.
- It will be easy for consumers to enforce their right to object and to data being deleted or corrected.
- Consumers will still be able to choose the tariff that suits them best.
There are many concrete scenarios of interest to citizens that can be brought about by smart metering, for example:
- If you claim a reduction of your invoice because you have not used your washing machine during peak hours, only a smart meter measuring your electricity consumption every half hour can confirm that you are compliant and provide pre-constituted legal proof of it.
- If you are prepared to take part in demand response (load shedding), you can voluntarily reduce your subscribed power through the smart meter, for instance from 9 kVA to 3 kVA, to get a discount; you will then have to arbitrate between air conditioning and electric vehicle charging, otherwise your circuit breaker will cut you off.
Notice that the example not only illustrates the need for compliance in order to impose limitations on the use of data obtained by smart metering systems. It may also be used to illustrate the need for compliance in order to enable the use of such data in ways which may be deemed desirable. For example, one might consider using smart metering for incentivising responsible ecological behaviour of citizens, by enabling discounts to customers who contribute to reducing energy consumption. Compliance of the smart metering system according to regulation of such a scenario would be needed, both as a matter of policy and from the standpoint of civic acceptance.
Compliance and automation
*Distinction between semantic and procedural notions of compliance properties of systems. Inherent limits of automation. Automated compliance as a “monumental goal”. The interface between technological and legal notions of compliance.*
Semantic versus procedural notions of compliance and limits of automation
In the context of computational (in particular, software-based) systems, it can be useful to distinguish between two distinct, but related, notions of compliance properties. In one sense, a compliance property may be understood as a *semantic* property of a system or program.
In computer science, a semantic property pertains to the *behaviour* of the computational system. An example of specifying such a property of a program might be: “This program computes the square root function on natural numbers”. This specification says that the program, when given a natural number \( n \) as input, will produce the square root of \( n \) as output. In this case, the specification refers to all possible input-output behaviours of the system (for all numbers \( n \), the output will be the square root of \( n \)). Notice that there are infinitely many such behaviours, which are comprised by the specification.
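To make this concrete, the square-root specification (read here as the integer square root of a natural number) can be written as a checkable property and sampled at finitely many inputs. The sketch below, in Python purely for illustration, shows that testing only ever examines a finite subset of the infinitely many behaviours the semantic property quantifies over.

```python
import math

def isqrt(n):
    """A program claimed to compute the integer square root for every natural n."""
    return math.isqrt(n)

def meets_spec(n):
    """The semantic property 'isqrt computes the square root', at one input n."""
    r = isqrt(n)
    return r * r <= n < (r + 1) * (r + 1)

# Testing samples the property at finitely many inputs; the specification
# itself quantifies over all natural numbers, i.e. infinitely many behaviours.
assert all(meets_spec(n) for n in range(10_000))
```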
Semantic compliance can be understood as the “ground behavioural truth” for compliance, referring to all possible behaviours of the system. Semantic compliance properties may be specified via a set of *semantic rules*, compliance meaning behavioural consistency with the rules.
For example, the rule for the smart metering system
- “No data will be transmitted unless it has been anonymised, pseudonymised, or aggregated”
is a semantic rule requiring all behaviours of the system to have the property that they do not transmit data unless it has been anonymised, pseudonymised, or aggregated.
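For a single recorded behaviour, this semantic rule can be phrased as a predicate over transmission events. The following sketch assumes a hypothetical encoding of events as dictionaries; it is an illustration, not a description of any actual smart metering implementation.

```python
ALLOWED = {"anonymised", "pseudonymised", "aggregated"}

def transmissions_comply(trace):
    """Check one recorded behaviour against the rule: every 'transmit' event
    must carry data that is anonymised, pseudonymised, or aggregated.
    The event dictionaries are an illustrative encoding of system behaviour."""
    return all(
        event.get("data_state") in ALLOWED
        for event in trace
        if event.get("action") == "transmit"
    )
```

A check over one finite trace is only an ex-post observation; the semantic rule itself quantifies over all possible behaviours of the system.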
Semantic properties of systems may be complicated to ascertain, because they may refer to infinitely many behaviours of the system, and it may not be possible to check such properties exhaustively by testing the system (meaning, executing the system on finitely many test cases). In general, semantic properties of programs are *algorithmically undecidable*\(^\text{13}\). Undecidability of a program property means that there cannot (for mathematical reasons) exist an algorithm which always correctly determines whether a
---
\(^{12}\) The word “program” here is used to mean a piece of software, that is, a piece of text written in a programming language which can be executed on a computer. The word “system” as used here typically refers to computational entities composed of many hardware- and software components.
\(^{13}\) See, e.g., the classical text by Martin Davis: *Computability and Unsolvability*. McGraw-Hill 1958. More specifically for the present context, the relevant result of theoretical computer science is Rice’s Theorem, which says, essentially, that any non-trivial extensional property of programs is undecidable. A property is trivial if it is the empty set or the universal set. A property is extensional if it only depends on the input-output behaviour of the program. See e.g. https://en.wikipedia.org/wiki/Rice%27s_theorem.
given program has the property. For example, the halting property of programs is famously undecidable\textsuperscript{14}: Given any program, to decide whether it ever halts (stops) or not. Some (even some quite simple-to-state) properties of programs therefore cannot be automatically checked with complete precision in total generality, such as for example to determine, given any program, whether it could ever attempt to perform a division by 0. It is a consequence of these basic results of computer science that:
- checking arbitrary semantic compliance properties of systems cannot be fully automated in general
Hence, when we talk about “automation of compliance”, “automated compliance” etc. it must always be understood that automation may be only partial or may only pertain to certain restricted aspects of the system. That being noted, such partial automation may still be extremely useful and economically attractive.
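The diagonal argument behind this limitation can be modelled in a few lines. In the sketch below, non-termination is represented by a "LOOP" marker so that the construction can actually be executed; the real theorem concerns genuine non-termination, so this is a model of the argument, not a proof.

```python
def make_paradox(halts):
    """Given a claimed total decider halts(p, x), build the diagonal program."""
    def paradox(p):
        # Do the opposite of what the decider predicts for p run on itself:
        # "loop" (modelled by the marker "LOOP") if it predicts halting.
        return "LOOP" if halts(p, p) else "HALT"
    return paradox

def refutes(halts):
    """True iff the decider is demonstrably wrong on the diagonal program."""
    paradox = make_paradox(halts)
    predicted_to_halt = halts(paradox, paradox)
    actually_halts = paradox(paradox) == "HALT"
    return predicted_to_halt != actually_halts

# Whatever total decider is proposed, it is wrong on its own diagonal program;
# two trivial candidates shown here, but the construction works for any.
assert refutes(lambda p, x: True)
assert refutes(lambda p, x: False)
```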
In another sense, compliance may be understood as a \textit{procedural}\textsuperscript{15} property referring to adherence to a set of procedural rules regarding various aspects of a system. Such properties may state that a system is constructed or used according to certain procedures (e.g. procedures for assembly, programming, deployment, or operating and maintenance), or they may characterise the originator of the system (e.g. authenticating the origin of a part of a system), or they may characterise ways in which a system has been inspected or certified (e.g. following certain audit procedures). In the smart metering example, the rules
- “Consumers will be able to monitor and verify all communications and processing steps at all time”
- “It will be easy for consumers to enforce their right to object and to data being deleted or corrected”
can be understood in a procedural way (the smart metering system offers appropriate monitoring services to consumers, and there are procedures in place for consumers to make certain claims).
Certification of procedural compliance may often be considered a matter of making sure that specified design guidelines, engineering guidelines, auditory procedures, or other regulatory rules have been duly followed by the relevant parties (e.g., producers of the system, system vendors, users of the system, etc.).
In most contexts, semantic notions appear mixed with procedural notions. Typically, rules tend to be expressed with reference to some semantic properties and some procedural rules. The intended meaning of such expressions may for example be that certain procedures should be applied in order to ascertain (with some level of confidence) that the semantic rules are fulfilled. In the smart metering example, the rule
\textsuperscript{14} This is Alan Turing’s famous result from 1936.
\textsuperscript{15} The term “procedural” is not to be understood in a legal sense here but refers rather to what is known as “engineering procedures”.
- “Without explicit approval by the consumer, all data-gathering and use is restricted to the bare minimum required for the energy system to work”
can be understood as a mixture of procedural and semantic notions: The approval of the consumer is a procedural idea (e.g., the consumer has filled in and signed a certain form), and the reference to data-gathering is a semantic notion referring to system behaviour. In general, a basic problem faced in certification of compliance can be understood as one of procedural approximation of semantic truth:
- To define a set of procedural rules that ensure with some reasonable level of confidence that a set of semantic rules are likely to be fulfilled.
Even if checking compliance with a set of procedural rules may be theoretically possible, perfect compliance checking could still be a practically unattainable goal, depending on the scenario. For example, in some scenarios, complete certification of compliance could require inspection of the entire technology stack, from the software application all the way down through the systems level and through the levels of hardware. In the smart metering example, we have a scenario spanning many layers including user-level software (websites, apps etc.), communication hardware and software, service- and provider-side server software, sensor software and hardware. At any level in this network of subsystems one could theoretically imagine sources of violations of regulation for any number of reasons (malice, inattention, incompetence, software bugs, etc.).
Automated compliance
The goal of compliance certification as understood and pursued here is to enable reasonable levels of trust at a reasonable cost of obtaining certification. A reasonable level of trust may be one that ensures that trust violations have a high probability of incurring high cost (either in terms of operationalisation or in terms of penalty) for violators. Achieving reasonable levels of automated compliance may be considered a tool towards realising a “monumental goal”\(^{16}\).
Automated compliance may refer to automation of different aspects of compliance certification, including
- **Construction.** Certifying the application of compliance-by-design rules.
- **Verification.** Certifying that compliance of a system is verified, validated, tested.
- **Procedures.** Certifying that compliant engineering and operating procedures are in place (effective procedures are defined and in use).
---
16 See Frison-Roche, Gouriet, Tardieu: *Compliance and consequences on the Gaia-X labeling framework.*
Motivations and goals for automated compliance include:
- Trust
- Scalability and efficiency
- Cost reduction
These goals may be mutually conflicting. For example, implementing a monitoring system to achieve automatic compliance checks at run-time may be costly. A central objective is:
- To analyse and structure the design space of automated compliance with the objective of identifying points in the space that may ensure reasonable levels of trust at reasonable cost.
Trust levels may vary, and cost levels to obtain certification may vary accordingly. This is part of the rationale behind Gaia-X Labels.
Judging from a broad orientation in state-of-the-art methods of automation (see Taxonomy below), it is to be expected that automation of compliance will have to be composed from a mix of technological components including:
- Compliance by design
- Compliance by testing
- Compliance by monitoring
As with security, an important aspect of implementing compliance in practice is “compliance culture”: broadly speaking, the human factor (human behaviour and culture), defining values, competencies, training, cultivating awareness, near-miss reporting, breach reporting, continuous improvement, etc. Although we do not understand this aspect directly as a technological component per se, it must be regarded as a possibly necessary component in implementing compliance in practice. Certain forms of automation may require certain aspects of culture in order to be effective. An example (from security) is the use of passwords and authentication technologies, which may be rendered ineffective in the absence of a suitable culture (e.g., of creating strong passwords). In addition, the cultural aspect may itself be a target for automation, in providing support for human-based processes (for example, partially automating a process of reporting).
Compliance can be assessed in various phases of the life cycle of a system: from conception to design, engineering and deployment, operation and maintenance. These phases split into two major parts: before the actual use (ex-ante: before the fact) and during operation (ex-post: after the fact). Ex-ante compliance is an important legal concept\(^{17}\), the technological counterpart of which may be understood as “compliance by design and by testing”. This concept is central to any regulatory compliance system\(^{18}\) that validates a new solution before “entry into the market”. And in most cases the regulator imposes ongoing surveillance, or monitoring, once solutions are sold and in actual use, requiring mechanisms for reporting or alerting, corrective and preventative actions, or continuous improvement. Procedures of ex-ante regulation and compliance are typically contrasted with ex-post procedures\(^{19}\).
Currently, Gaia-X compliance is centred around architectural concepts, self-descriptions, the extension of verifiable credentials via linked data, and the concept of labels. Naturally, there are still some open issues within the current scope of Gaia-X Compliance and the Gaia-X Trust Framework, some of which are important for automation. Some technical issues will be associated with legal issues\textsuperscript{20}.
Some central questions for furthering automated compliance in the Gaia-X architecture include:
- What are main open technical and legal issues in the current design and within the current scope of Gaia-X Compliance and the Gaia-X Trust Framework?
- Which currently defined areas of Gaia-X Compliance should be prioritised for automation? Which are the exact computational problems underlying those areas?
- How to automate the extension of compliance properties “upwards” into the software layers, so that participants (including, in particular, application or service developers) can obtain compliance certification at reasonable cost?
Compliance automation and legal notions of compliance
There are at least three broad dimensions of compliance: socio-political, legal, and technological. We consider here some general points pertaining to the interface between compliance technology and the legal dimension, in particular with a view to the topic of automation of compliance.
It is important to be aware that no level of automation of compliance can replace jurisprudence or the human factor essentially involved in the legal dimension of compliance, which is a new branch in legal systems\(^{21}\). It has been pointed out above that, already for purely mathematical reasons of computability, compliance properties cannot in general be fully automated. From a legal perspective, a similar conclusion follows, but for different reasons. Compliance cannot be fully automated, because jurisprudence and the legal system cannot be fully automated. One cannot under current jurisprudence imagine judges being substituted by algorithms: judges constitute, by law, an essential human factor in the legal system. Apart from the fact that this state of affairs is grounded in law itself, it is also understandable from considerations of the limitations of technology. Thus, for example, it is an essential task of a judge to apply the law according to its “spirit”. We do not have (and perhaps will never have) access to technology which would enable automation of that kind of reasoning.

\(^{19}\) https://mafr.fr/en/article/ex-ante-ex-post2/
\(^{20}\) An example provided by P. Gronlier: It is foreseen that trust extension can happen automatically by extending a key chain or signing a new key pair with an existing eIDAS key. Even if the original eIDAS signature is legally binding, it may be an open question whether the machine-generated key pairs and signatures are legally binding, as of the current legal situation.
Even if we restrict attention to specific compliance properties which could, in principle, be completely automated (for example, verifying that the data flow between a smart metering system in a home and a server is properly encrypted), that still would not completely eliminate the human and political aspects of compliance from the legal perspective. For example, it could always happen that compliance of a system is contested by a stakeholder. This could even happen by the compliance check itself being contested (the verification being disputed as erroneous or incomplete). Such cases could end up in court, and hence before a human judge. A decision could also be reversed by a political or regulatory body acting in a different policy spirit, which algorithms cannot capture.
In view of the foregoing considerations, the following appears to be a useful general formulation of the goals of automated compliance vis-à-vis its legal implications:
- Automated compliance procedures and algorithms should produce evidence (e.g., traces, logs, certificates, facts, proofs, etc.) of legal relevance, including such evidence that is necessary before regulatory and supervisory bodies and courts because the burden of proof is on the stakeholders in the market.
Correct understanding of this statement includes a number of aspects, which are supported by the foregoing analysis:
- Evidence includes a range of formal artefacts depending on the case at hand. For example, facts can be produced by archiving measurements, results, documents or any type of digital transactions; traces (run-time logs) can be produced by monitoring procedures; certificates could be test results (possibly aggregated and abstracted) produced by certified testing procedures; proofs could be hand-made proofs of compliance of algorithms of limited scope and/or their implementations (for example: correctness of an encryption algorithm and/or its implementation), or machine-generated formal proofs that can be formally checked.
- Legal relevance is open to interpretation and can mean different things depending on the case at hand. It should be seen as part of the effort towards automated compliance to clarify, to the extent possible, legal implications of the evidence produced by automated compliance procedures.
---
22 One can have philosophical discussions about whether future AI-technologies might reach the level of human common sense reasoning, but we forego such discussions here.
The consideration of the legal dimension of compliance raises questions, including liability questions pertaining to automated compliance tools. If Gaia-X wants to offer automated compliance tools to stakeholders, the legal implications of doing so (including questions of contracting, agreements, and liability) will need careful scrutiny.
The Gaia-X Trust Framework
Summary of the main technical concepts of the current state of the Gaia-X Architecture entering into the Gaia-X Trust Framework, which are relevant for understanding potential automation within the current scope of the framework.
In order to relate in more detail to the technical work in Gaia-X, we briefly summarise the main technical concepts of the current state of the Gaia-X Architecture\(^{23}\) entering into the Gaia-X Trust Framework\(^{24}\) which are relevant for compliance automation. These concepts and frameworks are subject to change, and the following should be understood as a snapshot.
The Gaia-X Trust Framework uses verifiable credentials and linked data to build a FAIR knowledge graph of verifiable claims from which additional trust and composability indexes can be automatically computed\(^{25}\).
The Gaia-X Trust Framework builds on Gaia-X Self-Description files following the W3C Verifiable Credentials Data Model, for describing entities in all relevant participant roles of the Gaia-X Architecture\(^{26}\), including Consumer, Provider, Federator, Resource, Service Offering. Gaia-X Self-Descriptions may be endowed with a taxonomy and an inheritance structure\(^{27}\). Relations between Gaia-X Self-Descriptions may be specified by RDF triples, thereby giving rise to a Self-Description Graph\(^{28}\). This graph may be extended by so-called edge properties, endowing the edges of the Self-Description Graph with additional attributes besides their type, such as the origin of the claim, the issuer, and others\(^{29}\).
The Gaia-X Trust Framework works with four types of rules pertaining to: serialisation format and syntax, cryptographic signature validation and validation of the keypair associated identity, attribute value consistency, and attribute veracity verification. The Gaia-X Trust Framework is defined\(^{30}\) as the process of going through and validating the set of automatically enforceable rules to achieve the minimum level of Self-Description compatibility in terms of:
- syntactic correctness
- schema validity
- cryptographic signature validation
- attribute value consistency
- attribute value verification

\(^{24}\) Gaia-X Trust Framework 22.04 Release.

\(^{25}\) Gaia-X Trust Framework 22.04 Release, p. 3.

\(^{27}\) Gaia-X Architecture Document 22.04 Release, 4.2.

\(^{29}\) Gaia-X Architecture Document 22.04 Release, 5.4.1.

\(^{30}\) Gaia-X Architecture Document 22.04 Release, 6.3.
Whenever possible, the verification of Self-Descriptions’ attribute values is done either by using publicly available open data and performing tests, or by using data from Trusted Data Sources. This verification is captured using Verifiable Credentials issued by either of the following Trust Anchors:
- the Gaia-X association when performing live tests
- the owner of the Trusted Data source
Furthermore, it is expected that checking the validity of Self-Descriptions using open data and test data will introduce costs.
**Trust anchors** are Gaia-X endorsed entities responsible for managing certificates to sign claims, which are assertions appearing in Self-Descriptions.31 To be compliant with the Gaia-X Trust Framework, all keypairs used to sign claims must have at least one of the endorsed Trust Anchors in their certificate chain. At any point in time, the list of valid Trust Anchors is stored in the Gaia-X Registry. Gaia-X builds on eIDAS for electronic identification, authentication and trust services. The Gaia-X Association defines:
- the sets of rules that define the Trust Anchors:
  - Trust Service Providers
  - Gaia-X Label Issuers
  - Trusted Data Sources for Gaia-X Compliance
- the format of the Self-Descriptions and their compliance rules
- the Gaia-X Labels rulebook
Currently, in the Gaia-X Architecture Document, **Gaia-X verification** refers to validating signed claims using the Gaia-X Trust Framework.33
**Gaia-X Labels**\(^{34}\) are the Gaia-X concept for optionally extending compliance beyond the standard core level of **Gaia-X Compliance**. Technically, a Gaia-X Label is a W3C Verifiable Credential. A **Gaia-X Label** is a key component of the Gaia-X Trust Framework, which has as its stated goal:
---
34 Gaia-X Labelling Framework
the development of “a Compliance- and Labelling-technological framework automating all the tests and verifications needed to give a service a specific Label”\textsuperscript{35}.
The relation between Gaia-X Labels and Gaia-X Compliance is clarified as follows\textsuperscript{36}:
- **Gaia-X Compliance** is defined as “the process of going through and validating the set of automatically enforceable rules to achieve the minimum level of Self-Description compatibility in terms of file format and syntax, cryptographic signature validation, attribute value consistency and attribute value verification” (Technical Architecture Document - TAD, 21.09). In that sense, Gaia-X Compliance ensures that the required level of information for users to make decisions is available, and that such information is verified or verifiable. Gaia-X Compliance specifies conditions for a Provider, as well as for the Service Offerings proposed by such a Provider.
- **Gaia-X Labels** “ensure that a predefined set of policy and technology requirements are met” (PRD, 21.04). From a technical perspective, Labels are the result of the combination of verified “Self-Description compliant attributes, that individually would be insufficient to support business or regulatory decisions” (TAD, 21.09).
Gaia-X Labels are currently organised in three progressive levels, each defined by a set of compliance criteria. These criteria are defined in detail in the Gaia-X Labelling Criteria Catalogue\textsuperscript{37}.
Gaia-X Labels provide a means of abstraction and aggregation for compliance credentials. Using Labels, compliance credentials can be automatically found, linked, aggregated, and transitively extended. Because Gaia-X Labels hide possibly complex compliance properties behind the labels, the concept of labels potentially supports essential technical opportunities for modularisation and separation of concerns for compliance automation.
In addition to Gaia-X Compliance and the Gaia-X Trust Framework the **Gaia-X Policy Rules Document**\textsuperscript{38} contains policy rules, which define “high level objectives safeguarding the added value and principles of the Gaia-X ecosystem. To allow for validation, the high-level objectives are underpinned by the actual requirements of the suitable criteria catalogues, as further specified in the Gaia-X Label and Trust Framework documents.”
\textsuperscript{35} Gaia-X Trust Framework 22.04 Release, p. 2.
\textsuperscript{36} Gaia-X Trust Framework 22.04 Release, p. 2.
\textsuperscript{37} Gaia-X Trust Framework 22.04 Release, p. 4.
Taxonomy of automation methods
The following reasoned taxonomy describes some major, currently accessible methods which involve, or may be developed to involve, a significant degree of automation of relevance to Gaia-X compliance. It is not intended to be exhaustive but may be taken as a starting point for future work towards understanding automated compliance technologies in relation to the goals of Gaia-X.
In developing technology for compliance automation for Gaia-X it is important to structure the technical design space. Different technical approaches need to be accompanied by reasoned assessments of their pros and cons and their relevance to Gaia-X notions of compliance. The degree to which automation of compliance is currently possible may depend significantly on which aspects of systems and which technical approaches to automation are considered (see the text accompanying each item below). The following taxonomy is based on technological aspects of approaches or systems, which are generally not mutually exclusive; for example, most software-based methods currently have elements of testing, or monitoring, or both.
Linked data
Linked data is probably the most important structure for trust chaining as of the current state of design in Gaia-X. The main problem solved by linked data is to provide the technological basis for creating a graph of transitive, verifiable claims, thereby enabling the computation of chains of trusted credentials extended from trusted sources and their self-descriptions (ultimately, in Gaia-X terms, Trust Anchors). The linked data approach mainly provides procedural certification: the linked structure as such does not itself provide any semantic compliance guarantees, but rather provides a structure for extending trust from trusted sources. The semantic significance of the linked structure depends on the semantic conditions for obtaining credentials, which may, for example, involve tests or monitoring.
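To make the idea of trust chaining concrete, the following minimal sketch computes the entities reachable from a set of trust anchors along verified claim edges. All names and the claim representation are invented for illustration and are not part of any Gaia-X specification.

```python
# Minimal sketch: trust extends transitively from endorsed anchors
# along (issuer, subject) claim edges, i.e. a chain of credentials.

def trusted_entities(anchors, claims):
    """Return the set of entities reachable from the trust anchors
    via (issuer, subject) claim edges."""
    trusted = set(anchors)
    frontier = list(anchors)
    while frontier:
        issuer = frontier.pop()
        for iss, subject in claims:
            if iss == issuer and subject not in trusted:
                trusted.add(subject)
                frontier.append(subject)
    return trusted

# Example: anchor A endorses B, B endorses C; D and E are unconnected.
claims = [("A", "B"), ("B", "C"), ("D", "E")]
print(sorted(trusted_entities({"A"}, claims)))  # ['A', 'B', 'C']
```

In a realistic setting each edge would additionally carry a cryptographic signature to be validated before it is traversed; the reachability computation itself stays the same.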
Architecture-based methods
The architecture of a system defines the overall design and broad structure of the system and determines many aspects (e.g., stakeholders, types of components, communication infrastructure and topology, data flow, protocols, etc.). Architectural concepts must therefore be at the basis of any operational compliance system and are necessary instruments for achieving compliance by design or ex-ante compliance. Gaia-X Compliance as well as the Gaia-X Trust Framework are grounded in and structured by the Gaia-X Architecture.
Trusted components and trust extension
An interesting area of research and development concerns the idea of automatically extending compliance properties from trusted components into the software application layer (e.g. apps, services). Methods for operationalising such trust extension could open the door to provisioning of trusted component repositories for application developers. A key question to be addressed is:
- How to certify that the way trusted components are used in a software application provides ground for trust extension to the application (or parts thereof).
This question may be addressed with techniques based on testing, monitoring, compilation, languages (DSLs), etc. Component structure may help automation: for example, architectural patterns may be useful in assembling systems from trusted components in such a way that certifying monitors and tests become available automatically.
Testing
Together with monitoring-based methods (see below), test-based methods are among the most important currently deployable techniques for partially checking semantic properties of software systems. Test-based methods are necessarily semantically incomplete, since they can only cover finitely many behaviours at any given time. Many (if not most) algorithms and programs are specified to (or supposed to) work correctly over all of infinitely many possible inputs; for example, an algorithm to compute the square root of natural numbers must work correctly on all (infinitely many) numbers. But any test can only run a program on finitely many inputs in finite time. Testing can therefore in general only falsify correctness properties with certainty: if a program is incorrect, then this must manifest itself on some input, and a test executing the program on that input can reveal beyond reasonable doubt that this is so. In contrast, verifying a correctness property may require infinitely many tests and may hence not be achievable by testing in finite time. Still, testing is of paramount importance in practice, and it is of central importance for extending trust and compliance into the software layer. Testing can, however, be very costly in terms of engineering effort; in particular, developing effective test strategies (with reasonable coverage of relevant properties) can be expensive.
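The asymmetry between falsification and verification can be illustrated with a small, deliberately broken example (entirely hypothetical code, not taken from any Gaia-X artefact):

```python
# A test suite can refute correctness with one failing input, but
# passing finitely many tests never proves correctness over all
# (infinitely many) natural numbers.

def isqrt_broken(n: int) -> int:
    """Intentionally buggy integer square root: wrong for n >= 100."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r if n < 100 else r + 1   # injected bug for large inputs

def spec_holds(n: int) -> bool:
    """The correctness property: r*r <= n < (r+1)*(r+1)."""
    r = isqrt_broken(n)
    return r * r <= n < (r + 1) * (r + 1)

# A small test suite passes -- yet the program is incorrect:
assert all(spec_holds(n) for n in range(100))
# One well-chosen input falsifies the property with certainty:
assert not spec_holds(100)
```

No finite extension of the passing suite `range(100)` could have certified the program; only the single counterexample settles the question.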
Interesting ideas for research and development in the context of Gaia-X Compliance are:

- To design certified test repositories for various architectural components (e.g. data connectors) together with automated means of test deployment, thereby lowering the cost of testing while heightening the level of trust.
- To design templates and tools that simplify the assessment of the test coverage of a component, an application, or an entire system (e.g. percentage of code covered by test cases, percentage of identified behaviours covered, etc.).
Monitoring
In addition to test-based methods, monitoring-based methods are among the most important currently deployable techniques for partially checking semantic properties of software systems. Like test-based methods, monitoring-based methods are necessarily semantically incomplete, since they can only cover finitely many finite execution traces of a system at any given time. Therefore, monitoring may only provide partial coverage of compliance requirements. Nevertheless, using a risk-based approach, monitoring specifications are typically implemented for the highest-risk aspects of a solution. Some properties of programs and systems can be monitored at run-time (see below), and run-time monitoring of a system can ensure that no actual execution of the system violates such properties. Emerging technologies of interest in the area of monitoring include the use of machine learning techniques, for example, to help identify anomalies in system behaviour.
Monitors can often be related to formal specifications (for example, regular expressions) of classes of properties (for example, so-called safety properties) and can in some cases be derived automatically from them (for example, finite state machines derived from the specification of a safety property). Run-time verification refers to the verification of execution traces using monitors. Challenges for monitoring-based methods include the fact that monitors may change the system under observation, since monitors must typically be implemented by instrumenting the system under observation with additional code. A monitoring system needs to be secured to avoid tampering, as malicious parties might change the monitoring logic to filter out signals that are undesirable to them, change alert thresholds, or falsify the monitoring information altogether. Also, monitors may incur runtime overhead, degrading the performance of the system. Finally, instrumenting a system with monitors may be costly, and if the system changes, the monitoring system may have to change accordingly. Continuous service certification can therefore be challenging.
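As an illustration of a monitor derived from a safety property, the following sketch consumes an execution trace event by event and flags any violation of the property "never 'send' before 'encrypt'". The event names and the property are invented for illustration.

```python
# A run-time monitor realised as a two-state finite state machine:
# it tracks whether an "encrypt" event has occurred and flags any
# "send" event observed before one.

class SafetyMonitor:
    def __init__(self):
        self.encrypted = False
        self.violated = False

    def observe(self, event: str) -> None:
        if event == "encrypt":
            self.encrypted = True
        elif event == "send" and not self.encrypted:
            self.violated = True   # safety violation: raise an alert here

def check_trace(trace) -> bool:
    """Run the monitor over a finite trace; True iff no violation."""
    m = SafetyMonitor()
    for e in trace:
        m.observe(e)
    return not m.violated

print(check_trace(["open", "encrypt", "send"]))  # True
print(check_trace(["open", "send"]))             # False
```

In a deployed system the `observe` calls would be injected by instrumenting the system under observation, which is exactly the source of the overhead and tampering concerns discussed above.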
Compliance-as-code
Compliance-as-code does not purport to reduce all aspects of compliance to code (which, as we have seen, would be claiming to do the impossible). Rather, compliance-as-code
refers to a notable recent and emerging movement in the software systems engineering field, following onto the various “X-as-code” or even “everything-as-code” movements (such as infrastructure-as-code, data-as-code), which is often positioned as a natural further development of DevOps-approaches. Several companies are offering various solutions marketed under the heading.
The basic idea, in terms of currently available technology, is to provide systems tools for representing and operationalising compliance rules as tests or monitors or both, and possibly using information obtained from them to generate audit reports.
Compliance-as-code as it is currently realised can therefore be seen as a way of using test-based methods and monitoring-based methods to translate compliance and policy rules into software-based automated compliance checks, and therefore pros and cons of those methods can be expected to be inherited. The main innovation contributed so far by compliance-as-code approaches appears to lie in automation towards closing the gap between compliance rule systems and available methods of testing or monitoring.
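A minimal sketch of this idea expresses one compliance rule as an executable check over a machine-readable system description and emits audit evidence for each run. The rule, the field names, and the report format are all invented for illustration.

```python
# Compliance-as-code in miniature: policy rules become functions over a
# system description; each run produces structured audit records that
# can feed an audit report.

import datetime
import json

def rule_encryption_in_transit(description: dict) -> bool:
    """Hypothetical rule: every declared endpoint must use TLS."""
    return all(ep.get("protocol") == "https"
               for ep in description.get("endpoints", []))

def run_checks(description: dict, rules) -> list:
    """Run all rules and produce audit evidence as structured records."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return [{"rule": r.__name__, "passed": r(description), "checked_at": now}
            for r in rules]

desc = {"endpoints": [{"protocol": "https"}, {"protocol": "http"}]}
report = run_checks(desc, [rule_encryption_in_transit])
print(json.dumps(report[0]["passed"]))  # false
```

Note that such a check inherits the limitations of testing and monitoring: it validates the declared description, not the running system, unless it is coupled with monitoring of actual behaviour.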
**Language-based methods**
Semantic guarantees on the behaviour of software systems can be obtained by the employment of programming languages (general purpose languages or DSLs\(^{40}\)) whose expressions are restricted to obey given rules which are enforced by the compiler. Programs written in such languages can be understood to obey these rules by construction. The paradigm has predominantly been developed in the research area known as *language-based security*\(^{41}\). The downside to such techniques is the restriction to or dependency on specific languages and their concomitant software environments including development environments, debuggers, compiler infrastructures, and libraries and frameworks. Some relatively light-weight forms of language-based security technologies have achieved industrial importance, the most prominent example probably being the Java Bytecode Verifier originally developed and promulgated by (then) Sun Microsystems back in the 1990s, which transferred ideas from the academically developed theory of *type safety* into large-scale industrial practice. Higher-end technologies such as proof carrying code have been harder to push into practice, because they rely on highly expressive logical systems incurring high specification overhead and requiring complex logical algorithmic techniques that are often beyond industrial scope. Language-based techniques have notably been used for ensuring *information-flow security* (see Taxonomy: Information flow methods), which appears to be directly relevant for certifying advanced properties such as data privacy. Other, related, directions of interest to compliance include *certified compilation* in the area of compiler verification. A notable long-ranging research project here is the project CompCert\(^{42}\).
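The flavour of such language-enforced rules can be sketched, without a real compiler, by encoding an information-flow rule in ordinary code. The wrapper type and the sink function below are hypothetical; in a genuine language-based approach the compiler's type system, not a runtime check, would reject the violating program.

```python
# Sketch of an information-flow rule: values derived from secrets are
# "tainted", and the public sink rejects tainted data. A language-based
# system would enforce this statically, at compile time.

class Tainted:
    """Marks a value as derived from confidential input."""
    def __init__(self, value):
        self.value = value

def publish(value):
    """Public sink: the rule forbids releasing tainted data."""
    if isinstance(value, Tainted):
        raise PermissionError("information-flow violation: tainted value")
    return value

secret = Tainted("card number 1234")
assert publish("public report") == "public report"
try:
    publish(secret)
except PermissionError:
    print("flow blocked")   # prints "flow blocked"
```

The point of the language-based paradigm is precisely to move this kind of check from runtime (as here) into the compiler, so that non-compliant programs cannot be built at all.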
---
\(^{40}\) DSL = Domain Specific Language: a computer language specialized to a specific application domain.
\(^{41}\) https://en.wikipedia.org/wiki/Language-based_security
\(^{42}\) https://compcert.org/
Logic-based methods
Logic-based methods are the only known methods that can in principle lead to actual verification of software systems, usually involving formal proofs of program properties. They can cover infinite (unbounded) behaviours and infinite (unbounded) data structures. They cannot be fully automated, but significant progress has been made in recent years regarding their scope and applicability (a spectacular modern development is the verification of complex mathematical results using the Coq proof assistant\(^{43}\)). Formal proofs are an ultimate form of verifiable certificate (checking a given formal proof can be done automatically; it is finding a proof that is hard). The downside to these methods is that they are still very difficult and costly to apply in general contexts outside of highly specialised application areas like, e.g., verification of cryptographic protocol implementations. Those very specialised areas may however be of some interest to Gaia-X Compliance. For example, concepts of Trusted Components (see Taxonomy: Architecture-based methods) might benefit from verification of certain specialised components. Although complete automation is impossible, practically interesting advances have been made in recent times (again, Coq is one of the leading systems). Most language-based methods can be understood as restricted logical techniques allowing for higher degrees of automation and ease of use. Logical attestation\(^{44}\) is an example of a logical approach to attestation.
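For flavour, here is a minimal machine-checked proof in Lean (chosen here only as an illustration of the genre; the text itself mentions Coq). Unlike any finite test suite, the single proof term covers all (infinitely many) pairs of natural numbers at once, and checking it is automatic.

```lean
-- One checked proof term certifies the property for every pair of
-- natural numbers; the hard part is finding such a term, not checking it.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is exactly the sense in which formal proofs are an "ultimate form of verifiable certificate": the checker is a small, trusted, fully automatic program.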
---
\(^{43}\) [https://coq.inria.fr/](https://coq.inria.fr/)
Conclusion
Automation is needed
Regulation with respect to digital sovereignty is increasing at rapid pace in response to societal concerns that are central to European values and to Gaia-X. Increasing levels of regulation lead to the need for corresponding procedures for achieving and enforcing regulatory compliance (ex-ante or ex-post). Automation of compliance has inherent technical limitations, and compliance is embedded in a societal context which essentially involves the human and political factor (for example, jurisprudence). Still, automation as far as possible is needed for reasons of trust, scalability, and efficiency.
The need for automation of compliance procedures grows both with the volume and complexity of regulation and with the ever increasing complexity of systems subject to regulation. Compliance automation may benefit all stakeholders. From the courts’ and the regulators’ perspective, it is a reasonable concern that achieving and enforcing compliance of systems with regulatory policy may become ineffective or unrealistic, unless corresponding levels of automation of compliance (ex-ante and ex-post) are reached. From the perspective of providers of systems, it is a concern that providing systems to an increasingly regulated marketplace becomes increasingly difficult, unless corresponding tools for achieving compliance are accessible.
Part of the effort of automated compliance is to understand and as far as possible to specify which regulations are covered to which degree by specific compliance tools. Possible gaps between regulations and compliance procedures and tools should be identified as far as possible.
R&D towards automated compliance is needed
Just as security is by now a recognised area of research and development in computer science and related fields, the field of compliance tools and algorithms needs to be seen as a strategic subject of research and development to help fill the gap between regulation and systems subject to regulation. Currently, the gap between R&D resources invested in the creation of systems in need of compliance assessment on the one hand, and R&D resources invested in the creation of tools for achieving or enforcing compliance on the other hand, is disproportionate. The gap between the foreseeable amount of regulation on the one hand, and the R&D resources available to increase both understanding and automation of their implications for compliance, is becoming disproportionate.
Legal implications should be specified
It is up to the regulator to decide which tools may be used for compliance and how. Algorithmic compliance procedures and processes should produce **evidence of legal relevance** in assessing whether a given system is compliant with a set of rules. The legal implications of evidence produced by such procedures should be clarified *a priori* so far as possible. Relevance may pertain both to ex-ante properties and to ex-post enforcement. Relevance may pertain to multiple stakeholders, including courts and judges, contract and SLA management, and citizens. Furthermore, the legal implications and contractual circumstances of compliance tools provided by Gaia-X should be understood, for example with regard to legal commitment, responsibility, and liability.
Automation is needed for Labels
Gaia-X Labels constitute a (mostly ex-ante) instrument for creating levels of compliance and certification. Labels are distinct from the core notion of compliance given by the Gaia-X Trust Framework. Labels may go beyond the common standardised core of compliance regulation (such as found in the Gaia-X Trust Framework) at any given time, and the Label system and corresponding levels of compliance and certification may develop over time. The degree of automation associated with a Label may develop over time, for instance, as a result of new compliance technology being invented or implemented. But Labels should always be associated with formally stated requirements and should be subjected to automated compliance checking so far as possible.
ABSTRACT
The TABULATE procedure in SAS® provides a flexible platform to generate tabular reports. Many beginning SAS programmers have a difficult time understanding the syntax of PROC TABULATE and tend to avoid using the procedure. This tutorial will explain the syntax of PROC TABULATE and, with examples, show how to grasp the power of PROC TABULATE. The data used in this paper represents simulated consumer credit card usage data and the code was developed using SAS 9. This updates the paper presented at NESUG in 2001 with the same title.
INTRODUCTION
PROC TABULATE is based on table-generation code developed by the U.S. Department of Labor. The syntax does not resemble that of other SAS PROCs and will appear cryptic to the beginning SAS programmer. To make matters worse, looking at sample TABULATE code provides no clue for the novice TABULATE user. Sample code often looks like a long mathematical equation that makes no sense to the individual learning TABULATE. Frustration often leads to dismissal of the procedure. The TABULATE procedure is so flexible that there is no single correct way of generating tabular reports. The method I propose in this paper is the way I have learned TABULATE and still use it to generate custom tabular reports. I teach this method to all new users of TABULATE and find it helps them grasp the procedure. Here are my 9 steps to generate tabular reports using PROC TABULATE:
1. Don't panic. Relax and take a few meditative breaths.
2. Design the report on paper. The design of the TABULATE report should be first specified on paper. The report design that is developed on paper will guide the code generation.
3. Generate the initial test code. Return to the computer and start coding the design that you have formulated on paper. There are a few rules of TABULATE syntax that will generate the report and we will review these here.
4. Test, retest and verify using a small sample. Test the initial code using a small number of observations. Use a random sample or use an OBS= option to test the syntax. TABULATE is tricky and you may have to run a few versions of your code and possibly include some DATA preparation. Verify that the results make sense. Look at your data and report! Don't assume the report is correct just because you have no syntax errors. At this step, don't be too concerned with report appearance. Verify that the results are correct.
5. Clean up the appearance of the report. Once the code has generated a report that makes sense, clean up the output of the report using a few tricks we will review here. Also consider adding additional summaries and/or statistics to make the report even more useful.
6. Run code with OBS=MAX.
7. Add some ODS functionality.
8. Need to generate multi-label formats? SAS8 introduced multi-label formats to be used with PROC TABULATE. An example is shown.
9. Sit back, smile and be proud of your report. Your manager may be puzzled at how you were able to produce the report without sorting the data or without any spreadsheet crunching. He/she may be confused by your code, but then you can take him/her through the 9 steps of TABULATE bliss.
DATA GENERATED FOR PRESENTATION
The data used for this paper was simulated with random variable functions. It represents results of a mailed Balance Transfer offer to existing customers of a consumer credit card. Code that generated the data is shown here:
```sas
proc format;
value offer low - 0.45 = 'A' /* hypothetical BALANCE TRANSFER Offers */
            0.45 <- high = 'B';
value $grroff 'A' = 'aoff' 'B' = 'boff';
value aoff low-0.10 = '1' other = '0'; /* Response rates */
value boff low-0.05 = '1' other = '0';
value $mline 'A' = '6500' 'B' = '5400'; /* Average Balance Transfer */
data test;
do campaign = '2004/3', '2004/4'; /* campaign quarters */
do i = 1 to 1e6;
mailed=1;
offer=put(ranuni(12),offer.);
fmtuse=put(offer,$grroff.);
respond=input(putn(ranuni(14),fmtuse),best12.);
if respond then baltran=rannor(15)*500+input(put(offer,$mline.),best12.);
else baltran=.;
output;
end;
end;
run;
```
Some definition of variables for above code:
- **Mailed:** Each prospect is mailed. Needed for some calculations in TABULATE.
- **Offer:** Offer mailed to consumer (A or B).
- **Respond:** 1=respond, 0=non-respond.
- **Baltran:** Balance Transfer Amount.
**STEP 1 – DON’T PANIC**
SAS code generation can be frustrating to beginning SAS users. To begin code generation, get into a relaxed and calm mode. The design and code may take a few iterations, but that should be expected with TABULATE. Table generation and formatting will take a bit of code work and re-work. It is to be expected that the first TABULATE code generated is not the final version.
**STEP 2 – DESIGN THE REPORT ON PAPER**
The structure of the report must first be specified in order to generate the TABULATE code that will produce the report. Designing the report on paper will help you identify which variables to use, how variables are defined in TABULATE and what summary information is required.
As an illustration for this paper, let’s generate a summary report for the above data. We wish to look at response rates, average balance transfer per responder and per mailed by offer and campaign. There are many ways of setting up the report. Here is one way which we will code up:
<table>
<thead>
<tr>
<th>Campaign</th>
<th>Offer</th>
<th>Mail Base</th>
<th>% Respond</th>
<th>Response Rate</th>
<th>Average BT per responder</th>
<th>Average BT per mail base</th>
</tr>
</thead>
<tbody>
<tr>
<td>2004/3</td>
<td>A</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>B</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>TOTAL</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2004/4</td>
<td>A</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>B</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>TOTAL</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
To proceed to the next step, we have to familiarize ourselves with some syntax rules of PROC TABULATE.
**STEP 3 – GENERATE THE INITIAL TEST CODE**
There are a few syntax rules that one has to get familiar with in order to use PROC TABULATE. The first step is to identify whether variables are classified as “CLASS” variables or “VAR” variables.
CLASS variables are those that are categorical and/or have levels that you wish to generate summaries for each level. CLASS variables can be either numeric or character.
VAR variables identify analysis variables that you wish to report statistics for. A variable cannot be listed as both a CLASS and a VAR variable in the TABULATE procedure. A VAR statement is not required in the TABULATE procedure.
Table dimensions are specified in the TABLE statement of PROC TABULATE. Table dimensions (page, row, and column) are separated by commas. This is different from PROC FREQ where an asterisk is used to separate table dimensions.
If the TABLE statement does not include any commas, the resulting report will be a single row with multiple column output. A TABLE statement with one comma produces a row by column report with the row dimension specified first. The maximum number of commas is 2, producing a page by row by column report.
The restriction on the number of commas does not limit the number of CLASS variables you can include in your report. Further crossing of CLASS variables can be introduced with an asterisk. The asterisk is also used to specify VAR variables, statistical summaries and/or format of output.
Statistical summaries are similar to the ones available in PROC MEANS. These include sums, means, number of records, etc. that one sees in other PROCS. SAS8 and higher also include percentile statistics such as the ones available in PROC UNIVARIATE and MEANS.
The TABULATE procedure also includes PCTN and PCTSUM statistical summaries. Their syntax may be difficult to grasp, providing more reason to highlight the importance of STEP 4 (test and retest). PCTN calculates the percent of a frequency and PCTSUM calculates the percent of sums. These may require denominator definitions that are specified between angle brackets <>. Row and column percents are also available to include in the report.
So now we have to worry about CLASS and VAR variables on top of considering where to use commas, asterisks and brackets. And, to make it even more confusing we can even include an ‘=’ within the body of the TABLE statement to specify formats and labels to be printed. Wow, no wonder TABULATE code can look cryptic. After much practice, these will become second nature. In the meantime, follow these guidelines:
**CLASS**: Used for categorical variables or variables for which you want to see summaries for each value of the variable. Default statistic applied to a class variable if none provided is N (number of observations). A PCTN statistic can also be specified to produce frequency percents.
**VAR**: Used to specify analysis variables for which you want to generate statistics at each level of the CLASS variables. The default statistic if none is specified is SUM.
, : Used for dimension splits: page, row, column. Dimension splits are not limited to CLASS variables. You can, for example, have a CLASS variable as your row dimension and a VAR variable as your column dimension.
*: Used to specify any of the following:
o another CLASS variable split
o a VAR variable
o a statistic
o a format
<>: Specify the denominator CLASS dimensions for PCTN, or the denominator VAR variable when using the PCTSUM statistic.
(): Used to group CLASS or VAR variables.
=: For format and label specification.
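Putting these symbols together, an annotated TABLE statement may help. This is only a sketch against the simulated data above; the MEAN of `baltran` is chosen purely to illustrate the symbols, not as a recommended report:

```sas
proc tabulate data=test;
  class campaign offer;          /* CLASS: categorical variables            */
  var baltran;                   /* VAR: analysis variable                  */
  table campaign,                /* comma: campaign is the row dimension    */
        offer                    /* offer is the column dimension           */
        *baltran                 /* asterisk: cross with a VAR variable     */
        *mean                    /* asterisk: request a statistic           */
        *f=dollar9.              /* asterisk plus = : apply a format        */
        ;
run;
```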
Let's try some code using our hypothetical example, building the table in stages. We will first look at mail base and responder frequencies.
```sas
proc tabulate data=test;
class offer campaign;
var respond;
table campaign*offer
,
n='Mailed'*f=comma9.
respond*f=comma9.
;
run;
```
Note that the TABLE statement can be listed on one line, but listing variables and dimensions on separate lines can be useful for debugging.
We see that the default statistic for variables specified in the VAR specification is SUM. We did not have to specify the SUM statistic in the TABLE statement. Note that we still don’t see the totals by campaign. These will be addressed in the next section.
STEP 4 – TEST, RETEST AND VERIFY USING A SMALL SAMPLE
PROC TABULATE often requires a number of iterations and tests until the code generates results that we expect. To save time, it is often advisable to run tests on a sample data set instead of the complete data set. It may take a number of runs to get a table that has all the statistics that one requires. During this step we can make changes in the table output and we may wish to redraw each modification on paper to get a clear picture of how to proceed in code generation.
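For example, a test subset can be drawn with a short DATA step. The 1,000-row cutoff, the sampling fraction and the seed below are arbitrary choices for illustration:

```sas
/* quick test subset: first 1,000 observations only */
data sample;
  set test(obs=1000);
run;

/* or draw an approximate 1% random sample instead */
data sample2;
  set test;
  if ranuni(7) < 0.01;  /* keep roughly 1 in 100 records */
run;
```

Point the PROC TABULATE DATA= option at the sample data set while developing, then switch back to the full data set for the production run.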
ADDING TOTALS
We notice that the above table does not include totals for each campaign. These can be added by including an ALL keyword in the TABLE statement. The code is shown:
```sas
proc tabulate data=test;
class offer campaign;
var respond;
table campaign*(offer all)
,
n='Mailed'*f=comma9.
respond*f=comma9.
;
run;
```
Experiment with the ALL keyword in the first line of the TABLE statement. See what happens when we make these changes:
- campaign*offer all
- (campaign all)*offer
- (campaign all)*(offer all)
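As a sketch of the last variant, crossing ALL into both groupings produces campaign totals, offer totals across campaigns, and a grand total in one report:

```sas
proc tabulate data=test;
  class offer campaign;
  var respond;
  table (campaign all)*(offer all)   /* totals per campaign, per offer, and overall */
        ,
        n='Mailed'*f=comma9.
        respond*f=comma9.
        ;
run;
```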
**ADDING PERCENTAGE STATISTICS WITH PCTN**
The PCTN statistic generates the required row percent. This statistic may require a denominator definition, which is placed inside the <>. Setting up this definition can be complicated and may take a number of iterations to get correct. The values entered inside the <> are the CLASS variables that make up the dimension over which you wish to see a percentage: the row dimension is required to see row percents, and the column dimensions are required to see column percents. This sounds confusing, so keep testing until the numbers look correct. If you expect row percents, verify that the numbers reported are indeed row percents.
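As a sketch of how the denominator choice changes the meaning, compare two PCTN specifications for the same row dimension. The exact denominator terms depend on your table layout, so verify the output against hand-calculated numbers:

```sas
/* percent of each offer within its campaign (the row percents used below) */
table campaign*(offer all),
      n pctn<offer all>;

/* percent of the whole table: list every row crossing in the denominator */
table campaign*(offer all),
      n pctn<campaign*offer campaign*all>;
```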
Here are the row percents for mail quantity:
```sas
proc tabulate data=test;
class offer campaign;
var respond;
table campaign*(offer all)
,
n='Mailed'*f=comma9.
pctn<offer all>
respond*f=comma9.
;
run;
```
Now let's add the other components of the table. For the response rate, we can use the MEAN statistic since the `respond` variable is binary. For balance transfer per responder we can use the MEAN statistic as well, since the non-responders received a missing value for the `baltran` variable and missing values are excluded from the mean. Note that we format the response rate with a `percentw.d` format and use a `dollarw.d` format for the balance transfer amount. The code is shown:
```sas
proc tabulate data=test;
class offer campaign;
var respond baltran;
table campaign*(offer all)
,
n='Mailed'*f=comma9.
pctn<offer all>
respond*f=comma9.
respond*mean*f=percent9.2
baltran*mean*f=dollar9.0
;
run;
```
### ADDING PERCENTAGE STATISTICS WITH PCTSUM
Calculating the average balance transfer over all accounts mailed is a bit tricky. We can use the PCTSUM statistic, which takes the sum of a numerator variable and divides it by the sum of a denominator variable defined inside the <>. The result is multiplied by 100, which we do not require here but can correct with a PICTURE format (MULT=.01). Here is the code and output:
```sas
proc format;
picture btm (round) low - high = '0,000,009' (prefix='$' mult=.01);
;
proc tabulate data=test;
class offer campaign;
var respond baltran mailed;
table campaign*(offer all)
,
n='Mailed'*f=comma9.
pctn<offer all>
respond*f=comma9.
respond*mean*f=percent9.2
baltran*mean*f=dollar9.0
baltran*pctsum<mailed>*f=btm.
;
run;
```
The output is as follows:
<table>
<thead>
<tr>
<th>campaign</th>
<th>offer</th>
<th>Mailed</th>
<th>PctN</th>
<th>respond Sum</th>
<th>respond Mean</th>
<th>baltran Mean</th>
<th>baltran PctSum</th>
</tr>
</thead>
<tbody>
<tr><td>2004/3</td><td>A</td><td>450,137</td><td>45.01</td><td>45,288</td><td>10.06%</td><td>$6,498</td><td>$654</td></tr>
<tr><td></td><td>B</td><td>549,863</td><td>54.99</td><td>27,748</td><td>5.05%</td><td>$5,402</td><td>$273</td></tr>
<tr><td></td><td>All</td><td>1,000,000</td><td>100.00</td><td>73,036</td><td>7.30%</td><td>$6,082</td><td>$444</td></tr>
<tr><td>2004/4</td><td>A</td><td>449,678</td><td>44.97</td><td>45,082</td><td>10.03%</td><td>$6,504</td><td>$652</td></tr>
<tr><td></td><td>B</td><td>550,322</td><td>55.03</td><td>27,484</td><td>4.99%</td><td>$5,400</td><td>$270</td></tr>
<tr><td></td><td>All</td><td>1,000,000</td><td>100.00</td><td>72,566</td><td>7.26%</td><td>$6,086</td><td>$442</td></tr>
</tbody>
</table>
Note that if you wanted to display dollars and cents in the last column of output, modify the PICTURE statement as follows:
```
picture btm (round) low - high = '0,000,009.00' (prefix='$' mult=1);
```
**STEP 5 – CLEAN UP THE APPEARANCE OF THE REPORT**
After the report has been validated, we can clean up the report presentation. There are a number of tricks to make the report look professional and they are described here with examples. Things we would like to change in the above report are:
- Eliminate some of the statistic labels and/or replace with descriptive labels.
- Add a percentage sign to the PCTN statistic in output.
- Remove some of the lines.
- Add some missing labels.
These changes are easily added to the code. Continue to work with small sample data sets to minimize run time and computer costs. The code is listed, followed by an explanation of the methods used to format the output:
```sas
proc format;
  picture btm (round) low - high = '0,000,009' (prefix='$' mult=.01);
  picture pct (round) low - < 0 = '0009.99%' (prefix='-')
                      0 - high  = '0009.99%';
run;
proc tabulate data=test noseps;
  class offer campaign;
  var respond baltran mailed;
  keylabel n=' ' sum=' ' mean=' ' pctn=' ' pctsum=' ';
  table campaign*(offer all='TOTAL'),
        n='Mailed'*f=comma9.
        pctn<offer all>='%'*f=pct.
        respond='Responders'*f=comma10.
        respond='Response Rate'*mean*f=percent9.2
        baltran='Balance Transfer per respond'*mean*f=dollar9.0
        baltran='Balance Transfer per mailed'*pctsum<mailed>*f=btm.
        / rts=19 row=float misstext=' ' box='SAS Global Forum 2007';
run;
```
<table>
<thead>
<tr>
<th>SAS Global Forum 2007</th>
<th>Mailed</th>
<th>%</th>
<th>Responders</th>
<th>Response Rate</th>
<th>Balance Transfer per respond</th>
<th>Balance Transfer per mailed</th>
</tr>
</thead>
<tbody>
<tr>
<td>campaign offer</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2004/3</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>A</td>
<td>450,137</td>
<td>45.01%</td>
<td>45,288</td>
<td>10.06%</td>
<td>$6,498</td>
<td>$654</td>
</tr>
<tr>
<td>B</td>
<td>549,863</td>
<td>54.99%</td>
<td>27,748</td>
<td>5.05%</td>
<td>$5,402</td>
<td>$273</td>
</tr>
<tr>
<td>TOTAL</td>
<td>1,000,000</td>
<td>100.00%</td>
<td>73,036</td>
<td>7.30%</td>
<td>$6,082</td>
<td>$444</td>
</tr>
<tr>
<td>2004/4</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>A</td>
<td>449,678</td>
<td>44.97%</td>
<td>45,082</td>
<td>10.03%</td>
<td>$6,504</td>
<td>$652</td>
</tr>
<tr>
<td>B</td>
<td>550,322</td>
<td>55.03%</td>
<td>27,484</td>
<td>4.99%</td>
<td>$5,400</td>
<td>$270</td>
</tr>
<tr>
<td>TOTAL</td>
<td>1,000,000</td>
<td>100.00%</td>
<td>72,566</td>
<td>7.26%</td>
<td>$6,086</td>
<td>$442</td>
</tr>
</tbody>
</table>
The output looks much neater and is ready to be placed in production. The coding tricks used are summarized:
1. Added a PCT PICTURE format to append a '%' to the PCTN statistic. The format is applied in the TABLE statement with *f=pct.
2. Added the NOSEPS option to the PROC TABULATE statement. This removes the solid lines between rows of CLASS splits. If you wish to remove all lines, include a FORMCHAR=' ' option (there are 11 blank spaces between the quotes).
3. Added a KEYLABEL statement to blank out the default labels of all the statistics used in the TABLE statement.
4. Added labels after variable names and statistics. I padded the labels with spaces and extended format widths so that words do not split in the report.
5. **TABLE statement options** (after the /):
- RTS shrinks the space of the first column in the report.
- ROW=FLOAT divides the row title space equally among the nonblank row titles in the crossing.
- MISSTEXT= adds a label if the statistic is missing as opposed to default missing values.
- BOX= adds a title placed in the upper left box of the table. If you have page dimensions in your table, you can specify BOX=_PAGE_ to provide page split values in the box.
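If your table does have a page dimension, a minimal sketch of BOX=_PAGE_ follows. The choice of dimensions here is illustrative only (two commas give a page by row by column report):

```sas
proc tabulate data=test;
  class campaign offer;
  table campaign,               /* page dimension: one table per campaign      */
        offer,                  /* row dimension                               */
        n='Mailed'*f=comma9.    /* column dimension                            */
        / box=_page_;           /* print the current campaign value in the box */
run;
```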
**STEP 6 - RUN CODE WITH OBS=MAX**
Run the production code on the full data set. Until then, always test TABULATE code on smaller data; this is more efficient and saves time, especially with large data sets.
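One way to switch between test and production runs without touching the PROC code is the system OBS= option. The values below are arbitrary:

```sas
options obs=1000;   /* development: read only the first 1,000 observations */
proc tabulate data=test;
  class offer;
  table offer, n;
run;

options obs=max;    /* production: read the full data set */
proc tabulate data=test;
  class offer;
  table offer, n;
run;
```

Remember that OBS= is a global option and also affects any DATA steps that follow, so reset it to MAX before the production run.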
**STEP 7 – ADD SOME ODS FUNCTIONALITY**
With ODS, one can output the TABULATE report to the web, PDF files, RTF files and Excel sheets. We will focus on how to output the report to Excel spreadsheets in this paper. The spreadsheet file is generated by adding a few lines of code before and after the PROC TABULATE code. Here is the code and the Excel output:
```sas
ODS Listing CLOSE;
ODS html file='c:\SAS_Global_Forum_2007\TAB1.xls';
proc tabulate data=test noseps;
class offer campaign;
var respond baltran mailed;
keylabel n=' ' sum=' ' mean=' ' pctn=' ' pctsum=' ';
table campaign*(offer all='TOTAL')
,
n='Mailed'*f=comma9.
pctn<offer all>='%'*f=pct.
respond='Responders'*f=comma10.
respond='Response Rate'*mean*f=percent9.2
baltran='Balance Transfer per respond'*mean*f=dollar9.0
baltran='Balance Transfer per mailed'*pctsum<mailed>*f=btm.
/rts=19 row=float misstext=' ' box='SAS Global Forum 2007';
run;
ODS html close;
ODS Listing;
```
<table>
<thead>
<tr>
<th>campaign</th>
<th>offer</th>
<th>Mailed</th>
<th>%</th>
<th>Responders</th>
<th>Response Rate</th>
<th>Balance Transfer per respond</th>
<th>Balance Transfer per mailed</th>
</tr>
</thead>
<tbody>
<tr><td>2004/3</td><td>A</td><td>450,137</td><td>45.01%</td><td>45,288</td><td>10.06%</td><td>$6,498</td><td>$654</td></tr>
<tr><td></td><td>B</td><td>549,863</td><td>54.99%</td><td>27,748</td><td>5.05%</td><td>$5,402</td><td>$273</td></tr>
<tr><td></td><td>TOTAL</td><td>1,000,000</td><td>100.00%</td><td>73,036</td><td>7.30%</td><td>$6,082</td><td>$444</td></tr>
<tr><td>2004/4</td><td>A</td><td>449,678</td><td>44.97%</td><td>45,082</td><td>10.03%</td><td>$6,504</td><td>$652</td></tr>
<tr><td></td><td>B</td><td>550,322</td><td>55.03%</td><td>27,484</td><td>4.99%</td><td>$5,400</td><td>$270</td></tr>
<tr><td></td><td>TOTAL</td><td>1,000,000</td><td>100.00%</td><td>72,566</td><td>7.26%</td><td>$6,086</td><td>$442</td></tr>
</tbody>
</table>
With some versions of SAS you may get some shading of cells. To minimize the shading, change the ODS HTML statement as follows:
```sas
ods html file='c:\SAS_Global_Forum_2007\TAB1.xls' style=minimal;
```
Want to eliminate all lines in the report? Look at SAS support document:
http://support.sas.com/sassamples/quicktips/04feb/ods-excel.html
The text will tell you how to create a modified style called NOBORDER that you can use in the STYLE option. You can also try the new ODS TAGSETS to output Multi-Sheet Excel Workbooks (DelGobbo, V. 2007).
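As a sketch of the TAGSETS approach (ODS TAGSETS.EXCELXP is available in SAS 9.1 and later; the file name and table here are arbitrary):

```sas
ods listing close;
ods tagsets.excelxp file='c:\temp\tab_workbook.xml' style=minimal;
proc tabulate data=test;
  class offer campaign;
  table campaign*offer, n;   /* any TABULATE report can go here */
run;
ods tagsets.excelxp close;
ods listing;
```

The resulting XML workbook opens in Excel and, unlike the plain ODS HTML trick, can hold multiple worksheets.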
**YOUR BOSS WANTS CHANGES!**
Your boss wants to put campaign year as a column, to add descriptive labels to offer A and B, include total dollars coming in for each offer and the percent of dollars coming in for each offer by campaign. The modified code and output is shown:
```sas
proc format;
value $offer 'A' = 'A: Really good Offer'
'B' = 'B: Good Offer'
;
ods Listing CLOSE;
ods HTML file='TAB2.xls' style=minimal;
proc tabulate data=test noseps;
class offer campaign;
format offer $offer.;
var respond baltran mailed;
keylabel n=' ' sum=' ' mean=' ' pctn=' ' pctsum=' ';
table (offer all='TOTAL' )
*
(n='Mailed' *f=comma9.
pctn<offer all>=' % ' *f=pct.
respond='Responders' *f=comma10.
respond='Response Rate' *mean*f=percent9.2
baltran='Balance Transfer per respond' *mean*f=dollar9.0
baltran='Balance Transfer per mailed' *pctsum<mailed>*f=btm.
baltran='Total Transfer'*f=dollar13.
baltran='% Total Transfer'*colpctsum=' '*f=pct.
)
,
campaign
/rts=19 row=float misstext=' ' box='SAS Global Forum 2007' ;
run;
ods HTML close;
```
<table>
<thead>
<tr>
<th>SAS Global Forum 2007 campaign</th>
<th>2004/3</th>
<th>2004/4</th>
</tr>
</thead>
<tbody>
<tr>
<td>offer</td>
<td></td>
<td></td>
</tr>
<tr>
<td>A: Really good Offer</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Mailed</td>
<td>450,137</td>
<td>449,678</td>
</tr>
<tr>
<td>%</td>
<td>45.01%</td>
<td>44.97%</td>
</tr>
<tr>
<td>Responders</td>
<td>45,288</td>
<td>45,082</td>
</tr>
<tr>
<td>Response Rate</td>
<td>10.06%</td>
<td>10.03%</td>
</tr>
<tr>
<td>Balance Transfer per respond</td>
<td>$6,498</td>
<td>$6,504</td>
</tr>
<tr>
<td>Balance Transfer per mailed</td>
<td>$654</td>
<td>$652</td>
</tr>
<tr>
<td>Total Transfer</td>
<td>$294,285,379</td>
<td>$293,204,204</td>
</tr>
<tr>
<td>% Total Transfer</td>
<td>66.26%</td>
<td>66.39%</td>
</tr>
<tr>
<td>B: Good Offer</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Mailed</td>
<td>549,863</td>
<td>550,322</td>
</tr>
<tr>
<td>%</td>
<td>54.99%</td>
<td>55.03%</td>
</tr>
<tr>
<td>Responders</td>
<td>27,748</td>
<td>27,484</td>
</tr>
<tr>
<td>Response Rate</td>
<td>5.05%</td>
<td>4.99%</td>
</tr>
<tr>
<td>Balance Transfer per respond</td>
<td>$5,402</td>
<td>$5,400</td>
</tr>
<tr>
<td>Balance Transfer per mailed</td>
<td>$273</td>
<td>$270</td>
</tr>
<tr>
<td>Total Transfer</td>
<td>$149,885,279</td>
<td>$148,408,109</td>
</tr>
<tr>
<td>% Total Transfer</td>
<td>33.74%</td>
<td>33.61%</td>
</tr>
<tr>
<td>TOTAL</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Mailed</td>
<td>1,000,000</td>
<td>1,000,000</td>
</tr>
<tr>
<td>%</td>
<td>100.00%</td>
<td>100.00%</td>
</tr>
<tr>
<td>Responders</td>
<td>73,036</td>
<td>72,566</td>
</tr>
<tr>
<td>Response Rate</td>
<td>7.30%</td>
<td>7.26%</td>
</tr>
<tr>
<td>Balance Transfer per respond</td>
<td>$6,082</td>
<td>$6,086</td>
</tr>
<tr>
<td>Balance Transfer per mailed</td>
<td>$444</td>
<td>$442</td>
</tr>
<tr>
<td>Total Transfer</td>
<td>$444,170,659</td>
<td>$441,612,313</td>
</tr>
<tr>
<td>% Total Transfer</td>
<td>100.00%</td>
<td>100.00%</td>
</tr>
</tbody>
</table>
**STEP 8 – NEED TO GENERATE MULTI-LABEL FORMATS?**
SAS8 introduced the capability of generating multi-label formats. These formats can be used in PROC SUMMARY (MEANS) and TABULATE. An example is taken from the SAS Press book, "The Power of PROC FORMAT" (Bilenas, J., 2005). In this example we look at credit card application decision results. There are 3 approval codes and 2 decline codes. The goal is to provide counts for each decision code and then have totals for approvals and declines. Code is shown next.
```sas
proc format;
  picture p8r (round)
    low - < 0 = '0009.99%' (prefix='-')
    0 - high  = '0009.99%';
  value $deccode (multilabel notsorted)
    'a0' - 'a9' = 'APPROVE TOTALS'
    'a1' = 'a1: Approval'
    'a2' = 'a2: Weak Approval'
    'a4' = 'a4: Approved Alternate Product'
    'd0' - 'd9' = 'DECLINE TOTALS'
    'd1' = 'd1: Decline for Credit'
    'd6' = 'd6: Decline Other';
run;

proc tabulate data=decision noseps formchar=' ';
  class decision / mlf preloadfmt order=data;
  format decision $deccode.;
  table (decision all)
        ,
        n*f=comma5.
        pctn='%'*f=p8r.
        / rts=33 row=float misstext=' ';
run;
```
Some notes about the above code:
The MULTILABEL option on the $DECCODE format indicates a multilabel format. We also include the NOTSORTED option, which stores values in the order specified.
I ran the code on a PC, so the blank in the FORMCHAR= option is an ALT-255 ASCII character; this generates a space and retains a space after the first quote.
In the CLASS statement we add the MLF option to indicate a multilabel format, the PRELOADFMT option to load the format in the value order specified by the format, and the ORDER=DATA option to maintain the order of the format values and labels. You may have to list other CLASS variables in a separate CLASS statement if they do not use multilabel formats.
Output is shown here:
<table>
<thead>
<tr>
<th>Decision</th>
<th>N</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>APPROVE TOTALS</td>
<td>314</td>
<td>31.40%</td>
</tr>
<tr>
<td>a1: Approval</td>
<td>163</td>
<td>16.30%</td>
</tr>
<tr>
<td>a2: Weak Approval</td>
<td>45</td>
<td>4.50%</td>
</tr>
<tr>
<td>a4: Approved Alternate Product</td>
<td>106</td>
<td>10.60%</td>
</tr>
<tr>
<td>DECLINE TOTALS</td>
<td>686</td>
<td>68.60%</td>
</tr>
<tr>
<td>d1: Decline for Credit</td>
<td>453</td>
<td>45.30%</td>
</tr>
<tr>
<td>d6: Decline Other</td>
<td>233</td>
<td>23.30%</td>
</tr>
<tr>
<td>All</td>
<td>1,000</td>
<td>100.00%</td>
</tr>
</tbody>
</table>
**THE “MISSING CLASS LEVELS” PROBLEM**
While presenting at another user group conference in 2006, I was asked how to include missing levels of a CLASS variable in a TABULATE report so that the report looks consistent from month to month. The solution uses the CLASSDATA= option, pointing to a data set that includes a record for every level of the CLASS variable. The actual application was reporting frequencies of fish length for fish sampled in a particular stream. Let's see how the solution was developed.
First, some simulated data with missing intervals:
```
data test;
do i = 1 to 1000;
length = int(100*ranuni(23));
if length < 40 or length > 50 then output;
end;
run;
```
Next we will add the FORMAT for the interval definition and our usual PICTURE FORMAT to add the % sign to PCTSUM and PCTN statistics:
```sas
proc format;
value length low - < 20 = ' 0 - LT 20'
20 - < 40 = ' 20 - LT 40'
40 - < 50 = ' 40 - LT 50'
50 - < 75 = ' 50 - LT 75'
75 - high = ' 75+'
;
picture pct (round) low - high = '0009.99%' ; run;
```
Let us now create the CLASSDATA. This data set needs at least one record for each level listed in the user generated FORMAT above. Only the CLASS variable is required for the data.
```sas
data classes;
do length = 0,20,40,50,75;
output;
end;
run;
```
The TABULATE code follows with the CLASSDATA= option:
```sas
proc tabulate data=test noseps formchar=' ' missing
              classdata=classes;
  class length;
  format length length.;
  table length all,
        n*f=comma6.
        pctn<length all>='%'*f=pct.
        / rts=20 row=float misstext=' ';
run;
```
And now we see the output with the missing interval included:
<table>
<thead>
<tr>
<th></th>
<th>N</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>length</td>
<td></td>
<td></td>
</tr>
<tr>
<td>0 - LT 20</td>
<td>201</td>
<td>22.41%</td>
</tr>
<tr>
<td>20 - LT 40</td>
<td>208</td>
<td>23.19%</td>
</tr>
<tr>
<td>40 - LT 50</td>
<td>0</td>
<td>0.00%</td>
</tr>
<tr>
<td>50 - LT 75</td>
<td>211</td>
<td>23.52%</td>
</tr>
<tr>
<td>75+</td>
<td>277</td>
<td>30.88%</td>
</tr>
<tr>
<td>All</td>
<td>897</td>
<td>100.00%</td>
</tr>
</tbody>
</table>
We can handle the missing N under the “40 – LT 50” group by changing the MISSTEXT option to ‘0’.
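The change is a single option on the TABLE statement; the rest of the code stays the same:

```sas
proc tabulate data=test noseps formchar=' ' missing
              classdata=classes;
  class length;
  format length length.;
  table length all,
        n*f=comma6.
        pctn<length all>='%'*f=pct.
        / rts=20 row=float misstext='0';  /* empty cells print as 0 */
run;
```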
STEP 9 - SIT BACK, SMILE AND BE PROUD OF YOUR REPORT
Once your TABULATE report is generated, feel confident that you have begun your trip to understanding the mystery of PROC TABULATE. Your trip will be a long one, but it will be full of enlightenment as you become a TABULATE Master.
CONCLUSION
The TABULATE procedure is a complex PROC that generates tabular reports. There is no single correct way to generate reports in TABULATE. This paper provided a framework that will help you understand the workings of TABULATE and will provide a starting framework on how to generate table reports using PROC TABULATE.
Often reports can be generated without additional DATA steps. The use of PROC FORMAT can assist in labeling output. DATA steps may be required to generate additional variables for TABULATE output and/or modify variables to generate statistics required in your output.
REFERENCES:
DelGobbo, V. "Creating AND Importing Multi-Sheet Excel Workbooks the Easy Way with SAS®", SUGI 31
CONTACT INFORMATION
Your comments and questions are valued and encouraged. Contact the author at:
Jonas V. Bilenas
JP Morgan Chase Bank
Wilmington, DE 19801
302-282-2462
Email: Jonas.Bilenas@chase.com
jonas@jonasbilenas.com
Web: http://www.jonasbilenas.com
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies.
This work is an independent effort and does not necessarily represent the practices followed at JP Morgan Chase Bank.
The Meta-Problem for Conservative Mal’tsev Constraints
Clément Carbonnel*
LAAS-CNRS
University of Toulouse, INP Toulouse, France
carbonnel@laas.fr
Abstract
In the algebraic approach to CSP (Constraint Satisfaction Problem), the complexity of constraint languages is studied using closure operations called polymorphisms. Many of these operations are known to induce tractability of any language they preserve. We focus on the meta-problem: given a language Γ, decide if Γ has a polymorphism with nice properties. We design an algorithm that decides in polynomial-time if a constraint language has a conservative Mal’tsev polymorphism, and outputs one if one exists. As a corollary we obtain that the class of conservative Mal’tsev constraints is uniformly tractable, and we conjecture that this result remains true in the non-conservative case.
1 Introduction
The complexity of constraint satisfaction problems is a very active and fruitful research area. In particular, the study of CSP over fixed constraint languages has attracted considerable interest since it was conjectured that for every finite constraint language Γ, CSP(Γ) is either in P or NP-hard (the Feder-Vardi Dichotomy Conjecture) (Feder and Vardi 1998). The most remarkable achievements to date include a characterization of languages that can be solved by local consistency methods (Barto and Kozik 2014) or Gaussian-like algorithms (Idziak et al. 2007), and a proof of the Dichotomy Conjecture for conservative languages (languages with all possible unary relations over the domain) (Bulatov 2003). These results use the algebraic approach to CSP: every language Γ can be associated with a set of closure operations, called polymorphisms, which have been shown to entirely determine the complexity of CSP(Γ) (Jeavons, Cohen, and Gyssens 1997).
Given an operation \( f : D^k \to D \), a language Γ over the domain \( D \) admits \( f \) as a polymorphism if every constraint relation \( R \in \Gamma \) is closed under componentwise application of \( f \). For example, the affine relation \( x + y + z = c \) is closed under the polymorphism \( f(x_1, x_2, x_3) = x_1 - x_2 + x_3 \), since \( x_1 + y_1 + z_1 = c \), \( x_2 + y_2 + z_2 = c \), \( x_3 + y_3 + z_3 = c \) imply that \( f(x_1, x_2, x_3) + f(y_1, y_2, y_3) + f(z_1, z_2, z_3) = (x_1 - x_2 + x_3) + (y_1 - y_2 + y_3) + (z_1 - z_2 + z_3) = c \). A number of sufficient conditions for tractability have been identified this way; for instance, CSP(Γ) is solved by enforcing generalized arc-consistency (GAC) if Γ has a semilattice polymorphism (Jeavons, Cohen, and Gyssens 1997). Each sufficient condition defines a tractable class, that is, a set \( T \) of languages such that \( \forall \Gamma \in T, \text{ CSP}(\Gamma) \) is in P.
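As a quick sanity check (our own sketch, not from the paper), the closure condition can be verified by brute force on a small instance; the modulus $p = 5$ and constant $c = 2$ below are arbitrary choices for illustration.

```python
from itertools import product

p, c = 5, 2  # assumed parameters for the example
# R = {(x, y, z) : x + y + z = c (mod p)}
R = [t for t in product(range(p), repeat=3) if sum(t) % p == c]

def f(a, b, d):
    # the operation from the text: a - b + d (mod p)
    return (a - b + d) % p

def closed_under(R, f):
    # apply f componentwise to every triple of tuples of R
    return all(tuple(f(t1[i], t2[i], t3[i]) for i in range(3)) in R
               for t1, t2, t3 in product(R, repeat=3))

print(closed_under(R, f))  # True: f is a polymorphism of R
```

By contrast, the componentwise maximum is not a polymorphism of this relation, which the same checker confirms.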
There are some desirable properties that good tractable classes can be expected to have. First, we know that there exists a polynomial-time algorithm for each fixed Γ ∈ T, but there is no guarantee that there exists one polynomial-time algorithm that solves every CSP(Γ), Γ ∈ T. This can be formalized as a promise problem: if CSP(T) is CSP together with the promise that the instance is over a language in T, is it true that CSP(T) ∈ P? If the answer is yes, we say that T is uniformly tractable (or equivalently that T uniformizes (Kolaitis and Vardi 2000)).
We shall illustrate this notion with an example. Consider the tractable class \( T_c \) of all languages Γ such that CSP(Γ) can be solved by enforcing strong \( k \)-consistency, where \( k \) only depends on Γ. Since there is no bound on \( k \) in the definition of \( T_c \), it is not clear that \( T_c \) is uniformly tractable. However, a powerful result by Bulatov implies that enforcing a form of consistency called \((2, 3)\)-minimality suffices to solve CSP(Γ) for each Γ ∈ \( T_c \) (Bulatov 2010). Enforcing \((2, 3)\)-minimality is polynomial-time, so \( T_c \) is uniformly tractable.
Even if the class is uniformly tractable, one problem remains: how hard is it to decide if a given language Γ is in T? This is the meta-problem for T. In its full generality, the meta-problem has no restriction on the input language. In particular, the domain size is not assumed to be bounded. In the worst case the meta-problem is not necessarily decidable, but in practice it is often in NP. If the class is defined by the existence of polymorphisms satisfying a certain set of identities (which is usually the case), the meta-problem is a polymorphism detection problem. For instance, the class of languages that admit a semilattice polymorphism is uniformly tractable since it is solved by GAC, but the meta-problem is NP-complete (Green and Cohen 2008).
Beyond pure academic interest, the main reason for investigating the complexity of meta-problems concerns general-purpose solvers. It is great to know that languages with a nice polymorphism can be solved efficiently, but this information is virtually useless for practical constraint solvers if they cannot decide quickly whether the language of the instance they are trying to solve has the desired polymorphism. Furthermore, it was observed that constraint solvers may perform poorly even on instances that are theoretically easy (Petke and Jeavons 2009), which suggests that spending some time analyzing the instance before starting search could be beneficial. Beyond preprocessing uses, a very efficient detection algorithm could be exploited in the framework of backdoors, which aims to provide performance improvements even if only a fraction of the constraints have a nice polymorphism (Williams, Gomes, and Selman 2003). In this setting, conservative polymorphisms are of special interest (Bessiere et al. 2013).
*Supported by ANR Project ANR-10-BLAN-0210.
Sometimes, the complexity of the meta-problem is strongly related to the uniform tractability question. This is true for the tractable class $T_{Mal}$ of all languages that admit a Mal’tsev polymorphism, which includes as particular cases the languages whose relations are linear equations over a field (Bulatov 2002). The solution algorithm resembles Gaussian elimination, in that it starts from an instance without any constraint and then adds the constraints one by one while maintaining at all times a polynomial-sized representation of the solution set (Bulatov and Dalmau 2006; Dyer and Richerby 2013). This algorithm remains polynomial-time even if the domain size or the number of tuples is not fixed, but it does not entail uniform tractability because it assumes that the Mal’tsev polymorphism is known. Since there are roughly $d^{d^3}$ possible Mal’tsev operations over a domain of size $d$ and it is possible that only one of them is a polymorphism of the language, an exhaustive approach is not satisfying. However, if a polynomial-time algorithm that outputs a Mal’tsev polymorphism whenever one exists could be engineered, we could interface it with the state-of-the-art solution algorithm and prove uniform tractability of $T_{Mal}$. But then, this polymorphism detection algorithm would also show that the meta-problem is in P.
This paper builds around the observation that $T_{Mal}$ is likely to have an easy meta-problem and be uniformly tractable. Although we cannot prove this claim in its full generality, we present a proof for the restricted case of conservative Mal’tsev polymorphisms. This extends previous results showing that conservative Mal’tsev polymorphisms can be detected in polynomial time in digraphs (Carvalho et al. 2011) and binary relational structures (Bessiere et al. 2013). As a byproduct, we obtain a greatly improved algorithm for detecting conservative majority polymorphisms, which generalise 2SAT and connected row-convex constraints.
Besides being a first step towards proving the uniform tractability of Mal’tsev constraints, our result for the conservative case is interesting in its own right. The tractable class of languages having a conservative Mal’tsev polymorphism has seen little practical use, but is of great theoretical importance. For instance, conservative Mal’tsev polymorphisms are one of the main ingredients in Libor Barto’s proof of the conservative Dichotomy Conjecture (Barto 2011). Moreover, the existence of a conservative Mal’tsev polymorphism is a necessary condition for the tractability of CCSP($\Gamma$), a variant of CSP($\Gamma$) in which global cardinality constraints are allowed in addition to the relations of $\Gamma$ (Bulatov and Marx 2010). Examples of conservative Mal’tsev operations include extreme value functions, which map any triple $(x, y, z)$ of natural numbers to the $\alpha \in \{x, y, z\}$ such that $|\alpha - \text{median}(x, y, z)|$ is maximum.
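As an illustration (our own sketch, not code from the paper), such an extreme value function can be written down and checked against the Mal’tsev identities $f(x,x,y) = f(y,x,x) = y$ and conservativity on a small range; the Mal’tsev identities never involve triples where ties between distinct values occur, so the tie-breaking rule is immaterial.

```python
from itertools import product
from statistics import median

def extreme(x, y, z):
    # return the argument farthest from the median; ties are broken in
    # favour of the first maximal argument encountered
    m = median((x, y, z))
    return max((x, y, z), key=lambda a: abs(a - m))

D = range(4)
assert all(extreme(x, x, y) == y and extreme(y, x, x) == y
           for x, y in product(D, repeat=2))      # Mal'tsev identities
assert all(extreme(x, y, z) in {x, y, z}
           for x, y, z in product(D, repeat=3))   # conservativity
```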
2 Preliminaries
CSP. A Constraint Satisfaction Problem (CSP) instance is a triple $I = (\mathcal{X}, \mathcal{D}, \mathcal{C})$ where $\mathcal{X}$ is a set of variables, $\mathcal{D}$ is a finite set of values, and $\mathcal{C}$ is a set of constraints. A constraint $C$ of arity $r$ is a pair $(S(C), R(C))$ where $S(C) \in \mathcal{X}^r$ is the scope of $C$ and $R(C) \subseteq \mathcal{D}^r$ is the relation of $C$. Note that $R(.)$ and $S(.)$ can be seen as operators that return the relation and scope of a constraint. A solution of $I$ is an assignment $\phi : \mathcal{X} \rightarrow \mathcal{D}$ such that $\forall C \in \mathcal{C}, \phi(S(C)) \in R(C)$, and the goal is to decide if $I$ has a solution. A constraint language is a set of relations, and the language of a CSP instance $I$ is the set $\Gamma_I = \{R(C) \mid C \in \mathcal{C}\}$. Given a fixed constraint language $\Gamma$, CSP($\Gamma$) is the set of all instances $I$ such that $\Gamma_I \subseteq \Gamma$. We assume that all relations are given in extension (i.e. as lists of tuples).
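To make the definitions concrete, here is a minimal sketch (the data layout and the toy instance are ours, not from the paper) of a CSP instance with relations in extension and a naive exhaustive solver.

```python
from itertools import product

# A toy instance (X, D, C); each constraint is a pair (scope, relation),
# with the relation given in extension as a list of tuples.
X = ["a", "b", "c"]
D = [0, 1]
neq = [(0, 1), (1, 0)]                      # binary disequality relation
C = [(("a", "b"), neq), (("b", "c"), neq)]

def is_solution(phi, constraints):
    """phi maps variables to values; check phi(S(C)) in R(C) for all C."""
    return all(tuple(phi[v] for v in scope) in rel for scope, rel in constraints)

def solve(X, D, constraints):
    """Naive exhaustive search over all |D|^|X| assignments."""
    for values in product(D, repeat=len(X)):
        phi = dict(zip(X, values))
        if is_solution(phi, constraints):
            return phi
    return None

print(solve(X, D, C))  # {'a': 0, 'b': 1, 'c': 0}
```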
Polymorphisms. An operation $f : \mathcal{D}^k \rightarrow \mathcal{D}$ is a polymorphism of a language $\Gamma$ over $\mathcal{D}$ if for all $R \in \Gamma$ of arity $r$ and all $t_1, \ldots, t_k \in R$,
\[
(f(t_1[1], \ldots, t_k[1]), \ldots, f(t_1[r], \ldots, t_k[r])) \in R
\]
The set of all polymorphisms of $\Gamma$ is denoted by Pol($\Gamma$) and constitutes an operational clone, that is, a set of operations closed under composition that contains all projections (Jeavons, Cohen, and Gyssens 1997). It has been shown that the complexity of CSP($\Gamma$) is entirely determined by Pol($\Gamma$) (Jeavons, Cohen, and Gyssens 1997). An operation $f : \mathcal{D}^k \rightarrow \mathcal{D}$ is conservative if $f(x_1, \ldots, x_k) \in \{x_1, \ldots, x_k\}$ for all $x_1, \ldots, x_k \in \mathcal{D}$, Mal’tsev if it is ternary and $\forall x, y \in \mathcal{D}$, $f(x, x, y) = f(y, x, x) = y$, and majority if it is ternary and $\forall x, y \in \mathcal{D}$, $f(x, x, y) = f(x, y, x) = f(y, x, x) = x$.
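The closure condition and the identities above translate directly into a brute-force checker (a sketch under our own naming, suitable only for small relations):

```python
from itertools import product

def is_polymorphism(f, k, language):
    """Componentwise closure: for every relation R in the language and all
    tuples t_1..t_k in R, applying f coordinate by coordinate stays in R."""
    for R in language:
        arity = len(next(iter(R)))
        for ts in product(R, repeat=k):
            if tuple(f(*(t[i] for t in ts)) for i in range(arity)) not in R:
                return False
    return True

def is_maltsev(f, domain):
    # f(x,x,y) = f(y,x,x) = y for all x, y
    return all(f(x, x, y) == y and f(y, x, x) == y
               for x, y in product(domain, repeat=2))

# the Boolean relation x + y + z = 0 over GF(2) and a Mal'tsev polymorphism
R = {t for t in product((0, 1), repeat=3) if t[0] ^ t[1] ^ t[2] == 0}
xor3 = lambda x, y, z: x ^ y ^ z
print(is_maltsev(xor3, (0, 1)), is_polymorphism(xor3, 3, [R]))  # True True
```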
Tools. The most useful tool used to design polymorphism detection algorithms is the indicator problem. Formally, given an integer $k$ and a finite constraint language $\Gamma$ over $\mathcal{D}$, the indicator problem of order $k$ of $\Gamma$ is a CSP instance $IP_k(\Gamma)$ with one variable $x_{v_1,\ldots,v_k}$ for every $k$-tuple $(v_1,\ldots,v_k)$ of elements from $\mathcal{D}$. Then, for each $R \in \Gamma$ of arity $r$ and all $t_1, \ldots, t_k \in R$, $IP_k(\Gamma)$ contains a constraint $C_{R,t_1,\ldots,t_k}$ with scope $(x_{t_1[1],\ldots,t_k[1]},\ldots,x_{t_1[r],\ldots,t_k[r]})$ and relation $R$. Going back to the definition of a polymorphism, it follows that an operation $f$ of arity $k$ is a polymorphism of $\Gamma$ if and only if $x_{v_1,\ldots,v_k} \leftarrow f(v_1,\ldots,v_k)$ is a solution of $IP_k(\Gamma)$.
If we are only looking for polymorphisms with special properties, the solution set of $IP_k(\Gamma)$ can sometimes be restricted to exactly those polymorphisms by adding unary constraints.
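A literal construction of $IP_k(\Gamma)$ for relations given in extension might look as follows (a sketch with our own data layout, identifying each variable with its index tuple):

```python
from itertools import product

def indicator_problem(language, domain, k):
    """One variable per k-tuple over the domain, and one constraint per
    relation R and per choice of k tuples t_1..t_k in R."""
    variables = list(product(domain, repeat=k))
    constraints = []
    for R in language:
        arity = len(next(iter(R)))
        for ts in product(R, repeat=k):  # the tuples t_1, ..., t_k
            scope = tuple(tuple(t[i] for t in ts) for i in range(arity))
            constraints.append((scope, R))
    return variables, constraints

# f of arity k is a polymorphism of the language iff assigning each
# variable x_{v_1..v_k} the value f(v_1,..,v_k) satisfies every constraint
R = {t for t in product((0, 1), repeat=3) if t[0] ^ t[1] ^ t[2] == 0}
variables, constraints = indicator_problem([R], (0, 1), 3)
xor3 = lambda v: v[0] ^ v[1] ^ v[2]
print(all(tuple(xor3(v) for v in scope) in rel for scope, rel in constraints))
```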
3 First observation
Recall that the existence of a uniform algorithm for Mal’tsev constraints is equivalent to the tractability of the problem CSP($T_{Mal}$), where we only have the promise that the language of the instance has a Mal’tsev polymorphism. The complexity of this problem is open, but it is easy to see that the following is true.
Observation 1. CSP($T_{Mal}$) $\in$ NP $\cap$ coNP.
Proof. Membership in NP follows from that of the general CSP. For membership in coNP, a Mal’tsev polymorphism $f$ of the constraint language is a certificate: with the knowledge of $f$, the algorithm from (Dyer and Richerby 2013) provides a way to check in polynomial time that the instance has no solution.
Unless NP = coNP, this observation rules out the possibility that this problem is NP-hard (Goldreich 2010). Besides, examples of NP $\cap$ coNP problems that are not believed to be in P are quite rare, so we regard this observation as evidence that Mal’tsev constraints may have a uniform algorithm. Using the same kind of reasoning as for majority polymorphisms, it would follow that Mal’tsev polymorphisms can be detected in polynomial time. The next section will provide additional evidence by proving that conservative Mal’tsev constraints are uniformly tractable.
We note that Observation 1 also applies to the larger tractable class of languages having a $k$-edge polymorphism (for a fixed $k$) by using the algorithm of (Idziak et al. 2007) for membership in coNP. However, for the sake of simplicity we shall focus on the case $k = 2$, which corresponds to Mal’tsev polymorphisms.

4 Conservative Mal’tsev constraints
In this section, we show that the existence of a conservative Mal’tsev polymorphism can be decided in polynomial time. The outline of the proof is as follows. We first reduce the problem to that of finding a conservative minority polymorphism (i.e. a ternary polymorphism $m$ such that $\forall x, y$, $m(x, x, y) = m(x, y, x) = m(y, x, x) = y$). Then, we show that enforcing arc-consistency on the indicator problem associated with conservative minority polymorphisms leaves an extremely well-structured instance, and a simple reduction rule allows us to eliminate every variable whose domain contains more than two values. The residual instance is then shown to be equivalent to a system of linear equations over $GF(2)$, and can be solved by Gaussian elimination.
Lemma 1. Let $F$ be an operational clone. $F$ contains a conservative Mal’tsev operation if and only if it contains a conservative minority operation.
Proof. Every minority operation is a Mal’tsev operation, hence one implication is trivial. Suppose that $F$ contains a conservative Mal’tsev operation $m$, and let
$$f(x, y, z) = m(z, m(y, m(x, z, y), x), m(x, z, y))$$
This operation belongs to $F$ because $F$ is a clone, and is conservative since $m$ is conservative. Furthermore, for every $a, b$ we have:
$$f(a, b, a) = m(a, m(b, m(a, a, b), a), m(a, a, b)) = b$$
$$f(b, a, a) = m(a, m(a, m(b, a, a), b), m(b, a, a)) = b$$
and it is fairly straightforward to see that $f(a, a, b) = m(b, m(a, m(a, b, a), a), m(a, b, a))$ is always equal to $b$, whether $m(a, b, a) = a$ or $m(a, b, a) = b$. Hence, $f$ is a minority operation of $F$.
Although this lemma may be known to some, it appears to have never been pointed out in the literature. The closest results we could find were that digraphs with a conservative Mal’tsev polymorphism also have a conservative minority polymorphism (Carvalho et al. 2011) and that constraint languages with both a conservative majority and a conservative Mal’tsev polymorphism also have a conservative minority polymorphism (Bulatov and Marx 2010). In our case, this lemma is crucial, since the indicator problem corresponding to conservative minority polymorphisms has interesting (i.e., algorithmically exploitable) properties that its counterpart for Mal’tsev polymorphisms does not have.
Given a language $\Gamma$, we denote by $IP^{cmin}(\Gamma)$ the indicator problem of order 3 of $\Gamma$ with the additional constraints $x_{v_1,v_1,v_2} \in \{v_2\}$, $x_{v_1,v_2,v_1} \in \{v_2\}$, $x_{v_2,v_1,v_1} \in \{v_2\}$ for every $v_1, v_2 \in D$, and $x_{v_1,v_2,v_3} \in \{v_1,v_2,v_3\}$ for every $v_1, v_2, v_3 \in D$. By construction, the solutions of $IP^{cmin}(\Gamma)$ are exactly the conservative minority polymorphisms of $\Gamma$. Given a constraint $C = (S, R)$ and $S' \subseteq S$, we denote by $C[S']$ the projection of $C$ onto $S'$.
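The construction in the proof of Lemma 1 only uses the Mal’tsev identities and conservativity, so it can be confirmed by exhaustive search over a small domain (our own sketch, not part of the paper's algorithm):

```python
from itertools import product

D = (0, 1, 2)

def conservative_maltsev_ops(D):
    """Enumerate all conservative Mal'tsev operations over D as tables."""
    triples = list(product(D, repeat=3))
    forced = {}
    for t in triples:
        if t[0] == t[1]:
            forced[t] = t[2]            # m(x,x,y) = y
        elif t[1] == t[2]:
            forced[t] = t[0]            # m(y,x,x) = y
    free = [t for t in triples if t not in forced]
    for choice in product(*[sorted(set(t)) for t in free]):
        yield {**forced, **dict(zip(free, choice))}

def derived_minority(m):
    # f(x,y,z) = m(z, m(y, m(x,z,y), x), m(x,z,y)) from the proof of Lemma 1
    return lambda x, y, z: m[z, m[y, m[x, z, y], x], m[x, z, y]]

def is_conservative_minority(f, D):
    return (all(f(x, x, y) == y and f(x, y, x) == y and f(y, x, x) == y
                for x, y in product(D, repeat=2))
            and all(f(x, y, z) in {x, y, z} for x, y, z in product(D, repeat=3)))

assert all(is_conservative_minority(derived_minority(m), D)
           for m in conservative_maltsev_ops(D))
print("checked", 2 ** 6 * 3 ** 6, "conservative Mal'tsev operations")
```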
For our structural analysis we will assume that for every $R \in \Gamma$, $IP^{cmin}(\Gamma)$ also contains a constraint $C_{R',t'_1,t'_2,t'_3}$ for every projection $R'$ of $R$ and all $t'_1, t'_2, t'_3 \in R'$. These additional constraints are only needed to facilitate our analysis and will not be required by the algorithm.
In a generalized arc-consistent instance, the domain $D(x)$ of a variable $x$ is the set of values for $x$ that have a support in every constraint whose scope contains $x$. For the remainder of the paper, we will assume that GAC has been enforced on $IP^{cmin}(\Gamma)$. The following observation describes an important but very general property that will be used repeatedly in our proofs.
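Generalized arc-consistency can be enforced with a simple fixpoint loop; the sketch below (our own, with no attempt at the stated complexity bounds) operates on constraints given in extension.

```python
def enforce_gac(domains, constraints):
    """Remove values without a support in some constraint until fixpoint.
    domains: dict variable -> set of values; constraints: (scope, relation)."""
    changed = True
    while changed:
        changed = False
        for scope, rel in constraints:
            # tuples of the relation still compatible with current domains
            live = [t for t in rel
                    if all(t[i] in domains[v] for i, v in enumerate(scope))]
            for i, v in enumerate(scope):
                supported = {t[i] for t in live}
                if not domains[v] <= supported:
                    domains[v] &= supported
                    changed = True
    return domains
```

For example, with a disequality constraint and one variable fixed, propagation prunes the other variable's domain accordingly.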
**Observation 2.** If \(C_{R^*,t_1,t_2,t_3} = (S, R)\) is a constraint in \(IP^{cmin}(\Gamma)\) and \(t, t', t'' \in R\), then \(\mathcal{R}(C_{R^*,t,t',t''}) \subseteq R\).
Proof. Let \(C_{R^*,t,t',t''} = (S', R')\), and \(|S| = |S'| = r\). Before GAC was enforced, both \(C_{R^*,t_1,t_2,t_3}\) and \(C_{R^*,t,t',t''}\) had \(R^*\) as relation. Thus, by definition of generalized arc-consistency, we have \(R = \{u \in R^* \mid \forall i,\ u[i] \in D(S[i])\}\) and \(R' = \{u \in R^* \mid \forall i,\ u[i] \in D(S'[i])\}\). However, since \(t, t', t'' \in R\), the conservativity constraints ensure that for each \(i = 1, \ldots, r\), \(D(S'[i]) \subseteq D(S[i])\). Therefore, \(R' \subseteq R\).
Throughout the paper we will treat elements of a scope \(S\) as occurrences of variables, and not simply variables. For example, given \(x \in S\), the restricted scope \(S\setminus x\) removes the occurrence \(x\) from \(S\), but not every occurrence of the variable represented by \(x\). A constraint \(C = (S, R)\) is functional in \(x \in S\) if for every valid assignment \(t\) of \(S\setminus x\) there is at most one value \(d \in D\) such that \((S\setminus x) \leftarrow t, x \leftarrow d\) is an assignment to \(S\) that satisfies \(C\). Finally, if two relations \(R\) and \(R'\) differ only by a permutation of their columns, we write \(R \approx R'\). The proof of the next lemma gives a simple example of the use we will make of Observation 2.
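Functionality of a constraint in (an occurrence of) a variable is easy to test on a relation in extension; a small sketch:

```python
def is_functional_in(relation, pos):
    """(S, R) is functional in the variable at position pos if every
    assignment to the other positions extends to at most one value."""
    seen = {}
    for t in relation:
        rest = t[:pos] + t[pos + 1:]
        if seen.setdefault(rest, t[pos]) != t[pos]:
            return False
    return True

print(is_functional_in({(0, 0), (1, 1)}, 1),   # True: equality is functional
      is_functional_in({(0, 0), (0, 1)}, 1))   # False: 0 extends to 0 and 1
```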
We remind the reader that if \(C = C_{R^*,t_1,t_2,t_3}\) is a constraint of \(IP^{cmin}(\Gamma)\), the \(k\)th variable in its scope is \(x_{t_1[k],t_2[k],t_3[k]}\). Therefore, if \(t_1[k] = t_2[k]\), the unary constraints will ensure that \(x_{t_1[k],t_2[k],t_3[k]}\) is ground (i.e. has a singleton domain) with value \(t_3[k]\).
**Lemma 2.** Let \(C = (S, R)\) be a constraint in \(IP^{cmin}(\Gamma)\), and let \(x \in S\). Either \(C\) is functional in \(x\), or \(R \approx \mathcal{R}(C[S\setminus x]) \times D(x)\).
Proof. Let \(C = C_{R^*,t_1,t_2,t_3}\) and \(x = x_{v_1,v_2,v_3}\). Without loss of generality, we assume that \(x\) occurs last in \(S\). First, suppose that there exists \(t \in \mathcal{R}(C[S\setminus x])\) such that \((t, v) \in \mathcal{R}(C)\) for every \(v \in D(x)\). We will show that every tuple has the same property as \(t\). Suppose, for the sake of contradiction, that there exist \(t' \in \mathcal{R}(C[S\setminus x])\) and \(v_3 \in D(x)\) such that \((t', v_3) \notin \mathcal{R}(C)\). Then, because of the unary constraints, the constraint \(C_{R^*,(t,v_1),(t,v_2),(t',v_3)}\) has only ground variables in its scope, and its only possible support is \((t', v_3)\). By Observation 2, \(\mathcal{R}(C_{R^*,(t,v_1),(t,v_2),(t',v_3)}) \subseteq \mathcal{R}(C)\) and hence \((t', v_3) \in \mathcal{R}(C)\), a contradiction. Therefore, such a tuple \(t'\) cannot exist and \(R \approx \mathcal{R}(C[S\setminus x]) \times D(x)\).
Now, suppose that \(D(x) = \{v_1, v_2, v_3\}\) and there exists \(t \in \mathcal{R}(C[S\setminus x])\) such that \((t, v_k) \in \mathcal{R}(C)\) for exactly two indices \(k\), say 1 and 2. Since \(C\) is arc-consistent, there exists \(t'\) such that \((t', v_3) \in \mathcal{R}(C)\). However, the scope of the constraint \(C_{R^*,(t,v_1),(t,v_2),(t',v_3)}\) contains only ground variables except \(x\); therefore, by generalized arc-consistency, \(\mathcal{R}(C_{R^*,(t,v_1),(t,v_2),(t',v_3)})\) contains the tuple \((t', v_k)\) for all \(k \in \{1, 2, 3\}\). By Observation 2 we have \(\mathcal{R}(C_{R^*,(t,v_1),(t,v_2),(t',v_3)}) \subseteq \mathcal{R}(C)\), and the partial tuple \(t'\) brings us back to the first case.
If no tuples satisfy either of the above two conditions, \(C\) is functional in \(x\).
The key observation in our proof will be that variables with domain size 1 or 2 have very limited interactions with variables with domain size 3 once arc-consistency has been enforced. Given a constraint \(C = (S, R)\) in \(IP^{cmin}(\Gamma)\), we denote by \(S_{1,2}(C)\) the restriction of \(S\) to the variables with domain size 1 or 2, and by \(S_3(C)\) the restriction of \(S\) to the variables with domain size 3.
**Lemma 3.** Let \(C\) be a constraint in \(IP^{cmin}(\Gamma)\) and \(x \in S_3(C)\). Then
\(\mathcal{R}(C[S_{1,2}(C) \cup x]) \approx \mathcal{R}(C[S_{1,2}(C)]) \times D(x)\).
Proof. Let \(C_1 = C[S_{1,2}(C)] = (S_1, R_1)\), \(C_2 = C[S_{1,2}(C) \cup x] = (S_2, R_2)\) and assume that \(x = x_{v_1,v_2,v_3}\) occurs last in the scope of \(C_2\). By Lemma 2, either \(R_2 \approx R_1 \times D(x)\) or \(C_2\) is functional in \(x\). If it is functional, then by GAC there exist \(t, t', t'' \in R_1\) such that \(R_2\) contains \((t, v_1)\), \((t', v_2)\) and \((t'', v_3)\). Then, the scope of \(C' = C_{R',(t,v_1),(t',v_2),(t'',v_3)}\), where \(R'\) is the projection of \(R^*\) onto the positions of \(S_2\), contains only ground variables (those corresponding to \(S_{1,2}(C)\)) plus \(x_{v_1,v_2,v_3}\). Therefore, there exists \(\bar{t}\) such that \(\mathcal{R}(C')\) contains \((\bar{t}, v_1)\), \((\bar{t}, v_2)\) and \((\bar{t}, v_3)\). By Observation 2, \(\mathcal{R}(C') \subseteq R_2\) and \(C_2\) is not functional in \(x\), a contradiction.
Lemma 3 only deals with constraints whose scope contains exactly one variable with domain size 3. Unfortunately, for \(k\) variables it is not true in general that \(\mathcal{R}(C[S_{1,2}(C) \cup \{x_1, \ldots, x_k\}]) \approx \mathcal{R}(C[S_{1,2}(C)]) \times D(x_1) \times \ldots \times D(x_k)\). Let \(x_{v_1^i,v_2^i,v_3^i}\), \(1 \leq i \leq k\), be \(k\) variables of the indicator problem with domain size 3. The index-equality constraint between these variables has three satisfying assignments: \((v_1^1, \ldots, v_1^k)\), \((v_2^1, \ldots, v_2^k)\) and \((v_3^1, \ldots, v_3^k)\). The next Proposition is the keystone of our proof, and gives the correct generalization of Lemma 3 to an arbitrary number of variables with domain size 3.
**Proposition 1.** Let \(C\) be a constraint in \(IP^{cmin}(\Gamma)\). There exist \(n \geq 0\) and a set of constraints \(C^*, C_{1}, \ldots, C_{n}\) such that
\[C = C^* \wedge \left( \bigwedge_{i=1}^{n} C_i \right)\]
where the scope of $C^*$ is $S(C)$, the constraints $C_i$ are (possibly unary) index-equalities whose scopes are disjoint and cover $S_3(C)$, and
$$\mathcal{R}(C^*) \approx \mathcal{R}(C[S_{1,2}(C)]) \times \prod_{x \in S_3(C)} D(x)$$
Proof. We proceed by induction on the size of $S_3(C)$; the base case $S_3(C) = \emptyset$ is trivial. Let $k \geq 0$ and suppose that Proposition 1 is true for all constraints $C'$ such that $|S_3(C')| \leq k$. Let $C = C_{R^*,t_1,t_2,t_3} = (S,R)$ be a constraint with $|S_3(C)| = k + 1$, and $x \in S_3(C)$. By Lemma 2, either $C$ is functional in $x$ or $R \approx \mathcal{R}(C[S\setminus x]) \times D(x)$. In the latter case, $C$ satisfies Proposition 1 by induction. Therefore, we shall assume that $C$ is functional in $x$.
By induction, we know that $C[S\setminus x] \equiv C^* \wedge \bigwedge_{i=1..n} C_i$. Let $y \in \{1..n\}$ and $Y = S(C_y)$. Let $\nu_i$, $i = 1,2,3$, denote the three possible assignments to $Y$. We assume without loss of generality that $x = x_{u_1,u_2,u_3}$ (hence, $D(x) = \{u_1,u_2,u_3\}$) and that $Y$ and $x$ occur last in $S$. Let $t \in \mathcal{R}(C[S\setminus(Y \cup x)])$, and define $\phi_t : D(Y) \rightarrow D(x)$ by letting $\phi_t(\nu)$ be the unique $u \in D(x)$ such that $(t,\nu,u) \in \mathcal{R}(C)$; uniqueness follows from the functionality of $C$ in $x$. We distinguish three cases.
1. $\phi_t$ has range $\{u_i, u_j\}$ for some $i \neq j$. One of these two values, say $u_i$, has a preimage of size 2. Let $\{\nu_{p_1}, \nu_{p_2}\} = \phi_t^{-1}(u_i)$ and let $\nu_{p_3} \notin \{\nu_{p_1}, \nu_{p_2}\}$, so that $\phi_t(\nu_{p_3}) = u_j$. The constraint $C_{R^*,(t,\nu_{p_1},u_i),(t,\nu_{p_2},u_i),(t,\nu_{p_3},u_j)}$ has only the variables in $Y$ as active variables in its scope ($x$ is ground with value $u_j$), and by arc-consistency its relation must contain the tuples $(t,\nu_{p_1},u_j)$, $(t,\nu_{p_2},u_j)$ and $(t,\nu_{p_3},u_j)$. By Observation 2, $R$ must contain these tuples, which contradicts the functionality of $C$ in $x$.
2. $\phi_t$ is bijective. Suppose first that there exist $i,j$ such that $i \neq j$ and $\phi_t(\nu_i) = u_j$. Let $u_h \notin \{u_i,u_j\}$, $t' = (t,\nu_i,u_j)$ and $t'' = (t,\phi_t^{-1}(u_h),u_h)$. Recall that $t_1$ is one of the three tuples associated with the constraint $C = C_{R^*,t_1,t_2,t_3}$, and hence $t_1 \in R^*$, $t_1[Y] = \nu_1$, and $t_1[x] = u_1$. Let $(t'_1,t'_2,t'_3)$ be the permutation of $(t_1,t',t'')$ such that $t'_k[x] = u_k$ for every $k$. Then, the constraint $C_{R^*,t'_1,t'_2,t'_3}$ has $x$ as the only active variable in its scope, and for every $u \in D(x)$ its relation must contain a tuple $t^u$ such that $t^u[l] = t_1[l]$ if $l \notin Y \cup \{x\}$, $t^u[Y] \in D(Y)$, and $t^u[x] = u$. Note that at this point, Observation 2 cannot be applied because $t_1$ may not belong to $\mathcal{R}(C)$. Let $(t^1,t^2,t^3)$ be the permutation of $(t^{u_i},t^{u_j},t^{u_h})$ such that $t^k[x] = u_k$ for every $k$. The constraint $C_{R^*,t^1,t^2,t^3}$ has only ground variables in its scope except $x$, and its relation must contain a tuple $t_f$ such that $t_f[x] = u_j$ and $t_f[l] = t^{u_i}[l]$ otherwise. However, since $\mathcal{R}(C_{R^*,t^1,t^2,t^3}) \subseteq \mathcal{R}(C)$ by Observation 2 and $C$ is functional in $x$, we have $t_f \notin \mathcal{R}(C)$, a contradiction. Therefore, if $\phi_t$ is bijective then it must map every $\nu_i$ to $u_i$.
Now, suppose that there exists a partial tuple $t'$ such that $\phi_{t'}$ is not equal to $\phi_t$. By Case 1 and the reasoning above, $\phi_{t'}$ must map every $\nu_i$ to the same value $u_p$. Let $\{\nu_{q_1},\nu_{q_2}\} = D(Y) \setminus \{\nu_p\}$. If we denote by $(t_1^h, t_2^h, t_3^h)$ the permutation of $(t',\nu_{q_1},u_p)$, $(t',\nu_{q_2},u_p)$ and $(t,\nu_p,u_p)$ such that $t_k^h[Y] = \nu_k$ for every $k$, the constraint $C_{R^*,t_1^h,t_2^h,t_3^h}$ has only the variables in $Y$ as active variables in its scope, and by arc-consistency its relation must contain the tuple $(t,\nu_{q_1},u_p)$. By Observation 2, this tuple must belong to $R$, a contradiction with the functionality of $C$ in $x$.
Finally, in this case every tuple must induce an index-equality between $Y$ and $x$. Therefore, we can add $x$ to the scope of $C_y$ and continue the induction.
3. $\phi_t$ has range $\{u\}$. By Cases 1 and 2, we know that the only situation where the induction may not hold is when $\phi_{t'}$ falls in this case for every partial tuple $t'$ and every choice of $Y$. For each $t' \in \mathcal{R}(C[S\setminus x])$ and each index-equality constrained set of variables $Y_i$, we define $\mathcal{J}_{Y_i}(t')$ to be $t'$ plus the set of all tuples that differ from $t'$ only on the assignment to $Y_i$. By functionality, for each $t' \in \mathcal{R}(C[S\setminus x])$ we can define $\psi(t')$ to be the sole value $u \in D(x)$ such that $(t', u) \in R$. It is immediate that $\psi(t'') = \psi(t''')$ for all $t'', t''' \in \mathcal{J}_{Y_i}(t')$, for any fixed $Y_i$ and $t'$. Furthermore, for any two tuples $t^1, t^2 \in \mathcal{R}(C[S\setminus x])$ such that $t^1[S_{1,2}(C)] = t^2[S_{1,2}(C)]$, there exist $t_{Y_1}, \ldots, t_{Y_n}$ such that $t_{Y_1} \in \mathcal{J}_{Y_1}(t^1)$, $t^2 \in \mathcal{J}_{Y_n}(t_{Y_n})$, and for each $i$, $t_{Y_{i+1}} \in \mathcal{J}_{Y_{i+1}}(t_{Y_i})$. Informally, starting from $t^1$ one can obtain $t^2$ by changing the assignments to each $Y_i$ one by one. By transitivity of equality, this means that $\psi(t^1) = \psi(t^2)$. Since this is true for any pair $t^1, t^2$ that share the same values on $S_{1,2}(C)$, it follows that $C[S_{1,2}(C) \cup x]$ is functional in $x$, a contradiction with Lemma 3.
\[\square\]
**Theorem 1.** There exists an algorithm that decides in polynomial time if a constraint language $\Gamma$ admits a conservative Mal'tsev polymorphism, and outputs one if one exists.
Proof. By Lemma 1, we can look for a conservative minority polymorphism instead. The algorithm builds $IP^{cmin}(\Gamma)$, which has $O(lt^3 + d^3)$ constraints and $O(d^3)$ variables, where $l$ is the number of relations and $t$ and $r$ are respectively the maximum number of tuples and the maximum arity of a relation. Then, we enforce GAC in time $O(rlt^4)$. By Proposition 1, assigning every variable $x_{v_1,v_2,v_3}$ with domain size $3$ to $v_1$ does not violate any constraint (since it respects index-equalities) and is consistent with every satisfying assignment to the remaining variables. Therefore, we can eliminate every variable with domain size $3$.
We are left with an instance whose active variables have domain size $2$, and if $\Gamma$ has a conservative minority polymorphism then the language of this instance must have one as well (conservative polymorphisms are preserved by GAC). Note that all minority operations coincide on 2-element domains; therefore, we can rename each domain to $\{0,1\}$ (arbitrarily) and obtain a CSP instance whose language has the unique Boolean minority polymorphism
\(n(x, y, z) = x - y + z \mod 2\). This instance is equivalent to a system of linear equations over GF(2), and any such instance with \(n\) variables and \(m\) constraints can be solved in time \(O(n^2m)\) by Gaussian elimination. In our case, the running time is \(O(rlt^3d^6)\), and hence the complexity of the whole algorithm is \(O(rlt^3d^6 + rlt^4)\).
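As an illustration of the last step (our own sketch, not the paper's implementation), a system of XOR-equations can be solved by Gauss-Jordan elimination over GF(2), encoding each equation as a bitmask of its variables plus a constant:

```python
def solve_gf2(equations, n):
    """Gauss-Jordan elimination over GF(2). Each equation is (vars, b):
    the XOR of the listed variables equals b. Returns a solution or None."""
    rows = []                                   # list of [pivot_bit, mask, b]
    for vs, b in equations:
        mask = 0
        for v in vs:
            mask ^= 1 << v                      # duplicated variables cancel
        for piv, rmask, rb in rows:             # reduce by existing pivots
            if mask & piv:
                mask ^= rmask
                b ^= rb
        if mask == 0:
            if b:
                return None                     # derived 0 = 1: unsatisfiable
            continue
        piv = mask & -mask                      # lowest set bit as pivot
        for row in rows:                        # clear pivot column elsewhere
            if row[1] & piv:
                row[1] ^= mask
                row[2] ^= b
        rows.append([piv, mask, b])
    sol = [0] * n                               # free variables default to 0
    for piv, rmask, rb in rows:
        sol[piv.bit_length() - 1] = rb
    return sol

print(solve_gf2([([0, 1], 1), ([1, 2], 1), ([0, 2], 0)], 3))  # [0, 1, 0]
```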
If we interface our detection algorithm with the algorithm of (Dyer and Richerby 2013), we obtain the following corollary.
**Corollary.** The class of constraint languages with a conservative Mal’tsev polymorphism is uniformly tractable.
5 Conservative majority constraints
Unlike conservative Mal’tsev polymorphisms, it is already known that conservative majority polymorphisms can be detected in polynomial time (Feder and Vardi 1998). The state-of-the-art algorithm, described in Section 2, has \(O(rlt^3d^6)\) time complexity (Bessiere et al. 2013). In this section, we will show that this algorithm can be greatly improved using the approach we described for conservative Mal’tsev polymorphisms.
As seen in Section 4, analyzing the structure of the indicator problem for languages of large arities can be tedious. Fortunately we need not do this twice, as languages with majority polymorphisms are \(2\)-decomposable: each constraint can be replaced by its binary projections without altering the solution set of the instance (Jeavons, Cohen, and Cooper 1998).
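Computing the 2-decomposition of a relation in extension amounts to projecting onto every pair of coordinates (a sketch with our own naming):

```python
from itertools import combinations

def binary_decomposition(relation):
    """Return the binary projections of a relation: for every pair of
    coordinates (i, j), the set of pairs (t[i], t[j]) over all tuples t."""
    arity = len(next(iter(relation)))
    return {(i, j): {(t[i], t[j]) for t in relation}
            for i, j in combinations(range(arity), 2)}

# For a 2-decomposable language, replacing each constraint by these
# binary projections does not change the solution set of the instance.
print(binary_decomposition({(0, 0, 0), (1, 1, 0)}))
```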
It is fairly straightforward to see that if a language \(\Gamma\) has a majority polymorphism, then the indicator problem of its 2-decomposition \(\Gamma_2\) is equivalent to the 2-decomposition of the indicator problem of \(\Gamma\). We denote by \(IP^{cmaj}(\Gamma_2)\) the indicator problem of order 3 of \(\Gamma_2\) with the additional constraints \(x_{v_1,v_1,v_2} \in \{v_1\}\), \(x_{v_1,v_2,v_1} \in \{v_1\}\), \(x_{v_2,v_1,v_1} \in \{v_1\}\) for every \(v_1, v_2 \in D\) and \(x_{v_1,v_2,v_3} \in \{v_1,v_2,v_3\}\) for every \(v_1, v_2, v_3 \in D\). The solutions of \(IP^{cmaj}(\Gamma_2)\) are exactly the conservative majority polymorphisms of \(\Gamma_2\).
Note that Observation 2 can be applied to \(IP^{cmaj}(\Gamma_2)\) since its proof only uses conservativity.
**Lemma 4.** If \(IP^{cmaj}(\Gamma_2)\) is GAC, the assignment
\[x_{u_1,u_2,u_3} \leftarrow u_i \text{ where } i \text{ is minimum such that } u_i \in D(x_{u_1,u_2,u_3})\]
is a solution.
**Proof.** We start by considering \(IP^{cmaj}(\Gamma_2)\) before GAC is applied. Let \(C_{R^*,t_1,t_2,t_3} = (S, R^*)\) be a constraint of \(IP^{cmaj}(\Gamma_2)\) with scope \((x_{u_1,u_2,u_3}, x_{v_1,v_2,v_3})\) such that both variables are active (i.e., \(|\{u_1,u_2,u_3\}| = 3\) and \(|\{v_1,v_2,v_3\}| = 3\), as otherwise the unary majority constraints would force the variable to be ground); note that \(t_k = (u_k, v_k)\) for every \(k\). Let \(R\) denote the relation of this constraint after arc-consistency, and suppose that there exists a pair \(i \neq j\) such that \(t = (u_i, v_j) \in R\). Let \(k\) be the index such that \(k \notin \{i,j\}\) and let \((t'_1,t'_2,t'_3)\) be the permutation of the tuples \(t, t_i, t_k\) such that \(t'_1[2] = v_1\), \(t'_2[2] = v_2\) and \(t'_3[2] = v_3\). Consider the constraint \(C_{R^*,t'_1,t'_2,t'_3} = (S', R')\). The second variable in \(S'\) is \(x_{v_1,v_2,v_3}\) and after arc-consistency the first variable will be fixed to the value \(u_i\). Therefore, by Observation 2, after arc-consistency the constraint \(C_{R^*,t_1,t_2,t_3} = (S, R)\) will contain the tuple \((u_i, v)\) for every \(v \in D(x_{v_1,v_2,v_3})\). From this we can deduce that, after arc-consistency, for every \(i\) we have either \((u_i, v_i) \in R\) or \((u_i, v) \in R\) for every \(v\) in the domain of \(x_{v_1,v_2,v_3}\). In particular, if \(i\) and \(j\) are the minimum indices such that \(u_i\) and \(v_j\) belong to the respective domains, \((u_i, v_j)\) always belongs to \(R\).
**Theorem 2.** Conservative majority polymorphisms can be detected in time \(O(rlt^4)\) in constraint languages with \(l\) distinct relations of arity at most \(r\) and containing at most \(t\) tuples.
**Proof.** The algorithm starts by assuming that a conservative majority polymorphism exists. We build \(Ip^{cmaj}_{\Gamma_2}\) and enforce GAC in time \(O(rlt^3)\). Since the resulting instance is GAC, we can use Lemma 4 to extract a candidate solution. If this solution is a majority polymorphism of \(\Gamma\) (which can be verified in time \(O(rlt^3)\)) the algorithm returns YES; otherwise it returns NO. The complexity of the whole procedure is \(O(rlt^4)\).
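The final verification step (checking that a candidate operation is indeed a majority polymorphism of \(\Gamma\)) can be sketched in Python. The tie-breaking choice in `maj` and the toy two-element language below are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def maj(a, b, c):
    """A conservative majority operation: returns the repeated value,
    and (arbitrarily) the first argument when all three differ."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    return a  # conservative: the result is always one of {a, b, c}

def is_polymorphism(op, relations):
    """Check that applying `op` coordinate-wise to every triple of
    tuples of each relation yields a tuple of that relation.
    Cost is O(r * l * t^3) for l relations of arity <= r with <= t
    tuples each, matching the verification step of Theorem 2."""
    for rel in relations:
        tuples = set(rel)
        for t1, t2, t3 in product(rel, repeat=3):
            if tuple(op(a, b, c) for a, b, c in zip(t1, t2, t3)) not in tuples:
                return False
    return True

# A toy binary language on D = {0, 1}: the order <= and equality.
leq = [(0, 0), (0, 1), (1, 1)]
eq  = [(0, 0), (1, 1)]
print(is_polymorphism(maj, [leq, eq]))  # True
```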
This time bound improves on that of (Bessiere et al. 2013) by a factor of \(d^6\). Besides, the time complexity of our algorithm is roughly that of checking if a given conservative majority operation is a polymorphism of \(\Gamma\), so there is little room for improvement.
6 Conclusion
Using a detailed analysis of the indicator problem for conservative minority polymorphisms, we have designed a polynomial-time algorithm for detecting conservative Mal’tsev polymorphisms in arbitrary constraint languages, and obtained as a side result a greatly improved algorithm for detecting conservative majority polymorphisms.
As noted in the introduction, our results imply a uniform algorithm for constraint languages with a conservative Mal’tsev polymorphism. Motivated by Observation 1, we make the following conjecture.
**Conjecture 1.** There exists a uniform algorithm for constraint languages with a Mal’tsev polymorphism, and the meta-problem is decidable in polynomial time.
The techniques we have developed in this paper make essential use of the fact that we are looking for conservative polymorphisms, and are unlikely to be sufficient to prove Conjecture 1 in its full generality. New ideas are needed, and it may be interesting to see if the algorithm from (Dyer and Richerby 2013) can be uniformized by using a different notion of compact representation of solution sets that only requires the promise that a Mal’tsev polymorphism exists.
SBISAF: A Service-Oriented Business and Information Systems Alignment Method
Aurelijus MORKEVICIUS¹, Saulius GUDAS², Darius SILINGAS³
¹ Kaunas University of Technology, Faculty of Informatics, Information Systems Department
Studentų 50-313a, LT-51368 Kaunas, Lithuania
² Vilnius University, Kaunas Faculty of Humanities
Muitinės 8, LT-44280 Kaunas, Lithuania
³ Vytautas Magnus University, Faculty of Informatics, Department of Applied Informatics
Vileikos 8-409, LT-44404 Kaunas, Lithuania
e-mail: aurelijus.morkevicius@stud.ktu.lt, gudas@vukhf.lt, darius.silingas@gmail.com
Received: February 2012; accepted: May 2013
Abstract. This paper presents a new approach for business and information systems (IS) alignment consisting of a framework, a metamodel, a process, and tools for implementing it in practice. The purpose of the approach is to fill the gap between the existing conceptual business and IS alignment frameworks and the empirical business and IS alignment methods. The suggested approach is based on SOA, GRAAL, and enterprise modeling techniques such as TOGAF, DoDAF, and UPDM. The proposed approach is applied to four real-world projects. Both the application results and a small example are provided to validate the suitability of the approach.
Keywords: enterprise architecture, enterprise modeling, business and IS alignment, SOA, service provisioning.
1. Introduction
Modern enterprises are establishing process management as a prerequisite for successful planning and monitoring of process performance. Constant monitoring of process performance helps to identify bottlenecks and their causes, which are usually a workflow, software, or hardware malfunction, or simply the lack of automation at a certain activity. This means that the modern enterprise faces the task of aligning its business needs with software applications such as ERPs. The alignment of business and information systems (IS) stands in contrast to what is often experienced in organizations: information systems and business professionals are unable to bridge the gap between themselves. This gap generally results in expensive IS that do not provide the expected return on investment (ROI). The most common way of identifying bottlenecks in business and IS alignment is Enterprise Architecture (EA), which is a blueprint for how an organization achieves its current and future business objectives using information technologies (IT) (Dahalin et al., 2011). For this reason, enterprise architecture programs are established in enterprises wherein the enterprise architects seek to align enterprise processes and infrastructure with their supporting information systems (Wegmann et al., 2005).
Business and IS alignment has been studied in multiple distinct research areas, such as alignment via governance, alignment via communication, and alignment via architecture (Chen, 2008), each being the subject of a different branch of science. Alignment via architecture is further classified into alignment via software architecture and alignment via enterprise architecture. Alignment via enterprise architecture utilizes enterprise modeling, model analysis, and design techniques, and is the focus of our research.
Enterprise Architecture has been a hot topic since 1987, when the first EA framework was introduced by Zachman (1987). At first, EA was not widely applied in practice due to the lack of modeling languages and tools. Two decades passed before the EA movement was reinforced by the successful adoption of the Unified Modeling Language (UML) (OMG, 2007) and the Model-Driven Architecture (MDA) (OMG, 2003). There have been multiple attempts to apply UML to enterprise architecture modeling (Dalgarno and Fowler, 2008), but many enterprise architects found it too complicated for solving their domain-specific problems (Silingas and Butleris, 2009a). However, the versatility of UML led to the appearance of multiple new modeling languages such as the Unified Profile for MODAF and DoDAF (UPDM), the Systems Modeling Language (SysML), the Service Oriented Architecture Modeling Language (SoaML), etc. (Morkevicius et al., 2010). UML and its compatibility with its extensions allowed models based on different languages to be integrated, thus enabling the creation of large and versatile EA models in one repository (Silingas and Butleris, 2009b). This helped to solve a wide range of problems: business transformation into knowledge-based business, business and IT alignment, the computerization of business management tasks, etc. (Gudas, 2009). UML has also been adopted for other EA modeling techniques such as TOGAF (The Open Group, 2009), which has evolved significantly over the last years.
In this paper we focus on a subset of the business and IT alignment problem: the problem of optimally fitting business and information system architectures together, i.e., business and IS alignment. The purpose of our research is to propose a new approach for business and IS alignment based on the existing conceptual frameworks and theories, and on a study of the latest enterprise modeling techniques.
This paper is structured as follows: in Section 2, related work is analyzed; in Section 3, the proposed approach is presented; in Section 4, the experimental evaluation and application of the proposed approach are described; in Section 5, the achieved results, conclusions, and future work directions are presented.
2. Related Work
There are a number of business and IT alignment via architecture methods. All of them are applicable to business and IS alignment as well. One of the best-known methods is Guidelines Regarding Architecture Alignment (GRAAL). GRAAL is a conceptual framework providing a collection of concepts and relations among them. It is based on four simple dimensions: (i) system aspects, (ii) system aggregation, (iii) system process, and (iv) description levels, where the first three dimensions focus on the analysis of a system through its observable properties, composite structure, and life cycle, and the fourth one concerns the level of granularity. The goal of GRAAL is to derive operational guidelines for aligning IT architecture with business architecture (Van Eck et al., 2004). The GRAAL framework originated from another well-known alignment framework of Henderson and Venkatraman (1999) distinguishing two alignment dimensions, service provision and refinement. Other related frameworks are the Zachman (1987) framework and Kruchten’s (1995) 4 + 1 model.
The Systemic Enterprise Architecture Methodology (SEAM) business and IT alignment framework developed by Wegmann (2003) is grounded in General System Thinking (GST) (Weinberg, 1975) and living systems theory (Miller, 1995). The two main SEAM concepts used to express behavior and construction are the functional and organizational levels. The functional level represents the behavioral hierarchy and the organizational level represents the constructional hierarchy. Similarly to GRAAL, it is a conceptual framework.
There are also works extending and applying the GRAAL (Zarvic and Wieringa, 2006) and SEAM (Wegmann et al., 2005) frameworks. However, none of them provides a method to evaluate business and IS alignment in a particular EA model. In other words, all the frameworks described above are purely conceptual: they neither provide a process nor are they adapted for use with the most popular enterprise modeling languages, frameworks, and methods, contrary to our intent. Integration with empirical enterprise modeling methods such as DoDAF, MODAF, NAF, and TOGAF, and with modeling languages such as UML, BPMN, ArchiMate, EMM (Gudas et al., 2005), and UPDM, provides traceability between business and application models; however, these methods do not provide processes and tools for verifying whether business and IS are aligned. The study of the most widely used empirical methods gave us a solid background for our research. As a result, our proposed approach is based on an integrated metamodel developed on the basis of these methods. However, our goal is not a new modeling language; our goal is an approach suitable for use in combination with the majority of empirical enterprise modeling techniques.
The closest method to ours is the BITAM (Chen et al., 2005) method based on SOA. BITAM uses a twelve-step process for managing, detecting, and correcting misalignment at the architecture level. The method is an integration of two distinct analysis areas, (i) business analysis and (ii) architecture analysis, for aligning elements in three layers of a business system: (i) business strategy, (ii) business architecture, and (iii) IT architecture.
Most of the methods described in this section do not consider the non-functional aspect of the enterprise model. Quantitative evaluation of business and IS is also not a part of empirical enterprise modeling techniques; however, it is important for our research. The GRAAL framework defines a quality system aspect (Van Eck et al., 2004) classified into “for user” and “for developer” sub-aspects. The classification is close to the one used in ISO 9126 (ISO/IEC, 2004). ISO 9126 classifies software measurements into external metrics, internal metrics, and quality-in-use metrics. External and internal metrics are calculated by the software development team, and the quality-in-use metrics are usually acquired by the user. There are a number of other quantitative schemes to evaluate IS, business, and their interrelationship. A detailed study of parameters applying to various enterprise model elements and their interrelationships has been introduced in Gustafsson et al. (2009).
Another method for quantitative model-driven evaluation of the enterprise model has been proposed by Morkevicius et al. (2010). The method is based on the SysML parametric model defined in OMG (2008b) and is executable by the majority of enterprise modeling tools. The versatility of this method allows it to be used with any UML- and SysML-based domain-specific modeling language.
3. Integrated Method for Business and IS Alignment
In this section we describe the framework, metamodel, process and related techniques that the proposed method is based on.
3.1. SBISAF Framework
The framework defines four aspects of analysis for the business and IS alignment and utilizes the metamodel described in the next section of this paper.
Similarly to GRAAL, SBISAF defines two system aspects: service and quality. The service aspect is behavioral in nature; in SBISAF it defines the environment of the service, including its provider, consumer, and the set of behaviors required for the service realization, thus we call it functional. The quality aspect is structural in origin; in SBISAF it defines quantitative characteristics of a service, thus we call it non-functional. Besides these two, we also define vertical and horizontal aspects of business and IS alignment. Vertical and horizontal dimensions in business and IS alignment were first mentioned by Labowitz and Rocansky (1997). In SBISAF we have identified the vertical and horizontal aspects by analyzing present-day enterprise modeling techniques. We consider verticality as the traces between two different abstraction layers in the enterprise model; in our case it is the traceability between business and IS architectures. Horizontality is considered as the relationships between the service provider and the service consumer at the same level of abstraction, for instance a graphical user interface (GUI) component directly accessed by the user and the web service providing the business logic. Without a detailed service contract, the vertical business and IS alignment cannot be derived.
We are defining four combinations of business and IS alignment aspects: vertical functional, horizontal functional, vertical non-functional and horizontal non-functional.
3.2. SBISAF Metamodel
According to the identified aspects of business and IS, we have clearly separated the concepts into those belonging to the business architecture and those belonging to the information system architecture. We have also identified that some of the concepts do not fit into either of them. For this reason we have used the concept of solution architecture, which originated from the MODAF architecture framework, where it describes the combination of systems and human resources (together grouped into resource configurations) used to implement business scenarios. As we have also defined resource configuration and human resource concepts that do not fit into the contents of the IS architecture, we are using the solution architecture concept to make a clear separation between the information system architecture with its constructs and the human resource and resource configuration concepts.
Inspired by Morkevicius and Gudas (2012), The Open Group (2009), and OMG (2009), we have defined a business service as a service provided by a participant, requested by another participant, and supported by zero or more application services. A participant in the metamodel is a logical business unit which can abstract any of the following: a human resource, application component, resource configuration, organization, department, etc. An application service is a service supporting one or more business services. An application service that does not support any business service is treated as redundant. An application component is an actor requesting one or more application services from another application component.
We describe a resource configuration concept to group interacting human resource and application component actors together. It is common, especially in business architectures, to show that an organizational resource consumes a business service. However, we consider this a bad practice resulting in an inexecutable model. As we have defined it, a human resource is the organizational resource in the solution architecture used as an actor of the IS. This means that the human resource performs one or more tasks with the help of the IS; however, it cannot directly consume the application service by itself. It follows that a human resource requires a proxy application component to indirectly access the application service in order to complete its task. A resource configuration is used to group both proxy application components and human resources.
From the behavioral point of view, the business service is realized by a business process and the application service is realized by an application function. To follow the same pattern, a vertical trace is added between the business process and the application function. This trace is called implements.
In addition to the already defined concepts, we use a measurement concept, which defines non-functional performance characteristics of a service. We define both a business measurement and an application measurement. The business measurement is constrained to measure only the business service, and the application measurement is constrained to measure only the application service. The connection between the two is called influences, as business measurements are directly dependent on the measurements of information systems. Another addition is the service level agreement (SLA) concept that a service must conform to. Conformance to the SLA is derived from the measurements of the service: if the measured value of the service is within the limits defined by the SLA, the service is considered to conform to that SLA.
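The metamodel concepts described above (services, measurements, and SLA conformance) can be sketched as a small data model. This is a hypothetical Python encoding for illustration only; the class and attribute names are our own, and the SLA is simplified to a single upper bound.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Measurement:
    name: str
    value: float
    sla_limit: Optional[float] = None   # SLA the measured service must conform to

@dataclass
class Service:
    name: str
    measurements: List[Measurement] = field(default_factory=list)

@dataclass
class BusinessService(Service):
    pass

@dataclass
class ApplicationService(Service):
    supports: List[BusinessService] = field(default_factory=list)

def conforms_to_sla(svc: Service) -> bool:
    """A service conforms when every measured value stays within the
    limit defined by its SLA (here simplified to an upper bound)."""
    return all(m.sla_limit is None or m.value <= m.sla_limit
               for m in svc.measurements)

bs = BusinessService("CustomerSupportBS")
asvc = ApplicationService(
    "CustomerSupportAS",
    measurements=[Measurement("response_time_ms", 420.0, 500.0)],
    supports=[bs])
print(conforms_to_sla(asvc))  # True
```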
Based on the observations made and the metamodel built (Fig. 1), we are further defining four aspects of the business and IS alignment in details.
3.3. Functional Business and IS Alignment
In Morkevicius and Gudas (2012), vertical functional business and IS alignment is defined as the alignment of business and application services where the application service must serve the business service by supporting the business processes performed, so that the business service can be delivered to a service consumer (Fig. 2a). Horizontal functional alignment of business and IS is defined as a detailed interaction between two information systems where service requests from the requesting system have to be satisfied by one or more application services of the providing system (Fig. 2b).
To check whether the vertical business and IS aspect is aligned, we provide the following list of business rules. The alignment is considered achieved only if all of the following rules are satisfied for every instance of a particular element:
1. An application service has to support one or more business services.
2. An application function has to implement one or more business processes.
3. An application component providing the application service has to implement one or more participants.
4. A service provider (either a participant in the case of a business service or an application component in the case of an application service) should perform the activities (either a business process in the case of a participant or an application function in the case of an application component) required for the service realization.
For the horizontal functional alignment to be achieved a set of rules needs to be satisfied:
1. All application service requests must be satisfied.
2. Requester and provider service interfaces must be compatible.
3. Resource configuration contents should include at least one application component if it is requesting or providing an application service.
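The first rule of each list above can be checked mechanically over a model. The following Python sketch illustrates this on a hypothetical toy model; the service names, the model representation, and the subset of rules covered are illustrative assumptions, not part of SBISAF.

```python
# Hypothetical toy model: names and links are illustrative only.
model = {
    "app_services": {"CustomerSupportAS", "OrderProcessingAS"},
    "supports":     {("CustomerSupportAS", "CustomerSupportBS")},  # AS -> BS
    "requests":     {"CustomerSupportAS", "PaymentAS"},            # requested AS
}

def check_vertical_functional(m):
    """Vertical rule 1: every application service must support at least
    one business service; unsupported services are reported as redundant."""
    supported = {a for a, _ in m["supports"]}
    return sorted(m["app_services"] - supported)

def check_horizontal_functional(m):
    """Horizontal rule 1: every application service request must be
    satisfied by an existing provider."""
    return sorted(m["requests"] - m["app_services"])

print(check_vertical_functional(model))    # ['OrderProcessingAS'] -> redundant
print(check_horizontal_functional(model))  # ['PaymentAS'] -> unsatisfied request
```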
3.4. Non-Functional Business and IS Alignment
Our proposed approach emphasizes the impact of IS parameters on the parameters of the business. For instance, we may think of measuring business process performance by using a control chart. At the point in time when the business process starts getting support from the IS, its performance drops. As soon as employees learn to use the IS, the performance of the process starts rising quickly. We can see the direct impact of the IS on the outcome of a process. The influence may be higher, lower, or even critical if a failure of the IS stops a process from delivering. It depends on many circumstances within the enterprise.
The dependency of business measurements on the measurements of the IS is what we call vertical non-functional business and IS alignment (Fig. 2c). It expands the service concept by adding performance parameters that we further simply call measurements. A single service can be measured by multiple measurements. We separate two different kinds of measurements: the application measurement, used to define the performance of the application service, and the business measurement, used to define the performance of the business service. Each application measurement attached to an application service must influence at least one business measurement. The constraint requires that the measured business service be supported by the measured application service.
To check whether business and IS are vertically non-functionally aligned, we propose the following rule: an application measurement has to influence at least one business measurement.
The values that the application service is required to achieve are specified in the SLA between the requester and the provider of a service. SLAs can be defined in both the business and the IS architectures. We consider that the achievement of the SLA at the application layer implies the achievement of the SLA at the business layer if the vertical non-functional alignment is achieved. The achievement of the SLA constitutes the horizontal non-functional business and IS alignment (Fig. 2d). We define the following rule to check whether the model is horizontally non-functionally aligned: a measurement value should not exceed the limits defined in the SLA.
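The two non-functional rules can likewise be checked mechanically. The sketch below uses hypothetical measurement names, influence links, and SLA limits (lower and upper bounds) chosen purely for illustration.

```python
# Illustrative sketch of the two non-functional alignment rules.
app_measurements = {"response_time_ms": 420.0, "uptime_pct": 99.95}
influences = {"response_time_ms": ["order_fulfilment_time"]}  # app -> business
sla_limits = {"response_time_ms": (0.0, 500.0), "uptime_pct": (99.9, 100.0)}

# Vertical rule: each application measurement must influence at least
# one business measurement.
uninfluential = [m for m in app_measurements if not influences.get(m)]

# Horizontal rule: each measurement value must stay within its SLA limits.
violations = [m for m, v in app_measurements.items()
              if not (sla_limits[m][0] <= v <= sla_limits[m][1])]

print(uninfluential)  # ['uptime_pct'] -> vertically unaligned
print(violations)     # []             -> all SLAs met
```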
To conclude, we have defined four aspects of business and IS alignment (Table 1).
Table 1
<table>
<thead>
<tr>
<th>Alignment level</th>
<th>Requirements</th>
</tr>
</thead>
<tbody>
<tr>
<td>Vertical functional alignment</td>
<td>Each IS service must support at least one business service. Otherwise it is redundant.</td>
</tr>
<tr>
<td>Horizontal functional alignment</td>
<td>All service requests must be satisfied.</td>
</tr>
<tr>
<td>Vertical non-functional alignment</td>
<td>The IS measurements that influence the business measurements must be supplied.</td>
</tr>
<tr>
<td>Horizontal non-functional alignment</td>
<td>All SLAs for each of the services must be met.</td>
</tr>
</tbody>
</table>
3.5. A Process of Verifying Business and IS Alignment
The method is incomplete without a process of application and tools. In this section we define the process of applying the proposed business and IS alignment method.
Functional and non-functional alignment aspects play a similar role in the IS engineering and reengineering processes; both need to be achieved before the development of the IS is started. We recommend performing functional and non-functional alignment checks as soon as the high-level system design (as a part of an EA program) is developed. Tests for the horizontal non-functional alignment additionally require test data and are much more difficult to perform, especially before or in the early phases of the information system engineering process. Thus we propose to perform horizontal non-functional alignment tests as early as possible, but no later than the IS testing phase.
Looking more broadly at the enterprise information systems architecture, where more than one system interacts with others, both the functional and non-functional tests should be performed in every iteration of the EA development method to detect possible bottlenecks that may lead to the reengineering of one or more information systems.
The process of aligning business and IS starts by defining the model scope we want to check, in order to reduce complexity compared to checking the whole model at once. We also need to make sure that the SBISAF concepts are well established before proceeding. This is important for both the functional and non-functional alignments; however, it may be performed once before checking the functional business and IS alignment. When these two prerequisites are complete, we start with the evaluation of the horizontal functional alignment. The whole process is shown in Fig. 3. If the model is horizontally functionally aligned, we proceed with the vertical functional alignment check. Whenever we find issues making the model unaligned, we identify the source of each issue and resolve it. We restart the loop to check whether all issues have been resolved and no new issues are found. If no issues are found, we proceed to the verification of the vertical non-functional alignment. Again, whenever we find issues making the model unaligned, we identify the source of each issue and resolve it. If no new issues are found, the model is treated as completely vertically non-functionally aligned, and we continue with the horizontal non-functional alignment check, which is done similarly. However, it includes two additional steps: (i) instantiation of the service structure and (ii) acquisition of measurement values.
By having multiple business and application services and measuring their alignment, we can compute various statistics such as the percentage of overall alignment, the percentage of functional alignment, etc.
4. Experimental Evaluation
Having defined the process, we also need to define tools. We encourage using UML-based standards for enterprise modeling. Our choice has been influenced mostly by the number of UML-based tools on the market. The decision to use UML led us to realize SBISAF in UML: we have developed a UML profile for SBISAF. The verification of the alignment of business and IS is performed using the Object Constraint Language (OCL). For instance, checking whether an application function implements at least one business process is performed by executing the following OCL expression:
```
context ApplicationFunction
inv: let i : Set(UML2_Metamodel::Dependency) =
         self.clientDependency->select(e | e.oclIsTypeOf(SBISAF_Profile::Implements)) in
     let p : Bag(UML2_Metamodel::NamedElement) =
         i.supplier->select(e | e.oclIsTypeOf(SBISAF_Profile::BusinessProcess)) in
     not p->isEmpty()
```
A set of executable OCL expressions has been developed to check whether all four aspects of the conceptual business and IS alignment are modeled correctly. Together with the SBISAF profile, they form the contents of a plug-in for the MagicDraw CASE tool. For the experimental evaluation we use the implemented plug-in with the MagicDraw UPDM tool. Our model is based on the UPDM modeling standard with slight extensions required by SBISAF and not supported by the UPDM standard. For instance, our extension profile allows the classification of services and measures into business and IS domains and provides relationships between the domains, such as supports and influences. For this reason we have used the mapping between the SBISAF profile and the UPDM language defined in Morkevicius and Gudas (2012). Additionally, we have expanded the mapping table with non-functional business and IS alignment concepts (Table 2).
4.1. Case Study
For the case study we have modified the e-shop scenario defined in Morkevicius and Gudas (2012) to show the applicability of SBISAF. The retail unit in the e-shop consists
Table 2
SBISAF profile to UPDM elements mapping
<table>
<thead>
<tr>
<th>SBISAF element</th>
<th>UPDM element</th>
</tr>
</thead>
<tbody>
<tr>
<td>Business Service</td>
<td>Service Access provided in the Operational Viewpoint</td>
</tr>
<tr>
<td>Application Service</td>
<td>Service Access provided in the Systems Viewpoint</td>
</tr>
<tr>
<td>Participant</td>
<td>Node Role typed by a Performer</td>
</tr>
<tr>
<td>Human Resource</td>
<td>Resource Role typed by a Person Type or Organization Type</td>
</tr>
<tr>
<td>Application Component</td>
<td>Resource Role typed by Software</td>
</tr>
<tr>
<td>Resource Configuration</td>
<td>Resource Role typed by a Capability Configuration</td>
</tr>
<tr>
<td>Consumes</td>
<td>Request port owned by a Performer or Software and typed by a Service Access</td>
</tr>
<tr>
<td>Provides</td>
<td>Service port owned by a Performer or Software and typed by a Service Access</td>
</tr>
<tr>
<td>Business Process implemented by Application Function</td>
<td>Implements</td>
</tr>
<tr>
<td>Participant implemented by Resource</td>
<td>Implements</td>
</tr>
<tr>
<td>Business Measurement</td>
<td>Property owned by Service Access provided in the Operational Viewpoint</td>
</tr>
<tr>
<td>Application Measurement</td>
<td>Property owned by Service Access provided in the Systems Viewpoint</td>
</tr>
<tr>
<td>Influences</td>
<td>–</td>
</tr>
<tr>
<td>SLA</td>
<td>Default value for the Property</td>
</tr>
<tr>
<td>Conforming Service</td>
<td>Default value for the Property owned by a Service Access</td>
</tr>
<tr>
<td>Service Architecture</td>
<td>Logical Architecture. Used to bind business measurements and application measurements to the SysML parametric model based calculations.</td>
</tr>
</tbody>
</table>
of the following participants: sales unit, tech. support unit, supplier, supply unit, and post unit. It also interacts with an external participant, the customer. We have built the DoDAF operational resource flow description (OV-2) model to show the logical architecture of the e-shop retail unit (Fig. 4a). Briefly, this model shows resource flows (in our case, information flows only) between business units (participants) in a particular context; in this diagram the context is the retail unit.
Participants in the example model provide and request business services. They communicate through service channels. For example, in Fig. 4b the customer and the tech. support unit are participants: the square on the border of the customer is a request port requesting the customer support business service, and the square on the border of the tech. support unit is a service port providing that service. Participants also exchange information: an issue goes to the tech. support unit and a resolution goes back to the customer.
This modeling approach to services is defined in the SoaML specification (OMG, 2008a). A subset of SoaML concepts is used in the UPDM modeling language (OMG, 2009).
We have also built the DoDAF systems interface description (SV-1) model to show how human resources interact with the application components used in business (Fig. 6). In the systems layer we have used application services and defined provided and required interfaces. For instance, in Fig. 5 the tech. support application component provides the customer support application service, which realizes multiple service interfaces. A service interface provision is shown as a lollipop on the service port. There are also two request ports, one attached to the tech. support staff and the other to the customer human resource. A service request is shown as a socket on the request port. Both request ports request the same service, but a closer look reveals that they request different service interfaces.
4.2. Functional Business and IS Alignment Evaluation
First, we have checked the model by the horizontal functional alignment rule: all required interfaces are either provided in the particular context or delegated to the outside.
Looking at the systems interface description diagram (Fig. 6), note that the service channel between the *e-shop* and *customer* is highlighted. This is how the MagicDraw tool indicates the failure of an executed OCL rule. The tool also provides a list of failures in the validation results pane (Fig. 8). In Fig. 6 the horizontal functional alignment rule failed because of incompatible interfaces at the service and request ports: in this particular case, the return service requested by the customer is not provided by any of the information systems. By identifying this business and application alignment gap in the e-shop enterprise, we identified the model as not horizontally functionally aligned. As a resolution, we have integrated the return service into the model, providing all required relationships to make the model horizontally functionally aligned.
Second, to make sure the business and IS are vertically functionally aligned, we have validated the model against the following rule: each application service supports at least one business service.
According to the SBISAF metamodel, business services are supported by application services. We have built a matrix of services to analyze whether this holds in our case (Fig. 7). We have identified that the return application service does not support any of the business services in the architecture. A detailed analysis indicated that no business processes realize the return service. As a resolution, the missing relationships have been added to the model. With the model functionally aligned, we continue with the check of the non-functional business and IS alignment.
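Outside of OCL, the two functional alignment rules can be sketched as simple set checks over a toy model. The function and service names below are illustrative, not taken from the case study or the SBISAF profile:

```python
# Hypothetical sketch of the two functional alignment rules.
# Horizontal: every required interface is provided in the context
# (or delegated to the outside). Vertical: every application service
# supports at least one business service.

def horizontally_aligned(required, provided, delegated):
    """All required interfaces are provided or delegated."""
    return all(i in provided or i in delegated for i in required)

def vertically_aligned(supports, app_services):
    """Each application service supports at least one business service."""
    return all(supports.get(s) for s in app_services)

# Toy e-shop-like model (illustrative names only).
required = {"order", "support", "return"}
provided = {"order", "support"}          # "return" is missing
delegated = set()
supports = {"order": ["retail"], "support": ["retail"], "return": []}

print(horizontally_aligned(required, provided, delegated))           # False
print(vertically_aligned(supports, ["order", "support", "return"]))  # False
```

Both checks fail here for the same reason as in the case study: a return service that nothing provides or realizes.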
4.3. Non-Functional Business and IS Alignment Evaluation
With a functionally aligned architecture, we first check the vertical non-functional alignment to make sure the relationships between business and application measurements are established, so that the horizontal non-functional alignment check can be performed. As a prerequisite for the check, we have defined an availability measurement for each service in the e-shop enterprise. We have also defined an SLA value for each of the measurements. Conceptually, a service complies with its SLA; in the model, however, we define SLA values as the control limits for each of the measurements we want to check.
For the specification of influences between measurements we use an approach inspired by Jonson et al. (2007). It uses an influence diagram, a network for modeling uncertain variables and decisions, consisting of a directed graph $G = (N, A)$, where $N$ is the set of nodes and $A$ the set of arcs. For each node, probability distributions are represented in conditional probability matrices. We consider each measurement as a node and each influences relationship as an arc. By creating matrices (Fig. 8) for all chance nodes, a Bayesian network is established, as shown in Fig. 11. Note that this is the instantiated service model, so the influence relationships between measurements are not displayed; instead, the links between service instances are shown.
For the calculations we use a model-driven approach described in Morkevicius et al. (2010). We have built a SysML parametric diagram for the retail business service and instantiated the service structure (Fig. 11). By executing the parametric diagram in the MagicDraw tool, we have calculated that the retail service availability is 0.774% (Fig. 11), whereas the minimal expected value defined in the SLA is 0.97%. This means that horizontal non-functional alignment in the scope of the retail architecture model is not achieved with this particular configuration of services.
Horizontal non-functional alignment is achieved if the availability values of services match the minimal expected values at a particular point in time. For instance, if the availability of the customer support service is expected to be not less than 0.94%, the value of 0.93% at the present point (Fig. 10) is less than expected. This means the SLA has been broken and the model is not horizontally non-functionally aligned.
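The SLA check above boils down to comparing a computed availability against a threshold. As a rough sketch (not the paper's SysML parametric model), the availability of a composite service is often computed as the product of the availabilities of the services it depends on; the structure and all numbers below are illustrative:

```python
# Hedged sketch: series composition of service availabilities,
# one common assumption behind parametric availability models.
# The dependent services and their values are invented for illustration.

def composite_availability(parts):
    """Availability of a service that needs all its parts available."""
    a = 1.0
    for p in parts:
        a *= p
    return a

parts = [0.93, 0.95, 0.94, 0.93]   # availabilities of dependent services
retail = composite_availability(parts)
sla = 0.97

print(round(retail, 3))   # 0.772: composite availability
print(retail >= sla)      # False: SLA broken, not aligned
```

Even when every dependent service looks close to its own target, the series product can fall well below the composite SLA, which is why the parametric calculation over the whole service structure matters.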
To conclude the example: in the scope of the retail business service we have discovered multiple issues preventing complete alignment of the business and IS. The functional alignment issues have been fixed; the non-functional alignment issues, however, require either changes to the SLA or a deeper investigation of their causes and their impact on the business. It is also recommended to perform more checks with alternative sources of data. Finally, all discovered issues need to be resolved and the model revalidated.
4.4. Application of SBISAF
We have applied SBISAF in four real-world industry projects: A, B, C, and D. All projects are under non-disclosure agreements, so we provide only the results of the business and IS alignment checks. All are UPDM projects; three contain around 100,000 elements and one contains around 300,000 elements. Project A consists of 9 business services and 8 application services; project B of 2 business services and 10 application services; project C of 24 business services and 86 application services; and project D of 8 business services and 5 application services. The results below contain the number and percentage of executions and violations of the rules for the SBISAF application on all four projects (Fig. 12).
Based on the results we can draw some early conclusions: the largest number of rules is executed during the horizontal non-functional alignment checks (45% of all executed rules), and the smallest during the vertical non-functional alignment checks (4%). In comparison, the number of violations is highest for the horizontal functional alignment checks (46%) and lowest for the vertical non-functional alignment checks (3% combined).
The violated-per-executed-rules ratio is highest for the horizontal functional alignment aspect, identifying it as the most vulnerable aspect of business and IS alignment: violations in it are the most likely to be detected. The most solid aspect, with the lowest ratio, is the vertical non-functional one. Comparing the functional versus non-functional alignment aspects, 51% of all executed rules and 59% of all violated rules belong to the functional aspect.
Comparing the projects by the violated-per-executed-rules ratio, project C gets the highest score, 0.093. The scores for the other projects are: A = 0.015, B = 0.007, D = 0.016. A ratio of 0 shows that no violations are detected and the model is completely aligned; the lower the value, the higher the level of alignment achieved in the enterprise model. The target ratio value is a choice for each particular organization.
We have fixed the errors detected during the first iteration of business and IS alignment checks and performed a second iteration. We have calculated the overall violated-per-executed-rules ratio for each business and IS alignment aspect for both iterations (Fig. 13). The overall ratio during the second iteration (0.003), compared to the first iteration (0.061), shows a decrease of 0.058 (about 0.06). This allows us to conclude that iterative application of SBISAF increases the business and IS alignment level in the enterprise model until the targeted level is reached.
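The ratio used throughout this section is a simple quotient; the sketch below shows the arithmetic behind the two reported overall values. The rule counts are hypothetical, since the per-project counts are not disclosed:

```python
# Hedged sketch of the violated-per-executed-rules ratio.
# The counts below are illustrative stand-ins that reproduce the
# reported overall ratios of 0.061 and 0.003.

def violation_ratio(violated, executed):
    """Ratio of violated to executed rules; 0 means fully aligned."""
    return violated / executed if executed else 0.0

iteration1 = violation_ratio(61, 1000)    # hypothetical counts -> 0.061
iteration2 = violation_ratio(3, 1000)     # hypothetical counts -> 0.003
print(round(iteration1 - iteration2, 3))  # 0.058, the reported decrease
```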
5. Conclusions and Future Works
The analysis of existing business and IS alignment methods revealed that multiple different methods are available. We have also identified that the majority of the existing methods are conceptual: they do not provide an alignment process and are not applicable to existing enterprise modeling techniques. To bridge this gap, we have proposed a business and IS alignment method consisting of four aspects of alignment (the framework), a metamodel, and a process. We have implemented the method using UML in the MagicDraw CASE tool and have shown its integration with the existing and widely used UPDM enterprise modeling language.
The advantages of SBISAF over other methods are the alignment process, repeatability, the metamodel, and applicability to multiple enterprise modeling techniques. As disadvantages, we have identified the lack of a quantitative approach to measuring business and IS alignment, and the focus on the IS architecture alone, ignoring other domains of IT such as the technology infrastructure.
The proposed approach has already been applied in four real-world projects. While applying SBISAF we have determined the following:
1. The horizontal functional business and IS alignment aspect is the most vulnerable (failures are the most expected).
2. The vertical non-functional business and IS alignment aspect is the most stable (failures are the least expected).
3. The second iteration of applying SBISAF reduced the ratio of violated per executed rules by an average of 0.06. This allows us to conclude that iterative application of SBISAF increases the business and IS alignment level in the enterprise model.
4. Depending on the experience of the enterprise architect or any other role resolving business and IS misalignments, the number of alignment iterations required to reach the desired level of business and IS alignment may vary.
Based on these conclusions, we have identified the following improvement areas for SBISAF: (i) propose modifications to existing enterprise modeling methods to provide views for a better business and IS alignment specification, (ii) describe a way to calculate a business and IS alignment index, and (iii) apply the method to more real-world EA projects.
References
OMG (2008a). *Service Oriented Architecture Modeling Language (SoaML)*. Needham, MA, USA.
A. Morkevičius received his master's degree in business informatics in 2009. He has been a doctoral student at the Information Systems Department (ISD) of Kaunas University of Technology since 2009. He is currently working as a solutions architect at No Magic Europe, the vendor of the modeling platform MagicDraw. He is also one of the head architects of the UPDM standard at the Object Management Group (OMG). His research interests include enterprise architecture frameworks, enterprise modeling, enterprise model analysis and simulation, and business and IT alignment.
S. Gudas is a doctor habilitatus of computer science, associate professor at the Information Systems Department of Kaunas University of Technology and at the Kaunas Faculty of Humanities of Vilnius University, and dean of the Kaunas Faculty of Humanities of Vilnius University. His research interests include computer-aided information systems engineering methods and tools, and enterprise modelling for information systems engineering.
D. Šilingas received his PhD in informatics from Vytautas Magnus University in 2005. He is currently working as head of the Solutions Department at No Magic Europe, the vendor of the modeling platform MagicDraw. Darius is also a part-time associate professor at Vytautas Magnus University and a visiting lecturer at ISM Executive School. His research interests include model-based systems engineering and business process management.
SBISAF: A Service-Oriented Business and Information Systems Alignment Method
Aurelijus MORKEVIČIUS, Saulius GUDAS, Darius ŠILINGAS
The paper presents a new business and information systems alignment method consisting of a framework, a metamodel, a process, and tools implementing the method. The goal of the method is to fill the gap between conceptual business and IS alignment frameworks and empirical enterprise architecture modeling and analysis methods. The presented method is based on SOA, GRAAL, and enterprise architecture modeling methods such as TOGAF, DoDAF, and UPDM. The method has been applied to four real-world projects. The paper presents the results of applying the method, together with an example.
Interactive Learning of Parsers from Weak Supervision
Luke Zettlemoyer
with Luheng He, Kenton Lee, Mike Lewis, Julian Michael
Interpreting Language
Sentence
Semantic Parser
Meaning Representation
Executor
Response
Semantic Parsing: QA
How many people live in Seattle?
Semantic Parser
SELECT Population FROM CityData where City=="Seattle";
Executor
620,778
[Wong & Mooney 2007],
[Zettlemoyer & Collins 2005, 2007],
[Kwiatkowski et al. 2010, 2011],
[Liang et al. 2011], [Cai & Yates 2013],
[Berant et al. 2013, 2014, 2015],
[Kwiatkowski et al. 2013],
[Reddy et al. 2014, 2016]
Go to the third junction and take a left
\[
\begin{array}{l}
(\text{do-seq} \\
\quad (\text{do-n-times } 3 \\
\quad\quad (\text{do-until} \\
\quad\quad\quad (\text{junction current-loc}) \\
\quad\quad\quad (\text{move-to forward-loc}))) \\
\quad (\text{turn-right}))
\end{array}
\]
Semantic Parsing: Instructions
Chen & Mooney 2011
Matuszek et al. 2012
Artzi & Zettlemoyer 2013
Mei et al. 2015
Somerset Maugham was a British playwright, novelist and short story writer.
Knowledge Base (KB)
Semantic Parser
<table>
<tbody>
<tr>
<td>S. Maugham</td>
<td>Nationality</td>
<td>United Kingdom</td>
</tr>
<tr>
<td>S. Maugham</td>
<td>Profession</td>
<td>Novelist</td>
</tr>
</tbody>
</table>
[Krishnamurthy and Mitchell; 2012, 2014][Choi et al., 2015]
Semantic Parsing: Complex Structure
How many people live in Seattle
Latent
620,778
Lots of Different Applications
We are doing semantic analysis for:
- Visual Semantic Role Labeling [Yatskar et al, 2016]
- Visual Question Answering [FitzGerald et al, in prep]
- Language to Code [Lin et al, in prep]
- Entity-entity sentiment [Choi et al, 2016]
- Understanding Cooking Recipes [Kiddon et al, 2016]
- Zero-shot Relation Extraction [Levy et al, in review]
- Interactive Learning for NLIDBs [Iyer, et al, in review]
Challenge: typically gather data and learn model from scratch in each case…
Understanding Cooking Recipes
Amish Meatloaf (http://allrecipes.com/recipe/amish-meatloaf/, recipe condensed)
Ingredients
2 pounds ground beef
2 1/2 cups crushed butter-flavored crackers
1 small onion, chopped
2 eggs
3/4 cup ketchup
1/4 cup brown sugar
2 slices bacon
Preheat the oven to 350 degrees F (175 degrees C).
In a medium bowl, mix together ground beef, crushed crackers, onion, eggs, ketchup, and brown sugar until well blended.
Press into a 9x5 inch loaf pan.
Lay the two slices of bacon over the top.
Bake for 1 hour, or until cooked through.
Approach: unsupervised learning for actions and object flow
Open Question:
• Can we build an off-the-shelf parser that would help here?
[Kiddon et al 2015, 2016]
Towards Broad Coverage Semantic Parsing
• Can we crowdsource semantics?
• Train with latent syntax?
• Build fast and accurate parsers?
• Actively select which data to label?
Semantic Role Labeling (SRL)
who did what to whom, when and where?
- They (Agent)
- increased (Predicate)
- the rent (Patient)
- drastically (Manner)
- this year (Time)
- Defining a set of roles can be difficult
- Existing formulations have used different sets
Existing SRL Formulations and Their Frame Inventories
**FrameNet**
1000+ semantic frames, roles (frame elements) shared across frames
**PropBank**
10,000+ frame files with predicate-specific roles
**Frame: Change_position_on_a_scale**
This frame consists of words that indicate the change of an Item's position on a scale (the Attribute) from a starting point (Initial_value) to an end point (Final_value). The direction (Path) …
**Lexical Units:**
..., reach.v, rise.n, rise.v, rocket.v, shift.n, …
**Roleset Id:** rise.01, go up
- **Arg1-:** Logical subject, patient, thing rising
- **Arg2-EXT:** EXT, amount risen
- **Arg3-DIR:** start point
- **Arg4-LOC:** end point
- **Argm-LOC:** medium
Unified Verb Index, University of Colorado [http://verbs.colorado.edu/verb-index/](http://verbs.colorado.edu/verb-index/)
PropBank Annotation Guidelines, Bonial et al., 2010
FrameNet II: Extended theory and practice, Ruppenhofer et al., 2006
Our Annotation Scheme
Given sentence and a verb:
They *increased* the rent this year.
Step 1: Ask a question about the verb:
Who increased something?
Step 2: Answer with words in the sentence:
They
Step 3: Repeat, write as many QA pairs as possible...
What is increased?
the rent
When is something increased?
this year
[He et al 2015]
Our Method: Q/A Pairs for Semantic Relations
The rent rose 10% from $3000 to $3300.
**Wh-Question** and **Answer** pairs:
- What rose? → the rent
- How much did something rise? → 10%
- What did something rise from? → $3000
- What did something rise to? → $3300
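A minimal sketch of this data format, assuming (as the annotation scheme requires) that every answer is a contiguous token span of the sentence; the representation itself is ours, not the paper's:

```python
# Sketch of QA-pair annotations for a verb: a list of wh-questions
# whose answers must be word spans taken from the sentence itself.
sentence = "The rent rose 10% from $3000 to $3300 .".split()

qa_pairs = [
    ("What rose?", "The rent"),
    ("How much did something rise?", "10%"),
    ("What did something rise from?", "$3000"),
    ("What did something rise to?", "$3300"),
]

def is_span(answer, tokens):
    """Check that the answer is a contiguous token span of the sentence."""
    a = answer.split()
    return any(tokens[i:i + len(a)] == a for i in range(len(tokens)))

print(all(is_span(ans, sentence) for _, ans in qa_pairs))  # True
```

This span constraint is what makes the annotations easy to collect from non-experts: answering never requires inventing words outside the sentence.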
Dataset Statistics
- Sentences: 1,241 (newswire) vs 1,959 (Wikipedia)
- Verbs: 3,336 (newswire) vs 4,440 (Wikipedia)
- QA Pairs: 8,109 (newswire) vs 10,798 (Wikipedia)
Cost and Speed
- Part-time freelancers from upwork.com (hourly rate: $10)
- ~2h screening process for native English proficiency
Wh-words vs. PropBank Roles
<table>
<thead>
<tr>
<th></th>
<th>Who</th>
<th>What</th>
<th>When</th>
<th>Where</th>
<th>Why</th>
<th>How</th>
<th>HowMuch</th>
</tr>
</thead>
<tbody>
<tr>
<td>ARG0</td>
<td>1575</td>
<td>414</td>
<td>3</td>
<td>5</td>
<td>17</td>
<td>28</td>
<td>2</td>
</tr>
<tr>
<td>ARG1</td>
<td>285</td>
<td>2481</td>
<td>4</td>
<td>25</td>
<td>20</td>
<td>23</td>
<td>95</td>
</tr>
<tr>
<td>ARG2</td>
<td>85</td>
<td>364</td>
<td>2</td>
<td>49</td>
<td>17</td>
<td>51</td>
<td>74</td>
</tr>
<tr>
<td>ARG3</td>
<td>11</td>
<td>62</td>
<td>7</td>
<td>8</td>
<td>4</td>
<td>16</td>
<td>31</td>
</tr>
<tr>
<td>ARG4</td>
<td>2</td>
<td>30</td>
<td>5</td>
<td>11</td>
<td>2</td>
<td>4</td>
<td>30</td>
</tr>
<tr>
<td>ARG5</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>AM-ADV</td>
<td>5</td>
<td>44</td>
<td>9</td>
<td>2</td>
<td>25</td>
<td>27</td>
<td>6</td>
</tr>
<tr>
<td>AM-CAU</td>
<td>0</td>
<td>3</td>
<td>1</td>
<td>0</td>
<td>23</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>AM-DIR</td>
<td>0</td>
<td>6</td>
<td>1</td>
<td>13</td>
<td>0</td>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>AM-EXT</td>
<td>0</td>
<td>4</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>AM-LOC</td>
<td>1</td>
<td>35</td>
<td>10</td>
<td>89</td>
<td>0</td>
<td>13</td>
<td>11</td>
</tr>
<tr>
<td>AM-MNR</td>
<td>5</td>
<td>47</td>
<td>2</td>
<td>8</td>
<td>4</td>
<td>108</td>
<td>14</td>
</tr>
<tr>
<td>AM-PNC</td>
<td>2</td>
<td>21</td>
<td>0</td>
<td>1</td>
<td>39</td>
<td>7</td>
<td>2</td>
</tr>
<tr>
<td>AM-PRD</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>AM-TMP</td>
<td>2</td>
<td>51</td>
<td>341</td>
<td>2</td>
<td>11</td>
<td>20</td>
<td>10</td>
</tr>
</tbody>
</table>
Advantages
• Easily explained
• No pre-defined roles, few syntactic assumptions
• Can capture implicit arguments
• Generalizable across domains
Limitations
• Only modeling verbs (for now)
• Not annotating verb senses directly
• Can have multiple equivalent questions
Challenges
• What questions to ask?
• Quality - Can we get good Q/A pairs?
• Coverage - Can we get all the Q/A pairs?
Towards Broad Coverage Semantic Parsing
- Can we crowdsource semantics?
- Train with latent syntax?
- Build fast and accurate parsers?
- Actively select which data to label?
John denied the report
John refused to deny the report
John refused to confirm or deny the report
Joint vs. Pipelines (F1)
CCG Dependencies
Include nearly all SRL dependencies:
\[
\begin{align*}
\text{John} &: \text{NP}_{John} \\
\text{wanted} &: (S\backslash\text{NP}_x)/(S\backslash\text{NP}_x), \quad \textit{wanted} \rightarrow x \\
\text{to confirm} &: (S\backslash\text{NP}_x)/\text{NP}_y, \quad \textit{confirm} \rightarrow x,\ \textit{confirm} \rightarrow y \\
\text{to confirm the report} &: S\backslash\text{NP}_x, \quad \textit{confirm} \rightarrow x,\ \textit{confirm} \rightarrow \textit{report} \\
\text{wanted to confirm the report} &: S\backslash\text{NP}_x, \quad \textit{confirm} \rightarrow x,\ \textit{confirm} \rightarrow \textit{report},\ \textit{wanted} \rightarrow x \\
\text{John wanted to confirm the report} &: S, \quad \textit{confirm} \rightarrow \textit{report},\ \textit{confirm} \rightarrow \textit{John},\ \textit{wanted} \rightarrow \textit{John}
\end{align*}
\]
[Lewis et al, 2015]
Training
Learn latent CCG that recovers SRL
He opened the door (ARG0: He, ARG1: the door)
• Generate *consistent* CCG/SRL parses for training sentences
• Mark subset as correct, based on semantic dependencies
• Optimize marginal likelihood
SRL Results
[Lewis et al 2015]
Out-of-domain SRL Results
F1 scores compared for Riedel, Zhao, Che, Vickrey, Pipeline, and Joint. The Joint approach has the highest F1 score, followed by Vickrey, Che, and Pipeline; Riedel and Zhao have the lowest scores.
Towards Broad Coverage Semantic Parsing
- Can we crowdsource semantics?
- Train with latent syntax?
- Build fast and accurate parsers?
- Actively select which data to label?
Global A* Parsing
**Challenge:**
Global models (e.g. Recursive NNs) break dynamic programs
**Our approach:**
Combine local and global models in A* parser
**Result:**
Accurate models with formal guarantees
[Lee et al, 2016, EMNLP best paper]
Fruit flies like bananas
Klein and Manning, 2001
Parsing with Hypergraphs
Fruit flies like bananas
Input
Output
Fruit flies like bananas
Each hyperedge $e$ is weighted with a score $g(e)$
Parsing with Hypergraphs
Score of parse derivation:
\[ g(y) = \sum_{e \in y} g(e) \]
Fruit flies like bananas
\[
\begin{align*}
\text{NP}/\text{NP} &\rightarrow \text{NP} \\
\text{NP} &\rightarrow \text{NP} \\
(S\backslash\text{NP})/\text{NP} &\rightarrow S\backslash\text{NP} \\
S\backslash\text{NP} &\rightarrow S
\end{align*}
\]
Predicted parse: \( y^* = \arg\max_{y \in Y} g(y) \)
- Exponential number of nodes
\[ \rightarrow \] Intractable inference
Managing Intractable Search Spaces
Approximate inference with global expressivity, e.g.
- Greedy / beam search:
- Nivre, 2008
- Chen and Manning, 2014
- Andor et al., 2016
- Reranking:
- Charniak and Johnson, 2005
- Huang, 2008
- Socher et al., 2013
Locally Factored Parsing
Scores condition on local structures
- Make locality assumptions:
- e.g. features are local to CFG productions
- Polynomial number of nodes
- Dynamic programs enable tractable inference
Locally Factored Parsing
Scores condition on local structures
Dynamic programs with locally factored models, e.g.
- CKY:
- Collins, 1997
- Durrett and Klein, 2015
- Minimum spanning tree:
- McDonald et al., 2005
- Kiperwasser and Goldberg, 2016
Locally Factored Parsing
Scores condition on local structures
Fruit flies like bananas
Dynamic programs with locally factored models, e.g.
Recursive neural networks break dynamic programs!
Local vs. Global Models
**Local model:**
\[ y^* = \arg \max_{y \in Y} (g_{local}(y)) \]
- Efficient
- Inexpressive
**Global model:**
\[ y^* = \arg \max_{y \in Y} (g_{global}(y)) \]
- Intractable
- Expressive
This Work
Combined model:
\[ y^* = \underset{y \in Y}{\text{argmax}} \left( g_{\text{local}}(y) + g_{\text{global}}(y) \right) \]
Efficient
Expressive
A* Parsing
\[ y^* = \arg\max_{y \in Y} g(y) \]
- Search in the space of partial parses
- First explored full parse guaranteed to be optimal
Klein and Manning, 2003
A* Parsing
Fruit flies like bananas
Partial parse
A* Parsing
Fruit flies like bananas
Partial parse
Exploration priority
A* Parsing
Exploration priority
Inside score
Admissible A* heuristic
\[ f(\text{Fruit flies like bananas}) = g(\text{Fruit flies like bananas}) + h(\text{Fruit flies like bananas}) \]
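The exploration order above can be sketched with a priority queue, assuming each agenda item carries an inside score $g(y)$ and a precomputed admissible outside estimate $h(y)$. The scores below are illustrative, loosely echoing the agenda tables, and are not real model scores:

```python
import heapq

# Hedged sketch of the A* agenda: items are popped in order of
# f(y) = g(y) + h(y). With an admissible h (an upper bound on the
# outside score), the first complete parse popped is optimal.

def make_agenda(items):
    # items: list of (g, h, description); heapq is a min-heap,
    # so we store -f to pop the highest-priority item first.
    agenda = [(-(g + h), desc) for g, h, desc in items]
    heapq.heapify(agenda)
    return agenda

def pop_best(agenda):
    neg_f, desc = heapq.heappop(agenda)
    return -neg_f, desc

# Illustrative supertag items for "Fruit flies like bananas".
agenda = make_agenda([
    (-0.1, 4.6, "bananas := NP"),
    (-1.1, 4.2, "like := (S\\NP)/NP"),
    (-0.5, 2.4, "Fruit := NP"),
    (-2.9, 2.4, "Fruit := NP/NP"),
])

f, best = pop_best(agenda)
print(round(f, 1), best)   # 4.5 bananas := NP, explored first
```

Note that the global (non-local) part of the model only needs to contribute to $g$ and $h$ scores; the agenda mechanics are unchanged, which is how the combined local-plus-global model keeps its formal guarantees.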
A* Parsing
<table>
<thead>
<tr>
<th>Agenda position</th>
<th>$f(y)$</th>
<th>$y$</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>4.5</td>
<td>bananas</td>
</tr>
<tr>
<td>2</td>
<td>3.1</td>
<td>like</td>
</tr>
<tr>
<td>3</td>
<td>1.9</td>
<td>Fruit</td>
</tr>
<tr>
<td>4</td>
<td>-0.5</td>
<td>Fruit</td>
</tr>
</tbody>
</table>
A* Parsing
<table>
<thead>
<tr>
<th>Agenda position</th>
<th>$f(y)$</th>
<th>$y$</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>4.5</td>
<td>bananas</td>
</tr>
<tr>
<td>2</td>
<td>3.1</td>
<td>like (NP/NP)</td>
</tr>
<tr>
<td>3</td>
<td>1.9</td>
<td>Fruit</td>
</tr>
<tr>
<td>4</td>
<td>-0.5</td>
<td>Fruit (NP/NP)</td>
</tr>
</tbody>
</table>
A* Parsing
**Agenda position** | **f(y)** | **y**
---|---|---
2 | 3.1 | like \((S\backslash NP)/NP\)
3 | 1.9 | Fruit \(NP\)
4 | -0.5 | Fruit \(NP/NP\)
A* Parsing
<table>
<thead>
<tr>
<th>Agenda position</th>
<th>f(y)</th>
<th>y</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>3.1</td>
<td>like (S\NP)/NP</td>
</tr>
<tr>
<td>2</td>
<td>1.9</td>
<td>Fruit NP</td>
</tr>
<tr>
<td>3</td>
<td>-0.5</td>
<td>Fruit NP/NP</td>
</tr>
<tr>
<td>4</td>
<td>-1.3</td>
<td>flies NP</td>
</tr>
</tbody>
</table>
A* Parsing
Agenda position | f(y) | y
--- | --- | ---
2 | 1.9 | Fruit \( NP \)
3 | -0.5 | Fruit \( NP/NP \)
4 | -1.3 | flies \( NP \)
A* Parsing
<table>
<thead>
<tr>
<th>Agenda position</th>
<th>$f(y)$</th>
<th>$y$</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2.1</td>
<td>like $(S\backslash NP)/NP$ bananas $NP$</td>
</tr>
<tr>
<td>2</td>
<td>1.9</td>
<td>Fruit $NP$</td>
</tr>
<tr>
<td>3</td>
<td>-0.5</td>
<td>Fruit $NP/NP$</td>
</tr>
<tr>
<td>4</td>
<td>-1.3</td>
<td>flies $NP$</td>
</tr>
</tbody>
</table>
A* Parsing
<table>
<thead>
<tr>
<th>Agenda position</th>
<th>$f(y)$</th>
<th>$y$</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1.9</td>
<td>Fruit $NP$</td>
</tr>
<tr>
<td>2</td>
<td>-1.5</td>
<td>like $(S\backslash S)/NP$</td>
</tr>
<tr>
<td>3</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>4</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
Locally Factored Model
Supertag-factored A* CCG Parser (Lewis et al, 2016):
\[
\begin{array}{cccc}
\text{Fruit} & \text{flies} & \text{like} & \text{bananas} \\
NP/NP & NP & (S\backslash NP)/NP & NP \\
\multicolumn{2}{c}{NP} & \multicolumn{2}{c}{S\backslash NP} \\
\multicolumn{4}{c}{S}
\end{array}
\]
Locally Factored Model
Supertag-factored A* CCG Parser (Lewis et al, 2016):
\[ g_{local}(y) = g\left(\frac{\text{Fruit}}{NP/NP}\right) + g\left(\frac{\text{flies}}{NP}\right) + g\left(\frac{\text{like}}{(S\backslash NP)/NP}\right) + g\left(\frac{\text{bananas}}{NP}\right) \]
Locally Factored Model
Supertag-factored $A^*$ CCG Parser (Lewis et al, 2016):
\[
\begin{array}{cccc}
\text{Fruit} & \text{flies} & \text{like} & \text{bananas} \\
? & ? & (S\backslash NP)/NP & NP \\
& & \multicolumn{2}{c}{S\backslash NP}
\end{array}
\]
Locally Factored Model
Supertag-factored A* CCG Parser (Lewis et al, 2016):
\[
g_{local}(?) = g\left(\frac{\text{like}}{(S\backslash NP)/NP}\right) + g\left(\frac{\text{bananas}}{NP}\right)
\]
Locally Factored Model
Supertag-factored A* CCG Parser (Lewis et al, 2016):
\[ g_{local}(?) = g\left(\frac{\text{like}}{(S\backslash NP)/NP}\right) + g\left(\frac{\text{bananas}}{NP}\right) \]
\[ h_{local}(?) = \max_{\text{tag}} g\left(\frac{\text{Fruit}}{\text{tag}}\right) + \max_{\text{tag}} g\left(\frac{\text{flies}}{\text{tag}}\right) \]
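Both quantities fall out directly from a table of supertag scores: the inside score sums the tags a partial parse has fixed, and the heuristic takes the best tag for every word not yet covered. A sketch with made-up numbers:

```python
# Toy supertag log-scores; words and numbers are illustrative only.
SCORES = {
    "Fruit": {"NP/NP": -0.7, "NP": -0.2},
    "flies": {"NP": -0.4, "N": -2.0},
    "like": {"(S\\NP)/NP": -0.3, "NP/NP": -1.5},
    "bananas": {"NP": -0.1},
}
WORDS = ["Fruit", "flies", "like", "bananas"]

def g_local(tagged):
    """Score of the supertags already fixed by a partial parse."""
    return sum(SCORES[w][t] for w, t in tagged.items())

def h_local(tagged):
    """Admissible heuristic: best possible score for each untagged word."""
    return sum(max(SCORES[w].values()) for w in WORDS if w not in tagged)

# Partial parse covering "like bananas":
partial = {"like": "(S\\NP)/NP", "bananas": "NP"}
print(g_local(partial), h_local(partial))
```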
Global A* Parsing
$$y^* = \arg\max_{y \in Y} g(y)$$
- First explored full parse **guaranteed to be optimal**
- Global search graph is **exponential** in sentence length
- Open question: Can we still **learn to search** efficiently?
Modeling Global Structure
Any locally factored model with an admissible A* heuristic, plus a non-positive global model:
\[ g(y) = g_{\text{local}}(y) + g_{\text{global}}(y) \]
\[ h(y) = h_{\text{local}}(y) + 0 \]
Division of Labor
\[ g(y) = g_{local}(y) + g_{global}(y) \]
Local model:
- Limited expressivity
- Provides guidance with an A* heuristic
Global model:
- Global expressivity
- Discriminative only when necessary
Global Model: $g_{global}(y)$
[Diagram: the words "Fruit flies like bananas" pass through word embeddings and a bidirectional LSTM; a Tree-LSTM composes the parse bottom-up (NP/NP, NP, ... up to S) to produce the parse score $g_{global}(y)$.]
Non-positive Global Model
\[ g_{global} = \log(\sigma(w \cdot \text{NP})) \leq 0 \]
(a log-sigmoid score is never positive, so the local A* heuristic remains admissible)
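Why non-positivity matters: since log σ(x) ≤ 0 for every x, adding the global score can only lower f, so a zero heuristic for the global part (and hence the local heuristic overall) stays admissible. A quick numerical check (the stable formulation is my own, not from the talk):

```python
import math

def log_sigmoid(x: float) -> float:
    """Numerically stable log(sigmoid(x)); always <= 0.
    Uses log(sigmoid(x)) = min(x, 0) - log(1 + exp(-|x|))."""
    return min(x, 0.0) - math.log1p(math.exp(-abs(x)))

# Non-positive for very negative, zero, and very positive inputs alike:
for x in (-50.0, -1.0, 0.0, 1.0, 50.0):
    assert log_sigmoid(x) <= 0.0
print(log_sigmoid(0.0))  # log(1/2) ≈ -0.693
```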
Learning with A*
<table>
<thead>
<tr>
<th>Agenda position</th>
<th>$f(y)$</th>
<th>$y$</th>
<th>Is correct?</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1.9</td>
<td>Fruit $NP$</td>
<td>✗</td>
</tr>
<tr>
<td>2</td>
<td>-0.5</td>
<td>Fruit $NP/NP$</td>
<td>✓</td>
</tr>
<tr>
<td>3</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>4</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
Violation-based Loss
\[ L(A) = \sum_{t=1}^{T} \max_{y \in A_t} f(y) - \max_{y \in \text{GOLD}(A_t)} f(y) \]
(where $A_t$ is the agenda at step $t$ and $\text{GOLD}(A_t)$ are its correct items)
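A minimal sketch of this loss over recorded agenda snapshots (toy items and scores; it assumes at least one gold item is present on each snapshot):

```python
def violation_loss(agenda_steps, gold):
    """L(A): at each step, the gap between the best-scoring agenda item
    and the best-scoring *gold* item on the agenda. Zero when a gold item
    tops the agenda at every step."""
    loss = 0.0
    for items in agenda_steps:  # items: list of (f_score, parse_id)
        best = max(f for f, _ in items)
        best_gold = max(f for f, y in items if y in gold)
        loss += best - best_gold
    return loss

# One step where the wrong item ("Fruit NP") outranks gold ("Fruit NP/NP"):
step = [(1.9, "Fruit NP"), (-0.5, "Fruit NP/NP")]
print(violation_loss([step], gold={"Fruit NP/NP"}))  # 1.9 - (-0.5) = 2.4
```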
Jointly Optimizing Accuracy and Efficiency
Correct partial parse can still be predicted via backtracking
<table>
<thead>
<tr>
<th>Agenda position</th>
<th>$f(y)$</th>
<th>$y$</th>
<th>Is correct?</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1.9</td>
<td>Fruit $NP$</td>
<td>$\times$</td>
</tr>
<tr>
<td>2</td>
<td>-0.5</td>
<td>Fruit $NP/NP$</td>
<td>$\checkmark$</td>
</tr>
<tr>
<td>3</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>4</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
Jointly Optimizing Accuracy and Efficiency
Explicitly optimize for search efficiency!
CCG Parsing Results
<table>
<thead>
<tr>
<th></th>
<th>Test F1 (%)</th>
<th>Is global?</th>
<th>Is exact?</th>
</tr>
</thead>
<tbody>
<tr>
<td>Clark & Curran (2007)</td>
<td>85.2</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Xu et al. (2015)</td>
<td>87.0</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Lewis et al. (2016)</td>
<td>88.1</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Vaswani et al. (2016)</td>
<td>88.3</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Global A*</td>
<td>88.7</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
CCG Parsing Results
- Optimal parse found for 99.9% of sentences
- Explores only 190 partial parses on average
Garden Paths
Incorrect partial parse (syntactically plausible in isolation):
Input sentence:
The favorite **U.S. small business is one** whose research and development can be milked for future Japanese use.
Heavily penalized by the global model
Towards Broad Coverage Semantic Parsing
• Can we crowdsource semantics?
• Train with latent syntax?
• Build fast and accurate parsers?
• Actively select which data to label?
Our key hypothesis:
Anyone who **understands the meaning of a sentence** should be able to correct **parser mistakes**.
Pat ate the cake on the table that I **baked** last night.
Parser: I **baked** table
Human understanding: I **baked** cake
Can we use human judgements to improve the parse?
[He et al, 2016]
Pat ate the cake on the table that I baked last night.
Q: What did someone bake?
1. table 2. cake
Candidate dependencies from the n-best list:
baked → table
baked → cake
Re-parsed CCG Dependency Tree
C_pos (bake → cake)
C_neg (bake → table)
Not re-training the model
Generate Q/A Pairs from CCG Dependencies
Predicted CCG category of \textit{baked}: \((S\backslash NP_1)/NP_2\)
Convert to templates:
- *what* bake sth. → "What baked something?" — I
- sth. bake *what* → "What did someone bake?" — the table
- sth. bake *what* → "What did someone bake?" — the cake
Infer *someone/something* and the *answer spans* based on the n-best parses
Used "*what*" for all questions
### Group Q/A Pairs into Queries
<table>
<thead>
<tr>
<th>Questions</th>
<th>Answers</th>
<th>Scores</th>
<th>Question Confidence</th>
<th>Answer Uncertainty (Entropy)</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>What</strong> baked something?</td>
<td>I</td>
<td>1.0</td>
<td>1.0</td>
<td>0.0</td>
</tr>
<tr>
<td><strong>What</strong> did someone bake?</td>
<td>the table</td>
<td>0.7</td>
<td>1.0</td>
<td>0.88</td>
</tr>
<tr>
<td></td>
<td>the cake</td>
<td>0.3</td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>What</strong> was baked something?</td>
<td>the table</td>
<td>0.1</td>
<td>0.1</td>
<td>0.0</td>
</tr>
</tbody>
</table>
- The table shows, for each question: its answers, answer scores, question confidence, and answer uncertainty (entropy).
- **What baked something?** has high confidence and no answer uncertainty, so it need not be asked.
- **What did someone bake?** has high confidence and high answer uncertainty (0.88 bits): a useful query.
- **What was baked something?** is non-sensical; its low question confidence filters it out.
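The answer-uncertainty column can be reproduced as the entropy (in bits) of the normalized answer scores; a small sketch (not from the talk):

```python
import math

def answer_entropy(scores):
    """Entropy in bits of the answer distribution for one question."""
    total = sum(scores)
    probs = [s / total for s in scores]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# "What did someone bake?" with answers scored 0.7 (the table) / 0.3 (the cake)
print(round(answer_entropy([0.7, 0.3]), 2))  # → 0.88, matching the table
```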
Our Annotation Task
Sentence:
Pat ate the cake on the table that I baked last night.
Question:
What did someone bake?
Check one or more
☐ the cake
☐ the table
☐ None of the above.
Comment:
* Crowdsourcing platform: https://www.crowdflower.com/.
Data Collection with Crowdsourcing
- All developments are done on CCG-Dev only.
- Fewer than 2 queries per sentence, for about 60% of the sentences.
- **Cost:** 46 cents per query.
- **Speed:** 200 queries per hour.
Inter-Annotator Agreement
- Agreement is computed over the exact set of answers, i.e., (A, B) and (B) count as disagreement.
- Unanimous agreement for over 40% of the queries.
- Over 90% absolute majority.
Putting our hypothesis to the test: How well does annotators’ human understanding align with the gold syntax?
- **Successes**: Long-range attachment decisions
- **Challenges**: Syntax-semantics mismatch
- **Use heuristics to fix the mismatch problems at re-parsing time.**
Temple also said Sea Containers’ plan raises numerous legal, regulatory, financial and fairness issues, but didn’t elaborate.
What didn’t elaborate something?
4 Temple
1 Sea Containers’ plan
0 None of the above.
Success - Coordination
To *avoid* these costs, and a possible default, immediate action is imperative.
What would something *avoid*?
4 **these costs**
3 **a possible default**
0 None of the above.
Kalipharma is a New Jersey-based pharmaceuticals concern that *sells* products under the Purepac label.
What *sells* something?
5 Kalipharma
None of the above.
- Syntax-semantics mismatch
- Also happens with pronouns and appositives.
- Some cases are heuristically fixed during reparsing.
Timex had requested duty-free treatment for many types of watches, covered by 58 different U.S. tariff classifications.
What would be covered?
0 Timex
0 duty-free treatment
0 None of the above.
2 many types of watches
3 watches
• Annotators tend to struggle with headedness.
• We add “disjunctive constraint”, forcing the re-parser to produce either of the two dependencies.
Re-Parsing with Crowdsourced Constraints
Q1: What did someone bake?
- votes(cake) = 4
- votes(table) = 1
- votes(None of the above) = 0
\[ y^{\text{new}} = \arg \max_y \text{base-parser-score}(y) \]
\[ -T^+ \times 1(baked \rightarrow \text{cake} \notin y) \]
\[ -T^- \times 1(baked \rightarrow \text{table} \in y) \]
- Penalizes parses that disagree with crowdsourced judgments.
- Constraints are decomposed by dependencies.
- Thresholds and penalties are tuned on CCG-Dev.
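One way to read the re-parsing objective, as a rescoring of each candidate parse (the assumption here, matching "penalizes parses that disagree", is that positive constraints penalize a parse for *omitting* a voted-up dependency; the penalty weights are illustrative, not the tuned values):

```python
def reparse_score(parse_deps, base_score, pos, neg, t_pos=1.0, t_neg=0.5):
    """Rescore a candidate parse: subtract t_pos for each positively voted
    dependency the parse is missing, and t_neg for each negatively voted
    dependency it contains."""
    score = base_score
    score -= t_pos * sum(1 for d in pos if d not in parse_deps)
    score -= t_neg * sum(1 for d in neg if d in parse_deps)
    return score

pos = {("baked", "cake")}      # votes(cake) = 4  -> positive constraint
neg = {("baked", "table")}     # votes(table) = 1 -> negative constraint
good = {("baked", "cake"), ("ate", "cake")}
bad = {("baked", "table"), ("ate", "cake")}
print(reparse_score(good, 10.0, pos, neg))  # 10.0: satisfies both constraints
print(reparse_score(bad, 10.5, pos, neg))   # 10.5 - 1.0 - 0.5 = 9.0
```

The re-parser then returns the candidate with the highest rescored value.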
Re-parsing Results (Labeled F1)
- Modest improvement due to syntax-semantics mismatch.
- Larger improvement on out-of-domain data.
Active, Ser133-phosphorylated CREB effects transcription of CRE-dependent genes via interaction with the 265-kDa …
Re-parsing Results
- Modified parse trees for about 10% of the sentences after incorporating human judgments.
- Larger gain on changed sentences.
- Changed sentences are “more difficult” on average.
Towards Broad Coverage Semantic Parsing
- Can we crowdsource semantics?
- Yes, but need more than verbs….
- Train with latent syntax?
- Yes, but must extend to QA supervision…
- Build fast and accurate parsers?
- Yes, but need to extend to latent-variable case…
- Actively select which data to label?
- Yes, but need to scale up…
Questions
This document specifies a minimal subset of the Encrypted Session Negotiation protocol sufficient for negotiating an end-to-end encrypted session.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
**NOTE WELL:** This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
Contents

1 Introduction
2 Dramatis Personae
3 Discovering Support
4 Online ESession Negotiation
   4.1 ESession Request (Alice)
   4.2 ESession Rejection (Bob)
   4.3 ESession Response (Bob)
       4.3.1 Diffie-Hellman Preparation (Bob)
       4.3.2 Response Form
   4.4 ESession Accept (Alice)
       4.4.1 Diffie-Hellman Preparation (Alice)
       4.4.2 Generating Session Keys
       4.4.3 Hiding Alice's Identity
       4.4.4 Sending Alice's Identity
   4.5 ESession Accept (Bob)
       4.5.1 Generating Provisory Session Keys (Bob)
       4.5.2 Verifying Alice's Identity
       4.5.3 Short Authentication String
       4.5.4 Generating Bob's Final Session Keys
       4.5.5 Sending Bob's Identity
   4.6 Final Steps (Alice)
       4.6.1 Generating Alice's Final Session Keys
       4.6.2 Verifying Bob's Identity
5 ESession Termination
6 Implementation Notes
   6.1 Multiple-Precision Integers
   6.2 XML Normalization
7 Security Considerations
   7.1 Random Numbers
   7.2 Replay Attacks
   7.3 Unverified SAS
   7.4 Back Doors
   7.5 Extra Responsibilities of Implementors
8 Mandatory to Implement Technologies
9 The sas28x5 SAS Algorithm
10 IANA Considerations
11 XMPP Registrar Considerations
1 Introduction
Encrypted Session Negotiation (XEP-0116)¹ is a fully-fledged protocol that supports multiple different end-to-end encryption functionalities and scenarios. The protocol is as simple as possible given its feature set. However, the work involved to implement it may be reduced by removing support for several of the optional features, including alternative algorithms, 3-message exchange, public keys, repudiation and key re-exchange.

The minimal subset of the protocol defined in this document is designed to be relatively simple to implement while offering full compatibility with implementations of the fully-fledged protocol. The existence of this subset enables developers to produce working code before they have finished implementing the full protocol.

The requirements and the consequent cryptographic design that underpin this protocol are described in Requirements for Encrypted Sessions (XEP-0210)² and Cryptographic Design of Encrypted Sessions (XEP-0188)³. The basic concept is that of an encrypted session which acts as a secure tunnel between two endpoints. The protocol specified in Stanza Session Negotiation (XEP-0155)⁴ and in this document is used to negotiate the encryption keys and establish the tunnel. Thereafter the content of each one-to-one XML stanza exchanged between the endpoints during the session will be encrypted and transmitted within a "wrapper" stanza using Stanza Encryption (XEP-0200)⁵.

The cut-down protocol described here is a 4-message key exchange (see useful summary of 4-message negotiation) with short-authentication-string (SAS), hash commitment and optional retained secrets. It avoids using public keys, thus protecting the identity of both participants against active attacks from third parties.

Note: This protocol requires that both entities are online. An entity MAY use the protocol specified in Offline Encrypted Sessions (XEP-0187)⁶ if it believes the other entity is offline.
2 Dramatis Personae
This document introduces two characters to help the reader follow the necessary exchanges:
1. "Alice" is the name of the initiator of the ESession. Within the scope of this document,
we stipulate that her fully-qualified JID is: <alice@example.org/pda>.
2. "Bob" is the name of the other participant in the ESession started by Alice. Within the
scope of this document, his fully-qualified JID is: <bob@example.com/laptop>.
3. "Aunt Tillie" the archetypal typical user (i.e. non-technical, with only very limited knowledge of how to use a computer, and averse to performing any procedures that are not familiar).
While Alice and Bob are introduced as "end users", they are simply meant to be examples of XMPP entities. Any directly addressable XMPP entity may participate in an ESession.
3 Discovering Support
Before attempting to engage in an ESession with Bob, Alice MAY discover whether he supports this protocol, using either Service Discovery (XEP-0030) or the presence-based profile of XEP-0030 specified in Entity Capabilities (XEP-0115).
The disco#info request sent from Alice to Bob might look as follows:
Listing 1: Alice Queries Bob for ESession Support via Disco
```xml
<iq type='get'
from='alice@example.org/pda'
to='bob@example.com/laptop'
id='disco1'>
<query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```
If Bob sends a disco#info reply and he supports the protocol defined herein, then he MUST include a service discovery feature variable of "http://www.xmpp.org/extensions/xep-0116.html#ns".
Listing 2: Bob Returns disco#info Data
```xml
<iq type='result'
from='bob@example.com/laptop'
to='alice@example.org/pda'
id='disco1'>
<query xmlns='http://jabber.org/protocol/disco#info'>
<identity category='client' type='pc'/>
...
<feature var='http://www.xmpp.org/extensions/xep-0116.html#ns'/>
</query>
</iq>
```
4 Online ESession Negotiation
4.1 ESession Request (Alice)
In addition to the "accept", "security", "otr" and "disclosure" fields (see Back Doors) specified in Stanza Session Negotiation, Alice MUST send to Bob each of the ESession options (see list below) that she is willing to use.
- The list of Modular Exponential (MODP) group numbers (as specified in RFC 2409 or RFC 3526) that MAY be used for Diffie-Hellman key exchange in a "modp" field (valid group numbers include 1, 2, 3, 4, 5, 14, 15, 16, 17 and 18).
- The list of stanza types that MAY be encrypted and decrypted in a "stanzas" field (message, presence, iq)
- The different versions of the Encrypted Session Negotiation protocol that are supported in a "ver" field
Each MODP group has at least two well known constants: a large prime number p, and a generator g for a subgroup of GF(p). For each MODP group that Alice specifies she MUST perform the following computations to calculate her Diffie-Hellman keys (where n is 128 - i.e. the number of bits per cipher block for the AES-128 block cipher algorithm):
1. Generate: a secret random number x (where $2^{2n-1} < x < p - 1$)
2. Calculate: $e = g^x \mod p$
3. Calculate: $H_e = SHA256(e)$ (see SHA)
Alice MUST send all her calculated values of 'He' to Bob, Base64 encoded (in accordance with Section 4 of RFC 4648), in a "dhhashes" field, in the same order as the associated MODP groups are being sent. She MUST also specify a randomly generated Base64 encoded value of NA (her ESession ID) in a "my_nonce" field.
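The three computations above can be sketched in Python. This is a non-normative illustration: `prepare_dh_offer` is a hypothetical helper name, and a real implementation would run it once per offered MODP group with that group's actual p and g (the byte encoding of e follows the Multiple-Precision Integers rules in the Implementation Notes).

```python
import base64
import hashlib
import secrets

def prepare_dh_offer(p: int, g: int, n: int = 128):
    """Steps 1-3 above for a single MODP group (p, g)."""
    # Step 1: secret random x with 2^(2n-1) < x < p - 1
    lower = 1 << (2 * n - 1)
    x = 0
    while not (lower < x < p - 1):
        x = secrets.randbelow(p)
    # Step 2: e = g^x mod p
    e = pow(g, x, p)
    # Step 3: He = SHA256 of e's big-endian byte encoding
    e_bytes = e.to_bytes((e.bit_length() + 7) // 8, 'big')
    he = hashlib.sha256(e_bytes).digest()
    return x, e, base64.b64encode(he).decode('ascii')
```

Alice retains x and e per group and sends only the Base64 encoded He values at this stage.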
\(^{11}\) Entities SHOULD offer even the lowest MODP groups since some entities are CPU-constrained, and security experts tend to agree that "longer keys do not protect against the most realistic security threats".
The form SHOULD NOT include a "sign_algs" field. However, to ensure compatibility with entities that support the full Encrypted Session Negotiation protocol, the form SHOULD include the following fixed values in hidden fields:
<table>
<thead>
<tr>
<th>Field</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>crypt_algs</td>
<td>"aes128-ctr"</td>
</tr>
<tr>
<td>hash_algs</td>
<td>"sha256"</td>
</tr>
<tr>
<td>compress</td>
<td>"none"</td>
</tr>
<tr>
<td>init_pubkey</td>
<td>"none"</td>
</tr>
<tr>
<td>resp_pubkey</td>
<td>"none"</td>
</tr>
<tr>
<td>rekey_freq</td>
<td>"4294967295"</td>
</tr>
<tr>
<td>sas_algs</td>
<td>"sas28x5"</td>
</tr>
</tbody>
</table>
The options in each field MUST appear in Alice’s order of preference.
Listing 3: Alice Initiates a 4-message ESession Negotiation
```xml
<message from='alice@example.org/pda' to='bob@example.com'>
<thread>ffd7076498744578d10edabfe7f4a866</thread>
<feature xmlns='http://jabber.org/protocol/feature-neg'>
  <x type='form' xmlns='jabber:x:data'>
    <field type='hidden' var='FORM_TYPE'>
      <value>urn:xmpp:ssn</value>
    </field>
    <field type='boolean' var='accept'>
      <value>1</value>
      <required/>
    </field>
    <field type='list-single' var='otr'>
      <option><value>false</value></option>
      <option><value>true</value></option>
      <required/>
    </field>
    <field type='list-single' var='disclosure'>
      <option><value>never</value></option>
      <required/>
    </field>
    <field type='list-single' var='security'>
      <option><value>e2e</value></option>
      <option><value>c2s</value></option>
      <required/>
    </field>
    <field type='list-single' var='modp'>
      <option><value>5</value></option>
      <option><value>14</value></option>
      <option><value>2</value></option>
      <option><value>1</value></option>
    </field>
    <field type='hidden' var='crypt_algs'>
      <value>aes128-ctr</value>
    </field>
    <field type='hidden' var='hash_algs'>
      <value>sha256</value>
    </field>
    <field type='hidden' var='compress'>
      <value>none</value>
    </field>
    <field type='list-multi' var='stanzas'>
      <option><value>message</value></option>
      <option><value>iq</value></option>
      <option><value>presence</value></option>
    </field>
    <field type='hidden' var='init_pubkey'>
      <value>none</value>
    </field>
    <field type='hidden' var='resp_pubkey'>
      <value>none</value>
    </field>
    <field type='list-single' var='ver'>
      <option><value>1.3</value></option>
      <option><value>1.2</value></option>
    </field>
    <field type='hidden' var='rekey_freq'>
      <value>4294967295</value>
    </field>
    <field type='hidden' var='my_nonce'>
      <value>** Alice's Base64 encoded ESession ID **</value>
    </field>
    <field type='hidden' var='sas_algs'>
      <value>sas28x5</value>
    </field>
    <field type='hidden' var='dhhashes'>
      <value>** Base64 encoded value of He5 **</value>
      <value>** Base64 encoded value of He14 **</value>
      <value>** Base64 encoded value of He2 **</value>
      <value>** Base64 encoded value of He1 **</value>
    </field>
  </x>
  </feature>
</message>
```
4.2 ESession Rejection (Bob)
If Bob does not want to reveal his presence to Alice for whatever reason, then Bob SHOULD return either no response or an error.
If Bob finds that one or more of the fields (other than the "rekey_freq" field) listed in the Fixed Parameters table (see ESession Request) does not include the value included in the table (or an <option/> element containing the value), or if Bob supports none of the options for one or more of the negotiable ESession fields ("modp", "stanzas", "ver"), then he SHOULD also return a <not-acceptable/> error specifying the field(s) with unsupported options:
Listing 4: Bob Informs Alice that Her Options are Not Supported
```xml
<message type='error' from='bob@example.com/laptop' to='alice@example.org/pda'>
<thread>ffd7076498744578d10edabfe7f4a866</thread>
<feature xmlns='http://jabber.org/protocol/feature-neg'>
...
</feature>
<error type='cancel'>
<not-acceptable xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
<feature xmlns='http://jabber.org/protocol/feature-neg'>
<field var='modp'/>
<field var='ver'/>
</feature>
</error>
</message>
```
Either Bob or Alice MAY attempt to initiate a new ESession after any error during the negotiation process. However, both MUST consider the previous negotiation to have failed and MUST discard any information learned through the previous negotiation.
If Bob is unwilling to start an ESession, but he is ready to initiate a one-to-one stanza session with Alice (see Stanza Session Negotiation), and if Alice included an option for the "security" field with the value "none" or "c2s", then Bob SHOULD accept the stanza session and terminate the ESession negotiation by specifying "none" or "c2s" for the value of the "security" field in his response.
Listing 5: Bob Accepts Stanza Session
```xml
<message from='bob@example.com/laptop' to='alice@example.org/pda'>
<thread>ffd7076498744578d10edabfe7f4a866</thread>
<feature xmlns='http://jabber.org/protocol/feature-neg'>
<x type='submit' xmlns='jabber:x:data'>
<field var='FORM_TYPE'>
<value>urn:xmpp:ssn</value>
</field>
<field var='accept'><value>1</value></field>
<field var='otr'><value>true</value></field>
</x>
</feature>
</message>
```
4.3 ESession Response (Bob)
4.3.1 Diffie-Hellman Preparation (Bob)
If Bob supports one or more of each of Alice's ESession options and is willing to start an ESession with Alice, then he MUST select one of the options from each of the negotiable ESession fields ("modp", "stanzas", "ver") he received from Alice, including one of the MODP groups and Alice's corresponding value of 'He'. Note: MODP group 14, with its 2048-bit modulus, could be considered a good match for AES-128, however CPU-constrained implementations MAY select a smaller group.
Note: Each MODP group has at least two well known constants: a large prime number p, and a generator g for a subgroup of GF(p).
Bob MUST then perform the following computations (where n is 128, the number of bits per cipher block for AES-128):
1. Generate a random number NB (his ESession ID)
2. Generate an n-bit random number CA (the block cipher counter for stanzas sent from Alice to Bob)
3. Set CB = CA XOR $2^{n-1}$ (where CB is the block counter for stanzas sent from Bob to Alice)
4. Generate a secret random number y (where $2^{2n-1} < y < p - 1$)
5. Calculate $d = g^y \mod p$
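The five steps can be sketched as follows (non-normative; `prepare_dh_response` is a hypothetical helper, and the 128-bit size of NB is an assumption since the text does not fix its length). Step 3 flips the counter's high bit, i.e. CB = CA XOR 2^(n-1), matching the corresponding step in the ESession Accept section.

```python
import secrets

def prepare_dh_response(p: int, g: int, n: int = 128):
    """Bob's computations (steps 1-5 above)."""
    nb = secrets.token_bytes(n // 8)   # step 1: Bob's ESession ID (assumed 128-bit)
    ca = secrets.randbits(n)           # step 2: n-bit counter for Alice -> Bob stanzas
    cb = ca ^ (1 << (n - 1))           # step 3: flip the high bit for Bob -> Alice stanzas
    lower = 1 << (2 * n - 1)           # step 4: secret y with 2^(2n-1) < y < p - 1
    y = 0
    while not (lower < y < p - 1):
        y = secrets.randbelow(p)
    d = pow(g, y, p)                   # step 5: d = g^y mod p
    return nb, ca, cb, y, d
```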
4.3.2 Response Form
Bob SHOULD generate the form that he will send back to Alice, including his responses for all the fields Alice sent him except that he MUST NOT include a 'dhhashes' field. The form SHOULD include the fields and associated values listed in the Fixed Parameters table (see ESession Request).
He MUST place his Base64 encoded values of NB and d in the 'my_nonce' and 'dhkeys' fields. Note: Bob MUST NOT return Alice's value of NA in the 'my_nonce' field.
Bob MUST encapsulate the Base64 encoded values of CA and Alice’s NA in two new ‘counter’ and ‘nonce’ fields and append them to the form.
Bob SHOULD respond to Alice by sending her the form (formB).
Listing 6: Bob Responds to Alice
```xml
<message from='bob@example.com/laptop' to='alice@example.org/pda'>
<thread>ffd7076498744578d10edabfe7f4a866</thread>
<feature xmlns='http://jabber.org/protocol/feature-neg'>
<x type='submit' xmlns='jabber:x:data'>
<field var='FORM_TYPE'>
<value>urn:xmpp:ssn</value>
</field>
<field var='accept'><value>true</value></field>
<field var='otr'><value>true</value></field>
<field var='disclosure'><value>never</value></field>
<field var='security'><value>e2e</value></field>
<field var='modp'><value>5</value></field>
<field var='crypt_algs'><value>aes128-ctr</value></field>
<field var='hash_algs'><value>sha256</value></field>
<field var='compress'><value>none</value></field>
<field var='stanzas'><value>message</value></field>
<field var='init_pubkey'><value>none</value></field>
<field var='resp_pubkey'><value>none</value></field>
<field var='ver'><value>1.3</value></field>
<field var='rekey_freq'><value>4294967295</value></field>
<field var='my_nonce'><value>** Bob's Base64 encoded ESession ID **</value></field>
<field var='sas_algs'><value>sas28x5</value></field>
<field var='dhkeys'><value>** Base64 encoded value of d **</value></field>
<field var='nonce'><value>** Alice's Base64 encoded ESession ID **</value></field>
<field var='counter'><value>** Base64 encoded block counter **</value></field>
</x>
</feature></message>
```
4.4 ESession Accept (Alice)
4.4.1 Diffie-Hellman Preparation (Alice)
After Alice receives Bob's response, she MUST use the value of d and the ESession options specified in Bob's response to perform the following steps (where p and g are the constants associated with the selected MODP group, and n is 128 - the number of bits per cipher block):
1. Verify that the ESession options selected by Bob are acceptable
2. Return a <not-acceptable/> error to Bob unless: 1 < d < p - 1
3. Set CB = CA XOR 2^{n-1} (where CB is the block counter for stanzas sent from Bob to Alice)
4. Select her values of x and e that correspond to the selected MODP group (from all the values of x and e she calculated previously - see ESession Request)
5. Calculate K = SHA256(d^x mod p) (the shared secret)
6. Generate provisory session keys only for the messages Alice sends to Bob (KCA, KMA, KSA) - see the next section, Generating Session Keys.
4.4.2 Generating Session Keys
Alice MUST use HMAC with SHA256 and the shared secret ("K") to generate two sets of three keys, one set for each direction of the ESession.
For stanzas that Alice will send to Bob, the keys are calculated as:
1. Encryption key KCA = HMAC(SHA256, K, "Initiator Cipher Key")
2. Integrity key KMA = HMAC(SHA256, K, "Initiator MAC Key")
3. SIGMA key KSA = HMAC(SHA256, K, "Initiator SIGMA Key")
For stanzas that Bob will send to Alice the keys are calculated as:
1. Encryption key KCB = HMAC(SHA256, K, "Responder Cipher Key")
2. Integrity key KMB = HMAC(SHA256, K, "Responder MAC Key")
3. SIGMA key KSB = HMAC(SHA256, K, "Responder SIGMA Key")
Note: Only the 128 least significant bits of the HMAC output must be used for each key. Once the sets of keys have been calculated the value of K MUST be securely destroyed, unless it will be used later to generate the final shared secret (see Generating Bob’s Final Session Keys).
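A non-normative sketch of this derivation; `derive_session_keys` is a hypothetical helper, and taking the trailing 16 bytes of the big-endian HMAC output as the "128 least significant bits" is an interpretation on our part.

```python
import hashlib
import hmac

def derive_session_keys(k: bytes) -> dict:
    """Derive both directions' keys from the shared secret K.
    Only 128 of each HMAC output's 256 bits are kept per key."""
    def kdf(label: str) -> bytes:
        full = hmac.new(k, label.encode('utf-8'), hashlib.sha256).digest()
        return full[-16:]  # low-order 16 bytes of the big-endian digest (assumption)
    return {
        'KCA': kdf('Initiator Cipher Key'),
        'KMA': kdf('Initiator MAC Key'),
        'KSA': kdf('Initiator SIGMA Key'),
        'KCB': kdf('Responder Cipher Key'),
        'KMB': kdf('Responder MAC Key'),
        'KSB': kdf('Responder SIGMA Key'),
    }
```

Note the argument order: in the spec's HMAC(SHA256, K, "label") notation, K is the HMAC key and the label string is the message.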
4.4.3 Hiding Alice’s Identity
Alice MUST perform the following steps before she can prove her identity to Bob while protecting it from third parties.
1. Set formA to be the full Normalized content of the ESession Request data form that Alice sent to Bob at the start of the negotiation.
2. Set formA2 to be the full Normalized content of Alice’s session negotiation completion form excluding the ‘identity’ and ‘mac’ fields (see Sending Alice’s Identity below).
3. Concatenate Bob’s ESession ID, Alice’s ESession ID, e, formA and formA2, and calculate the HMAC of the resulting byte string using SHA256 and the key KSA.
\[ \text{macA} = \textit{HMAC}(\text{SHA256}, \text{KSA}, \{\text{NB}, \text{NA}, e, \text{formA}, \text{formA2}\}) \]
4. Encrypt the HMAC result with AES-128 in counter mode (see Recommendation for Block Cipher Modes of Operation \(^{14}\)), using the encryption key KCA and block counter CA. Note: CA MUST be incremented by 1 for each encrypted block or partial block (i.e. \(CA = (CA + 1) \mod 2^n\), where \(n\) is 128 - the number of bits per cipher block).
\[ \text{IDA} = \textit{CIPHER}(\text{KCA}, \text{CA}, \text{macA}) \]
5. Calculate the HMAC of the encrypted identity (IDA) and the value of Bob’s block cipher counter CA before the encryption above using SHA256 and the integrity key KMA.
\[ \text{MA} = \textit{HMAC}(\text{SHA256}, \text{KMA}, \text{CA}, \text{IDA}) \]
4.4.4 Sending Alice’s Identity
Alice MUST send the Base64 encoded values of NB (wrapped in a 'nonce' field), IDA (wrapped in an 'identity' field) and MA (wrapped in a 'mac' field) to Bob in her session negotiation completion message. Alice MUST also include in the data form her Base64 encoded values of e (wrapped in a 'dhkeys' field) and the Base64 encoded HMAC (using SHA256 and the key NA \(^{15}\)) of each secret (if any) that Alice has retained from her previous session with each of Bob’s clients (wrapped in a 'rshashes' field) - see Sending Bob’s Identity. Note: Alice MUST also append a few random numbers to the 'rshashes' field to make it difficult for an active attacker to discover if she has communicated with Bob before or how many clients Bob has used to communicate with her.
Listing 7: Alice Sends Bob Her Identity
```xml
<message from='alice@example.org/pda' to='bob@example.com/laptop'>
  <thread>ffd7076498744578d10edabfe7f4a866</thread>
  <feature xmlns='http://jabber.org/protocol/feature-neg'>
    <x type='submit' xmlns='jabber:x:data'>
      <field var='FORM_TYPE'><value>urn:xmpp:ssn</value></field>
      <field var='accept'><value>1</value></field>
      <field var='nonce'><value>** Bob's Base64 encoded ESession ID **</value></field>
      <field var='dhkeys'><value>** Base64 encoded value of e5 **</value></field>
      <field var='rshashes'>
        <value>** Base64 encoded hash of retained secret **</value>
        <value>** Base64 encoded random value **</value>
      </field>
      <field var='identity'><value>** Encrypted identity **</value></field>
      <field var='mac'><value>** Integrity of identity **</value></field>
    </x>
  </feature>
</message>
```
\(^{15}\) The HMACs of the retained secrets are generated using Alice’s unique session nonce to prevent her being identified by her retained secrets (only one secret changes each session, and some might not change very often).
4.5 ESession Accept (Bob)
4.5.1 Generating Provisory Session Keys (Bob)
Bob MUST perform the following four steps:
1. Return a `<feature-not-implemented/>` error unless SHA256(e) equals 'He', the value he received from Alice in her original session request.
2. Return a `<feature-not-implemented/>` error unless: 1 < e < p - 1
3. Use the value of e he received from Alice, his secret value of y and their agreed value of p to calculate the value of the Diffie-Hellman shared secret:
\[ K = \text{SHA256}(e^y \mod p) \]
4. Generate Alice's provisory session keys (KCA, KMA, KSA) in exactly the same way as specified in the Generating Session Keys section.
4.5.2 Verifying Alice's Identity
Bob MUST also perform the following steps:
1. Calculate the HMAC of the encrypted identity (IDA) and the value of Alice's block cipher counter using SHA256 and the integrity key KMA.
\[ MA = \text{HMAC}(\text{SHA256}, \text{KMA}, \text{CA}, \text{IDA}) \]
2. Return a `<feature-not-implemented/>` error to Alice unless the value of MA he calculated matches the one he received in the 'mac' field
3. Obtain macA by decrypting IDA with the AES-128 symmetric block cipher algorithm ("DECIPHER") in counter mode, using the encryption key KCA and block counter CA. Note: CA MUST be incremented by 1 for each decrypted block or partial block (i.e. CA = (CA + 1) mod 2^n, where n is 128 - the number of bits per cipher block).
\[ \text{macA} = \text{DECIPHER(KCA, CA, IDA)} \]
4. Set the value of formA to be the full Normalized content of the ESession Request data form that Alice sent to Bob at the start of the negotiation.
5. Set the value of formA2 to be the full Normalized content of Alice's session negotiation completion form excluding the 'identity' and 'mac' fields (see Sending Alice's Identity).
6. Concatenate Bob's ESession ID, Alice's ESession ID, e, formA and formA2, and calculate the HMAC of the resulting byte string using SHA256 and the key KSA.
\[
\text{macA} = \text{HMAC} (\text{SHA256}, \text{KSA}, \{\text{NB}, \text{NA}, e, \text{formA}, \text{formA2}\})
\]
7. Return a <feature-not-implemented/> error to Alice if the two values of macA he calculated in the steps above do not match.
4.5.3 Short Authentication String
Bob and Alice MAY confirm out-of-band that the Short Authentication Strings (SAS) their clients generate for them (using the SAS generation algorithm that they agreed on) are the same. This out-of-band step MAY be performed at any time. However, they SHOULD confirm out-of-band that their SAS match as soon as they realise that the two clients have no retained secret in common (see Generating Bob's Final Session Keys below, or Generating Alice's Final Session Keys). However, if it is inconvenient for Bob and Alice to confirm the match immediately, both clients MAY remember (in a secure way) that a SAS match has not yet been confirmed and remind Bob and Alice at the start of each ESession that they should confirm the SAS match (even if they have a retained secret in common). Their clients should continue to remind them until they either confirm a SAS match, or indicate that security is not important enough for them to bother.
4.5.4 Generating Bob’s Final Session Keys
Bob MUST identify the shared retained secret (SRS) by selecting from his client’s list of the secrets it retained (if any) from previous sessions with Alice’s clients (i.e., secrets from sessions where the bareJID was the same as the one Alice is currently using). Note: The list contains the most recent shared secret for each of Alice’s clients that she has previously used to negotiate ESessions with the client Bob is currently using.
Bob does this by calculating the HMAC (using SHA256 and the key NA) of each secret in the list in turn and comparing it with each of the values in the 'rshashes' field he received from Alice (see Sending Alice’s Identity). Once he finds a match, and has confirmed that the secret has not expired (i.e., that it is not older than an implementation-defined period of time), then he has found the SRS.
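This lookup might be sketched as follows (non-normative; `find_srs` is a hypothetical helper, and the use of a constant-time comparison via `hmac.compare_digest` is a defensive choice the text does not mandate):

```python
import hashlib
import hmac

def find_srs(retained_secrets, rshashes, na: bytes):
    """Return the first retained secret whose HMAC (SHA256, keyed with
    Alice's nonce NA) matches one of the 'rshashes' values, else None."""
    for secret in retained_secrets:
        h = hmac.new(na, secret, hashlib.sha256).digest()
        if any(hmac.compare_digest(h, rsh) for rsh in rshashes):
            return secret
    return None
```

A real implementation would additionally skip secrets that have expired before accepting a match.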
If Bob cannot find a match, then he SHOULD search through all the retained secrets that have not expired (if any) for all the other JIDs his client has communicated with to try to find a match with one of the values in the 'rshashes' field he received from Alice (since she may simply be using a different JID, perhaps in order to protect her identity from third parties).
Once he finds a match then he has found the SRS. Note: Resource-constrained implementations MAY make the performance of this second extended search an optional feature.
Bob MUST calculate the final session key by appending to K (the Diffie-Hellman shared secret) the SRS (only if one was found) and then the Other Shared Secret (only if one exists) and then setting K to be the SHA256 result of the concatenated string of bytes:
\[ K = \text{SHA256}(K \mid \text{SRS} \mid \text{OSS}) \]
Bob MUST now use the new value of K to generate the new session keys (KCA, KMA, KCB, KMB and KSB) - see Generating Session Keys. These keys will be used to exchange encrypted stanzas. Note: Bob will still need the value of K in the next section.
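This rekeying step is a single hash over the concatenated byte strings (a non-normative sketch; `final_session_key` is a hypothetical name, and an absent SRS or OSS is represented by the empty byte string):

```python
import hashlib

def final_session_key(k: bytes, srs: bytes = b'', oss: bytes = b'') -> bytes:
    """K = SHA256(K | SRS | OSS); pass b'' for a secret that was not found."""
    return hashlib.sha256(k + srs + oss).digest()
```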
4.5.5 Sending Bob’s Identity
Bob MUST now prove his identity to Alice while protecting it from third parties. He MUST perform the steps equivalent to those Alice performed above (see Hiding Alice’s Identity for a more detailed description). Bob’s calculations are summarised below. Note: When calculating macB pay attention to the order of NA and NB.
Note: formB is the full Normalized content of the response data form he generated above (see Response Form), and formB2 is the full Normalized content of Bob’s session negotiation completion form excluding the ‘identity’ and ‘mac’ fields (see below).
\[ \text{macB} = \text{HMAC}(\text{SHA256}, \text{KSB}, \{\text{NA}, \text{NB}, \text{d}, \text{formB}, \text{formB2}\}) \]
\[ \text{IDB} = \text{CIPHER}(\text{KCB}, \text{CB}, \text{macB}) \]
\[ \text{MB} = \text{HMAC}(\text{SHA256}, \text{KMB}, \text{CB}, \text{IDB}) \]
Bob MUST send Alice the Base64 encoded value of the HMAC (using SHA256 and the key SRS) of the string "Shared Retained Secret" (wrapped in an ‘srshash’ field). If no SRS was found then he MUST use a random number instead.
\[ \text{HMAC}(\text{SHA256}, \text{SRS}, \text{"Shared Retained Secret"}) \]
Bob MUST also include in the data form the Base64 encoded values of NA, and IDB and MB (that he just calculated). Note: He MAY also send encrypted content (see Stanza Encryption) in the same stanza.
\[ ^{16} \text{Bob always sends a value in the ‘srshash’ field to prevent an attacker learning that the session is not protected by a retained secret.} \]
Listing 8: Bob Sends Alice His Identity
```xml
<message from='bob@example.com/laptop' to='alice@example.org/pda'>
<thread>ffd7076498744578d10edabfe7f4a866</thread>
<init xmlns='http://www.xmpp.org/extensions/xep-0116.html#ns-init'>
<x type='result' xmlns='jabber:x:data'>
<field var='FORM_TYPE'><value>urn:xmpp:ssn</value></field>
<field var='nonce'><value>** Alice's Base64 encoded ESession ID **</value></field>
<field var='srshash'><value>** HMAC with shared retained secret **</value></field>
<field var='identity'><value>** Encrypted identity **</value></field>
<field var='mac'><value>** Integrity of identity **</value></field>
</x>
</init>
<c xmlns='http://www.xmpp.org/extensions/xep-0200.html#ns'>
<data>** Base64 encoded m_final **</data>
<mac>** Base64 encoded a_mac **</mac>
</c>
</message>
```
Finally, Bob MUST destroy all his copies of the old retained secret (SRS) he was keeping for Alice’s client, and calculate a new retained secret for this session:
$$\text{HMAC}(\text{SHA256}, K, \text{"New Retained Secret"})$$
Bob MUST securely store the new value along with the retained secrets his client shares with Alice’s other clients.
Bob’s value of K MUST now be securely destroyed.
4.6 Final Steps (Alice)
4.6.1 Generating Alice’s Final Session Keys
Alice MUST identify the shared retained secret (SRS) by selecting from her client’s list of the secrets it retained from sessions with Bob’s clients (the most recent secret for each of the clients he has used to negotiate ESessions with Alice’s client).
Alice does this by using each secret in the list in turn as the key to calculate the HMAC (with SHA256) of the string “Shared Retained Secret”, and comparing the calculated value with the value in the ‘srshash’ field she received from Bob (see Sending Bob’s Identity). Once she finds a match, and has confirmed that the secret has not expired (i.e., that it is not older than an implementation-defined period of time), then she has found the SRS.
Alice MUST calculate the final session key by appending to K (the Diffie-Hellman shared secret) the SRS (only if one was found) and then the Other Shared Secret (only if one exists), and then setting $K$ to be the SHA256 result of the concatenated string of bytes:
$$K = \text{SHA256}(K \mid \text{SRS} \mid \text{OSS})$$
Alice **MUST** destroy all her copies of the old retained secret (SRS) she was keeping for Bob’s client, and calculate a new retained secret for this session:
$$\text{HMAC}(\text{SHA256}, K, \text{"New Retained Secret"})$$
Alice **MUST** securely store the new value along with the retained secrets her client shares with Bob’s other clients.
Alice **MUST** now use the new value of $K$ to generate the new session keys ($KCA$, $KMA$, $KCB$, $KMB$ and $KSB$) in exactly the same way as Bob did (see **Generating Session Keys**). These keys will be used to exchange encrypted stanzas.
4.6.2 Verifying Bob’s Identity
Finally, Alice **MUST** verify the identity she received from Bob. She does this by performing steps equivalent to those performed by Bob above (see **Verifying Alice’s Identity** for a more detailed description).
Alice’s calculations are summarised below. Note: formB is the full Normalized content of the initial response data form Alice received from Bob (see **Response Form**), and formB2 is the full Normalized content of the session negotiation completion form she received from Bob excluding the ‘identity’ and ‘mac’ fields (see **Sending Bob’s Identity**). Note: When calculating macB pay attention to the order of NA and NB.
$$MB = \text{HMAC}(\text{SHA256}, KMB, CB, IDB)$$
$$macB = \text{DECIPHER}(KCB, CB, IDB)$$
$$macB = \text{HMAC}(\text{SHA256}, KSB, \{NA, NB, d, formB, formB2\})$$
Note: If Alice discovers an error then she **SHOULD** ignore any encrypted content she received in the stanza.
Once ESession negotiation is complete, Alice and Bob **MUST** exchange only encrypted forms of the one-to-one stanza types they agreed upon (e.g., `<message/>` and `<iq/>` stanzas) within the session.
5 ESession Termination
Either entity **MAY** terminate an ESession at any time. Entities **MUST** terminate all open ESessions before they go offline. To terminate an ESession Alice **MUST** send an encrypted stanza (see Stanza Encryption) to Bob, including within the encrypted XML of the <data/> element a stanza session negotiation form with a "terminate" field (as specified in the Termination section of Stanza Session Negotiation). She MUST then securely destroy all keys associated with the ESession.
Listing 9: Alice Terminates an ESession
When Bob receives a termination stanza he MUST verify the MAC (to be sure he received all the stanzas Alice sent him during the ESession) and immediately send an encrypted termination acknowledgement form (as specified in the Termination section of Stanza Session Negotiation) back to Alice. He MUST then securely destroy all keys associated with the ESession.
Listing 10: Bob Acknowledges ESession Termination
When Alice receives the stanza she MUST verify the MAC to be sure she received all the stanzas Bob sent her during the ESession. Once an entity has sent a termination or termination acknowledgement stanza it MUST NOT send another stanza within the ESession.
6 Implementation Notes
6.1 Multiple-Precision Integers
Before Base-64 encoding, hashing or HMACing an arbitrary-length integer, the integer MUST first be converted to a "big endian" bitstring. The bitstring MUST then be padded with leading zero bits so that there are an integral number of octets. Finally, if the integer is not of fixed bit-length (i.e. not a hash or HMAC result) and the bitstring contains leading octets that are zero, these MUST be removed (so the high-order octet is non-zero).
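In Python this encoding maps naturally onto `int.to_bytes` (a non-normative sketch; `mpi_to_bytes` is a hypothetical helper, and encoding zero as a single zero octet is our assumption, since the stripping rule cannot apply to zero):

```python
def mpi_to_bytes(value: int, fixed_bits: int = None) -> bytes:
    """Big-endian encoding of a non-negative integer per the rules above."""
    if fixed_bits is not None:
        # fixed bit-length values (hash/HMAC results) keep leading zero octets
        return value.to_bytes(fixed_bits // 8, 'big')
    if value == 0:
        return b'\x00'  # assumption: the text leaves the zero case unspecified
    # variable-length: pad to whole octets, then strip leading zero octets
    # (int.to_bytes with the minimal length does both at once)
    return value.to_bytes((value.bit_length() + 7) // 8, 'big')
```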
6.2 XML Normalization
Before the signature or MAC of a block of XML is generated or verified, all character data between all elements MUST be removed and the XML MUST be converted to canonical form (see Canonical XML 17).
All the XML this protocol requires to be signed or MACed is very simple, so in this case, canonicalization SHOULD only require the following changes:
- Set attribute value delimiters to single quotation marks (i.e. simply replace all double quotes used as attribute value delimiters in the serialized XML with single quotes)
- Impose lexicographic order on the attributes of "field" elements (i.e. ensure "type" is before "var")
Implementations MAY conceivably also need to make the following changes. Note: Empty elements and special characters SHOULD NOT appear in the signed or MACed XML specified in this protocol.
- Ensure there are no character references
- Convert empty elements to start-end tag pairs
- Ensure there is no whitespace except for single spaces before attributes
- Ensure there are no "xmlns" attributes or namespace prefixes.
7 Security Considerations
7.1 Random Numbers
Weak pseudo-random number generators (PRNG) enable successful attacks. Implementors MUST use a cryptographically strong PRNG to generate all random numbers (see RFC 1750\(^{18}\)).
7.2 Replay Attacks
Alice and Bob MUST ensure that the value of e or d they provide when negotiating each online ESession is unique. This prevents complete online ESessions being replayed.
\(^{17}\) Canonical XML 1.0 <http://www.w3.org/TR/xml-c14n>.
7.3 Unverified SAS
Since very few people bother to (consistently) verify SAS, entities SHOULD protect against 'man-in-the-middle' attacks using retained secrets (and/or other secrets). Entities SHOULD remember whether or not the whole chain of retained secrets (and the associated sessions) has ever been validated by the user verifying a SAS.
7.4 Back Doors
The authors and the XSF would like to discourage the deliberate inclusion of "back doors" in implementations of this protocol. However, we recognize that some organizations must monitor stanza sessions or record stanza sessions in decryptable form for legal compliance reasons, or may choose to monitor stanza sessions for quality assurance purposes. In these cases it is important to inform the other entity of the (potential for) disclosure before starting the ESession (if only to maintain public confidence in this protocol).
Both implementations MUST immediately and clearly inform their users if the negotiated value of the 'disclosure' field is not 'never'.
Before disclosing any stanza session, an entity SHOULD either negotiate the value of the 'disclosure' field to be 'enabled' or terminate the negotiation unsuccessfully. It MUST NOT negotiate the value of the 'disclosure' field to be 'disabled' unless it would be illegal for it to divulge the disclosure to the other entity.
In any case an implementation MUST NOT negotiate the value of the 'disclosure' field to be 'never' unless it implements no feature or mechanism (not even a disabled feature or mechanism) that could be used directly or indirectly to divulge to any third-party either the identities of the participants, or the keys, or the content of any ESession (or information that could be used to recover any of those items). If an implementation deliberately fails to observe this last point (or fails to correct an accidental back door) then it is not compliant with this protocol and MUST NOT either claim or imply any compliance with this protocol or any of the other protocols developed by the authors or the XSF. In this case the authors and the XSF reserve all rights regarding the names of the protocols.
The expectation is that this legal requirement will persuade many implementors either to tell the users of their products that a back door exists, or not to implement a back door at all (if, once informed, the market demands that).
7.5 Extra Responsibilities of Implementors
Cryptography plays only a small part in an entity's security. Even if it implements this protocol perfectly it may still be vulnerable to other attacks. For example, an implementation might store ESession keys on swap space or save private keys to a file in cleartext! Implementors MUST take very great care when developing applications with secure technologies.
8 Mandatory to Implement Technologies
An implementation of this protocol MUST support the following algorithms:
- Diffie-Hellman Key Agreement
- The block cipher algorithm "aes128-ctr" (see AES\(^{19}\))
- The hash algorithm "sha256" (see Secure Hash Standard)
- HMAC (see Section 2 of RFC 2104\(^{20}\))
- The Short Authentication String generation algorithm "sas28x5" (see The sas28x5 SAS Algorithm)
9 The sas28x5 SAS Algorithm
Given the multi-precision integer MA (a big-endian byte array), the UTF-8 byte string formB (see Hiding Bob’s Identity) and SHA256, the following steps can be used to calculate a 5-character SAS with over 16 million possible values that is easy to read and communicate verbally:
1. Concatenate MA, formB and the UTF-8 byte string "Short Authentication String" into a string of bytes
2. Calculate the least significant 24 bits of the SHA256 of the string
3. Convert the 24-bit integer into a base-28\(^{21}\) 5-character string using the following "digits": acdefghikmopqruvwxy123456789 (the digits have values 0-27)
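The three steps can be sketched as follows (non-normative; `sas28x5` is a hypothetical helper, and reading the SHA256 digest as a big-endian integer before masking off the low 24 bits is our interpretation, since the text does not fix the byte order of the hash output):

```python
import hashlib

# 28 "digits" with values 0-27, as listed in step 3
DIGITS = "acdefghikmopqruvwxy123456789"

def sas28x5(ma: bytes, form_b: bytes) -> str:
    """Compute the 5-character Short Authentication String from MA and formB."""
    # Step 1: concatenate MA, formB and the fixed UTF-8 string
    data = ma + form_b + "Short Authentication String".encode('utf-8')
    # Step 2: least significant 24 bits of the SHA256 (big-endian reading assumed)
    n = int.from_bytes(hashlib.sha256(data).digest(), 'big') & 0xFFFFFF
    # Step 3: five base-28 digits, most significant first
    chars = []
    for _ in range(5):
        n, r = divmod(n, 28)
        chars.append(DIGITS[r])
    return ''.join(reversed(chars))
```

Five base-28 digits cover 28^5 = 17,210,368 values, which is enough for any 24-bit input.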
10 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA) [22].
[21] Base-28 was used instead of Base-36 because some characters are often confused when communicated verbally (n, s, b, t, z, j), and because zero is often read as the letter 'o', and the letter 'l' is often read as the number '1'.
[22] The Internet Assigned Numbers Authority (IANA) is the central coordinator for the assignment of unique parameter values for Internet protocols, such as port numbers and URI schemes. For further information, see <http://www.iana.org/>.
11 XMPP Registrar Considerations
See Encrypted Session Negotiation.
The Pharo object model
The Pharo programming model is heavily inspired by that of Smalltalk. It is simple and uniform: everything is an object, and objects communicate only by sending each other messages. Instance variables are private to the object. Methods are all public and dynamically looked up (late-bound). In this chapter we present the core concepts of the Pharo object model. We revisit concepts such as self and super and define their semantics precisely. Then we discuss the consequences of representing classes as objects. This will be extended in Chapter: Classes and Metaclasses.
1.1 The rules of the model
The object model is based on a set of simple rules that are applied uniformly. The rules are as follows:
Rule 1. Everything is an object.
Rule 2. Every object is an instance of a class.
Rule 3. Every class has a superclass.
Rule 4. Everything happens by sending messages.
Rule 5. Method lookup dynamically follows the inheritance chain.
Let us look at each of these rules in some detail.
1.2 Everything is an Object
The mantra *everything is an object* is highly contagious. After only a short while working with Pharo, you will start to be surprised at how this rule simplifies everything you do. Integers, for example, are truly objects, so you can send messages to them, just as you do to any other object. At the end of this chapter, we added an implementation note on the object implementation for the curious reader.
"send '+ 4' to 3, yielding 7"
3 + 4
>>> 7
"send factorial, yielding a big number"
20 factorial
>>> 2432902008176640000
The object 7 is different from the object returned by 20 factorial, but because they are both polymorphic objects, none of the code, not even the implementation of factorial, needs to know about this.
Coming back to everything is an object rule, perhaps the most fundamental consequence of this rule is that classes are objects too. Classes are not second-class objects: they are really first-class objects that you can send messages to, inspect, and change. This means that Pharo is a truly reflective system, which gives a great deal of expressive power to developers.
Important Classes are objects too.
1.3 Every object is an instance of a class
Every object has a class; you can find out which one by sending it the message class.
1 class
>>> SmallInteger
20 factorial class
>>> LargePositiveInteger
'hello' class
>>> ByteString
(4@5) class
>>> Point
Object new class
>>> Object
A class defines the structure of its instances via instance variables, and the behavior of its instances via methods. Each method has a name, called its selector, which is unique within the class.
Since classes are objects, and every object is an instance of a class, it follows that classes must also be instances of classes. A class whose instances are classes is called a *metaclass*. Whenever you create a class, the system automatically creates a metaclass for you. The metaclass defines the structure and behavior of the class that is its instance. 99% of the time you will not need to think about metaclasses, and may happily ignore them. (We will have a closer look at metaclasses in Chapter : Classes and Metaclasses.)
1.4 Instance structure and behavior
Now we will briefly present how we specify the structure and behavior of instances.
**Instance variables**
Instance variables in Pharo are private to the *instance* itself. This is in contrast to Java and C++, which allow instance variables (also known as *fields* or *member variables*) to be accessed by any other instance that happens to be of the same class. We say that the *encapsulation boundary* of objects in Java and C++ is the class, whereas in Pharo it is the instance.
In Pharo, two instances of the same class cannot access each other’s instance variables unless the class defines *accessor methods*. There is no language syntax that provides direct access to the instance variables of any other object. (Actually, a mechanism called reflection does provide a way to ask another object for the values of its instance variables; meta-programming is intended for writing tools like the object inspector, whose sole purpose is to look inside other objects.)
Instance variables can be accessed by name in any of the instance methods of the class that defines them, and also in the methods defined in its subclasses. This means that Pharo instance variables are similar to *protected* variables in C++ and Java. However, we prefer to say that they are private, because it is considered bad style in Pharo to access an instance variable directly from a subclass.
**Instance encapsulation example**
The method `Point>>dist:` computes the distance between the receiver and another point. The instance variables `x` and `y` of the receiver are accessed directly by the method body. However, the instance variables of the other point must be accessed by sending it the messages `x` and `y`.
```smalltalk
Point >> dist: aPoint
	"Answer the distance between aPoint and the receiver."
	| dx dy |
	dx := aPoint x - x.
	dy := aPoint y - y.
	^ (dx * dx + (dy * dy)) sqrt
```

```smalltalk
1 @ 1 dist: 4 @ 5
>>> 5.0
```
The key reason to prefer instance-based encapsulation to class-based encapsulation is that it enables different implementations of the same abstraction to coexist. For example, the method `dist:` need not know or care whether the argument `aPoint` is an instance of the same class as the receiver. The argument object might be represented in polar coordinates, or as a record in a database, or on another computer in a distributed system. As long as it can respond to the messages `x` and `y`, the code of method `dist:` (shown above) will still work.
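This design point is not specific to Smalltalk. As a purely illustrative sketch (the class names are invented, not from the book), the same idea can be written in Python: dist works on two unrelated point representations, as long as both respond to x and y:

```python
import math

class CartesianPoint:
    def __init__(self, x, y):
        self._x, self._y = x, y
    def x(self): return self._x
    def y(self): return self._y

    def dist(self, other):
        # relies only on the other point answering x() and y(),
        # not on its class or internal representation
        dx = other.x() - self.x()
        dy = other.y() - self.y()
        return math.sqrt(dx * dx + dy * dy)

class PolarPoint:
    def __init__(self, r, theta):
        self._r, self._theta = r, theta
    def x(self): return self._r * math.cos(self._theta)
    def y(self): return self._r * math.sin(self._theta)

p = CartesianPoint(1, 1)
q = PolarPoint(math.hypot(4, 5), math.atan2(5, 4))  # same point as 4 @ 5
print(p.dist(q))  # approximately 5.0, up to floating-point rounding
```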
**Methods**
All methods are public and virtual (i.e., dynamically looked up). Methods are grouped into protocols that indicate their intent. Some common protocol names have been established by convention, for example, accessing for all accessor methods, and initialization for establishing a consistent initial state for the object. The protocol private is sometimes used to group methods that should not be seen from outside. Nothing, however, prevents you from sending a message that is implemented by such a "private" method.
Methods can access all instance variables of the object. Some developers prefer to access instance variables only through accessors. This practice has some value, but it also clutters the interface of your classes, and worse, exposes their private state to the world.
1.5 The instance side and the class side
Since classes are objects, they can have their own instance variables and their own methods. We call these class instance variables and class methods, but they are really no different from ordinary instance variables and methods: They simply operate on different objects (classes in this case). An instance variable describes instance state and a method describes instance behavior. Similarly, class instance variables are just instance variables defined by a metaclass (and describe the state of classes - instances of metaclasses), and class methods are just methods defined by a metaclass (and that will be executed on classes).
A class and its metaclass are two separate classes, even though the former is an instance of the latter. However, this is largely irrelevant to you as a programmer: you are concerned with defining the behavior of your objects and the classes that create them.
For this reason, the browser helps you to browse both class and metaclass as if they were a single thing with two "sides": the instance side and the class side, as shown in Figure 1.1. By default, when you select a class in the browser, you're browsing the instance side (i.e., the methods that are executed when messages are sent to an instance of Color). Clicking on the Class side button switches you over to the class side (the methods that will be executed when messages are sent to the class Color itself).
For example, Color blue sends the message blue to the class Color. You will therefore find the method blue defined on the class side of Color, not on the instance side.
```
"Class-side method blue (convenient instance creation method)"
aColor := Color blue.
>>> Color blue
"Color instances are self-evaluating"
"Instance-side accessor method red (returns the red RGB value)"
Color blue red
>>> 0.0
"Instance-side accessor method blue (returns the blue RGB value)"
Color blue blue
>>> 1.0
```
You define a class by filling in the template proposed on the instance side. When you accept this template, the system creates not just the class that you defined, but also the corresponding metaclass (which you can then edit by clicking on the Class side button). The only part of the metaclass creation template that makes sense for you to edit directly is the list of the metaclass’s instance variable names.
Once a class has been created, browsing its instance side (Class side unchecked) lets you edit and browse the methods that will be possessed by instances of that class (and of its subclasses).
Class methods
Class methods can be quite useful; browse Color class for some good examples. You will see that there are two kinds of methods defined on a class: instance creation methods, like Color class>>blue, and those that perform a utility function, like Color class>>wheel:. This is typical, although you will occasionally find class methods used in other ways.
It is convenient to place utility methods on the class side because they can be executed without having to create any additional objects first. Indeed, many of them will contain a comment designed to make it easy to execute them.
Browse method Color class>>wheel:, double-click just at the beginning of the comment "(Color wheel: 12) inspect" and press CMD-d. You will see the effect of executing this method.
For those familiar with Java and C++, class methods may seem similar to static methods. However, the uniformity of the Pharo object model (where classes are just regular objects) means that they are somewhat different: whereas Java static methods are really just statically-resolved procedures, Pharo class methods are dynamically-dispatched methods. This means that inheritance, overriding and super-sends work for class methods in Pharo, whereas they don’t work for static methods in Java.
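To see the contrast concretely, here is a hypothetical Python sketch (the class names are invented for illustration; Python classmethods behave much like Pharo class methods in this respect): inheritance, overriding and super-sends all work on the class side:

```python
class Shape:
    @classmethod
    def describe(cls):
        return f"a {cls.__name__}"

    @classmethod
    def make(cls):
        # 'cls' is bound dynamically to the receiving class,
        # like 'self' in a Pharo class method
        return cls()

class Circle(Shape):
    @classmethod
    def describe(cls):
        # overriding works, and the super-send delegates to Shape
        return "round: " + super().describe()

print(Shape.describe())   # a Shape
print(Circle.describe())  # round: a Circle
assert isinstance(Circle.make(), Circle)
```

A Java static method, by contrast, is resolved at compile time against the declaring class, so this kind of override-plus-super pattern is unavailable there.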
Class instance variables
With ordinary instance variables, all the instances of a class have the same set of variable names (though each instance has its own private set of values), and the instances of its subclasses inherit those names. The story is exactly the same with class instance variables: each class has its own private class instance variables. A subclass will inherit those class instance variables, but the subclass will have its own private copies of those variables. Just as objects don’t share instance variables, neither do classes and their subclasses share class instance variables.
For example, you could use a class instance variable called count to keep track of how many instances you create of a given class. However, any subclass would have its own count variable, so subclass instances would be counted separately.
Example: Class instance variables and subclasses
Suppose we define the class Dog, and its subclass Hyena. Suppose that we add a count class instance variable to the class Dog (i.e. we define it on the metaclass Dog class). Hyena will naturally inherit the class instance variable count from Dog.
```smalltalk
Object subclass: #Dog
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'PBE-CIV'

Dog class
	instanceVariableNames: 'count'

Dog subclass: #Hyena
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'PBE-CIV'
```
Now suppose we define class methods for `Dog` to initialize its `count` to 0, and to increment it when new instances are created:
```plaintext
Dog class >> initialize
count := 0.
Dog class >> new
count := count + 1.
^ super new
Dog class >> count
^ count
```
Now when we create a new `Dog`, the `count` value of the class `Dog` is incremented, but that of the class `Hyena` is untouched: the hyenas are counted separately.
*Side note:* Notice the use of `initialize` on the classes, in the following code. In Pharo, when you instantiate an object such as `Dog new`, `initialize` is called automatically as part of the `new` message send (you can see for yourself by browsing `Behavior>>new`). But with classes, simply defining them does not automatically call `initialize`, and so we have to call it explicitly here. By default class `initialize` methods are automatically executed only when classes are loaded. See also the discussion about lazy initialization, below.
```plaintext
Dog initialize.
Hyena initialize.
Dog count
>>> 0
Hyena count
>>> 0
```
```smalltalk
| aDog |
aDog := Dog new.
Dog count
>>> 1 "Incremented"
Hyena count
>>> 0 "Still the same"
```
Class instance variables are private to a class in exactly the same way that instance variables are private to an instance. Since classes and their instances are different objects, this has the following consequences:
1. A class does not have access to the instance variables of its own instances. So, the class Color does not have access to the variables of an object instantiated from it, aColorRed. In other words, just because a class was used to create an instance (using new or a helper instance creation method like Color red), it doesn’t give the class any special direct access to that instance’s variables. The class instead has to go through the accessor methods (a public interface) just like any other object.
2. The reverse is also true: an instance of a class does not have access to the class instance variables of its class. In our example above, aDog (an individual instance) does not have direct access to the count variable of the Dog class (except, again, through an accessor method).
**Important** A class does not have access to the instance variables of its own instances. An instance of a class does not have access to the class instance variables of its class.
For this reason, instance initialization methods must always be defined on the instance side, the class side has no access to instance variables, and so cannot initialize them! All that the class can do is to send initialization messages, using accessors, to newly created instances.
Java has nothing equivalent to class instance variables. Java and C++ static variables are more like Pharo class variables (discussed in Section 1.9), since in those languages all of the subclasses and all of their instances share the same static variable.
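As an illustrative analogy (not Pharo code; the hook used below is a Python detail), the per-subclass behaviour of class instance variables can be mimicked in Python by giving each subclass its own counter, whereas a plain shared class attribute would behave like a Java static variable:

```python
class Dog:
    count = 0  # a plain class attribute would be shared with subclasses

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.count = 0  # give each subclass its own counter, Pharo-style

    def __init__(self):
        type(self).count += 1  # increment the counter of the instance's class

class Hyena(Dog):
    pass

Dog()
assert Dog.count == 1 and Hyena.count == 0  # counted separately
```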
**Example: Defining a Singleton**
The Singleton pattern provides a typical example of the use of class instance variables and class methods. Imagine that we would like to implement a class WebServer, and to use the Singleton pattern to ensure that it has only one instance.
We define the class WebServer as follows.
```smalltalk
Object subclass: #WebServer
instanceVariableNames: 'sessions'
classVariableNames: ''
package: 'Web'
```
Then, clicking on the Class side button, we add the (class) instance variable uniqueInstance.
```smalltalk
WebServer class
instanceVariableNames: 'uniqueInstance'
```
As a result, the class WebServer class will have a new instance variable (in addition to the variables that it inherits from Behavior, such as superclass and methodDict). This means that the value of this extra instance variable describes the instance of the class WebServer class, i.e., the class WebServer.
```smalltalk
Object class allInstVarNames
>>> "#('superclass' 'methodDict' 'format' 'layout' 'instanceVariables'
'organization' 'subclasses' 'name' 'classPool' 'sharedPools'
'environment' 'category' 'traitComposition' 'localSelectors')"

WebServer class allInstVarNames
>>> "#('superclass' 'methodDict' 'format' 'layout' 'instanceVariables'
'organization' 'subclasses' 'name' 'classPool' 'sharedPools'
'environment' 'category' 'traitComposition' 'localSelectors'
'uniqueInstance')"
```
We can now define a class method named uniqueInstance, as shown below. This method first checks whether uniqueInstance has been initialized. If it has not, the method creates an instance and assigns it to the class instance variable uniqueInstance. Finally the value of uniqueInstance is returned. Since uniqueInstance is a class instance variable, this method can directly access it.
```smalltalk
WebServer class >> uniqueInstance
	uniqueInstance ifNil: [ uniqueInstance := self new ].
	^ uniqueInstance
```
The first time that WebServer uniqueInstance is executed, an instance of the class WebServer will be created and assigned to the uniqueInstance variable. The next time, the previously created instance will be returned instead of creating a new one. (This pattern, checking if a variable is nil in an accessor method, and initializing its value if it is nil, is called lazy initialization).
Note that the instance creation code in the method above is written as self new and not as WebServer new. What is the difference? Since the uniqueInstance method is defined in WebServer class, you might think that there is no difference. And indeed, until someone creates a subclass of WebServer, they are the same. But suppose that ReliableWebServer is a subclass of WebServer, and inherits the uniqueInstance method. We would clearly expect ReliableWebServer uniqueInstance to answer a ReliableWebServer. Using self ensures that this will happen, since self will be bound to the respective receiver, here the classes WebServer and ReliableWebServer. Note also that WebServer and ReliableWebServer will each have a different value for their uniqueInstance instance variable.
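For readers more familiar with other languages, the same lazy, subclass-aware Singleton can be sketched in Python (a hypothetical analogy; cls plays the role that self plays in the Pharo class method):

```python
class WebServer:
    _unique_instance = None  # plays the role of the uniqueInstance class instance variable

    def __init__(self):
        self.sessions = []

    @classmethod
    def unique_instance(cls):
        # lazy initialization: create the instance on first access only;
        # 'cls' makes this subclass-aware, like 'self new' in Pharo
        if cls._unique_instance is None:
            cls._unique_instance = cls()
        return cls._unique_instance

class ReliableWebServer(WebServer):
    # unlike a Pharo class instance variable, the Python attribute must be
    # redeclared so each subclass keeps its own singleton
    _unique_instance = None

assert WebServer.unique_instance() is WebServer.unique_instance()
assert isinstance(ReliableWebServer.unique_instance(), ReliableWebServer)
```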
A note on lazy initialization. Do not over-use the lazy initialization pattern. The setting of initial values for instances of objects generally belongs in the initialize method. Putting initialization calls only in initialize helps from a readability perspective: you don’t have to hunt through all the accessor methods to see what the initial values are. Although it may be tempting to instead initialize instance variables in their respective accessor methods (using ifNil: checks), avoid this unless you have a good reason.
For example, in our uniqueInstance method above, we used lazy initialization because users won’t typically expect to call WebServer initialize. Instead, they expect the class to be “ready” to return new unique instances. Because of this, lazy initialization makes sense. Similarly, if a variable is expensive to initialize (opening a database connection or a network socket, for example), you will sometimes choose to delay that initialization until you actually need it.
1.6 Every class has a superclass
Each class in Pharo inherits its behaviour and the description of its structure from a single *superclass*. This means that Pharo has single inheritance.
```smalltalk
SmallInteger superclass
>>> Integer

Integer superclass
>>> Number

Number superclass
>>> Magnitude

Magnitude superclass
>>> Object

Object superclass
>>> ProtoObject

ProtoObject superclass
>>> nil
```
Traditionally the root of an inheritance hierarchy is the class Object (since everything is an object). In Pharo, the root is actually a class called ProtoObject, but you will normally not pay any attention to this class. ProtoObject encapsulates the minimal set of messages that all objects *must* have, and it is deliberately designed to fail to understand as many messages as possible (to support the definition of proxies). However, most classes inherit from Object, which defines many additional messages that almost all objects understand and respond to. Unless you have a very good reason to do otherwise, when creating application classes you should normally subclass Object, or one of its subclasses.
A new class is normally created by sending the message subclass:instanceVariableNames:... to an existing class. There are a few other methods to create classes. To see what they are, have a look at Class and its subclass creation protocol.
Although Pharo does not provide multiple inheritance, it supports a mechanism called Traits for sharing behaviour across unrelated classes. Traits are collections of methods that can be reused by multiple classes that are not related by inheritance. Using traits allows one to share code between different classes without duplicating code.
**Abstract methods and abstract classes**
An abstract class is a class that exists to be subclassed, rather than to be instantiated. An abstract class is usually incomplete, in the sense that it does not define all of the methods that it uses. The "placeholder" methods, those that the other methods assume will be (re)defined, are called abstract methods.
Pharo has no dedicated syntax to specify that a method or a class is abstract. Instead, by convention, the body of an abstract method consists of the expression `self subclassResponsibility`. This indicates that subclasses have the responsibility to define a concrete version of the method. `self subclassResponsibility` methods should always be overridden, and thus should never be executed. If you forget to override one, and it is executed, an exception will be raised.
Similarly, a class is considered abstract if one of its methods is abstract. Nothing actually prevents you from creating an instance of an abstract class; everything will work until an abstract method is invoked.
**Example: the abstract class Magnitude**
Magnitude is an abstract class that helps us to define objects that can be compared to each other. Subclasses of Magnitude should implement the methods `<`, `=`, and `hash`. Using these messages, Magnitude defines other methods such as `>`, `>=`, `<=`, `~=`, `max:`, `min:`, `between:and:` and others for comparing objects. Such methods are inherited by subclasses. The method `Magnitude>><` is abstract, and defined as shown in the following script.
```smalltalk
< aMagnitude
"Answer whether the receiver is less than the argument."
^self subclassResponsibility
```
By contrast, the method `>=` is concrete, and is defined in terms of `<`.
```smalltalk
>= aMagnitude
"Answer whether the receiver is greater than or equal to the argument."
^(self < aMagnitude) not
```
The same is true of the other comparison methods (they are all defined in terms of the abstract method `<`).
Character is a subclass of Magnitude; it overrides the < method (which, if you recall, is marked as abstract in Magnitude by the use of self subclassResponsibility) with its own version (see the method definition below).
Character also explicitly defines methods = and hash; it inherits from Magnitude the methods >=, <=, ~= and others.
```smalltalk
< aCharacter
"Answer true if the receiver's value < aCharacter's value."
^self asciiValue < aCharacter asciiValue
```
**Traits**
A *trait* is a collection of methods that can be included in the behaviour of a class without the need for inheritance. This makes it easy for classes to have a unique superclass, yet still share useful methods with otherwise unrelated classes.
To define a new trait, simply right-click in the class pane and select Add Trait, or replace the subclass creation template by the trait creation template, below.
```plaintext
Trait named: #TAuthor
uses: { }
package: 'PBE-LightsOut'
```
Here we define the trait TAuthor in the package PBE-LightsOut. This trait does not use any other existing traits. In general we can specify a trait composition expression of other traits to use as part of the uses: keyword argument. Here we simply provide an empty array.
Traits may contain methods, but no instance variables. Suppose we would like to be able to add an author method to various classes, independent of where they occur in the hierarchy.
We might do this as follows:
```plaintext
TAuthor >> author
"Returns author initials"
^ 'on' "oscar nierstrasz"
```
Now we can use this trait in a class that already has its own superclass, for instance the LOGame class that we defined in Chapter : A First Application. We simply modify the class creation template for LOGame to include a uses: keyword argument that specifies that TAuthor should be used.
```smalltalk
BorderedMorph subclass: #LOGame
	uses: TAuthor
	instanceVariableNames: 'cells'
	classVariableNames: ''
	package: 'PBE-LightsOut'
```

If we now instantiate LOGame, it will respond to the author message as expected.

```smalltalk
LOGame new author
>>> 'on'
```
Trait composition expressions may combine multiple traits using the + operator. In case of conflicts (i.e., if multiple traits define methods with the same name), these conflicts can be resolved by explicitly removing these methods (with -), or by redefining these methods in the class or trait that you are defining. It is also possible to alias methods (with @), providing a new name for them.

Traits are used in the system kernel. One good example is the class Behavior.

```smalltalk
Object subclass: #Behavior
	uses: TPureBehavior @ {#basicAddTraitSelector:withMethod: -> #addTraitSelector:withMethod:}
	instanceVariableNames: 'superclass methodDict format'
	classVariableNames: 'ObsoleteSubclasses'
	package: 'Kernel-Classes'
```

Here we see that the method addTraitSelector:withMethod: defined in the trait TPureBehavior has been aliased to basicAddTraitSelector:withMethod:.
1.7 Everything happens by sending messages
This rule captures the essence of programming in Pharo.
In procedural programming (and in some static features of some object-oriented languages such as Java), the choice of which piece of code to execute when a procedure is called is made by the caller. The caller chooses the procedure to execute statically, by name.

In Pharo, we do not "invoke methods": we send messages. This is more than a point of terminology. It implies that it is not the responsibility of the client to select the method to be executed; that responsibility belongs to the receiver of the message.
When sending a message, we do not decide which method will be executed. Instead, we tell an object to do something for us by sending it a message. A message is nothing but a name and a list of arguments. The receiver then decides how to respond by selecting its own method for doing what was asked. Since different objects may have different methods for responding to the same message, the method must be chosen dynamically, when the message is received.
As a consequence, we can send the same message to different objects, each of which may have its own method for responding to the message. We do not tell the SmallInteger 3 or the Point (1@2) how to respond to the message + 4. Each has its own method for +, and responds to + 4 accordingly.
One of the consequences of Pharo’s model of message sending is that it encourages a style in which objects tend to have very small methods and delegate tasks to other objects, rather than implementing huge, procedural methods that assume too much responsibility. Joseph Pelrine expresses this principle succinctly as follows:
"Don’t do anything that you can push off onto someone else."
Many object-oriented languages provide both static and dynamic operations for objects. In Pharo there are only dynamic message sends. For example, instead of providing static class operations, we simply send messages to classes (which are simply objects).
Ok, so nearly everything in Pharo happens by sending messages. At some point action must take place:
- **Variable declarations** are not message sends. In fact, variable declarations are not even executable. Declaring a variable just causes space to be allocated for an object reference.
- **Assignments** are not message sends. An assignment to a variable causes that variable name to be freshly bound in the scope of its definition.
- **Returns** are not message sends. A return simply causes the computed result to be returned to the sender.
- **Primitives** (and pragmas/annotations) are not message sends. They are implemented in the virtual machine.
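These non-message constructs can be seen side by side in one small method sketch (the class `Example` and the selector `double:` are made up for illustration):

```smalltalk
Example >> double: aNumber
	| result |                      "variable declaration: not a message send"
	result := aNumber + aNumber.    "assignment: not a send (though + is one)"
	^ result                        "return: not a message send"
```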
Other than these few exceptions, pretty much everything else does truly happen by sending messages. In particular, since there are no public fields in Pharo, the only way to update an instance variable of another object is to send it a message asking that it update its own field. Of course, providing setter and getter methods for all the instance variables of an object is not good object-oriented style. Joseph Pelrine also states this very nicely:
"Don’t let anyone else play with your data."
1.8 Method lookup follows the inheritance chain
What exactly happens when an object receives a message? This is a two step process: method lookup and method execution.
Lookup. First, the method having the same name as the message is looked up.
Method Execution. Second, the found method is applied to the receiver with the message arguments: When the method is found, the arguments are bound to the parameters of the method, and the virtual machine executes it.
The lookup process is quite simple:
1. The class of the receiver looks up the method to use to handle the message.
2. If this class does not have that method defined, it asks its superclass, and so on, up the inheritance chain.
It is essentially as simple as that. Nevertheless there are a few questions that need some care to answer:
- What happens when a method does not explicitly return a value?
- What happens when a class reimplements a superclass method?
- What is the difference between self and super sends?
- What happens when no method is found?
The rules for method lookup that we present here are conceptual; virtual machine implementors use all kinds of tricks and optimizations to speed up method lookup. That’s their job, but you should never be able to detect that they are doing something different from our rules.
First let us look at the basic lookup strategy, and then consider these further questions.
Method lookup
Suppose we create an instance of EllipseMorph.
```smalltalk
anEllipse := EllipseMorph new.
```
If we now send this object the message defaultColor, we get the result Color yellow.
```smalltalk
anEllipse defaultColor
>>> Color yellow
```
The class EllipseMorph implements defaultColor, so the appropriate method is found immediately.
```smalltalk
EllipseMorph >> defaultColor
"Answer the default color/fill style for the receiver"
^ Color yellow
```
Figure 1.2: Method lookup follows the inheritance hierarchy
In contrast, if we send the message `openInWorld` to `anEllipse`, the method is not immediately found, since the class `EllipseMorph` does not implement `openInWorld`. The search therefore continues in the superclass, `BorderedMorph`, and so on, until an `openInWorld` method is found in the class `Morph` (see Figure 1.2).
```smalltalk
Morph >> openInWorld
"Add this morph to the world."
self openInWorld: self currentWorld
```
**Returning self**
Notice that `EllipseMorph>>defaultColor` explicitly returns `Color yellow`, whereas `Morph>>openInWorld` does not appear to return anything.
Actually a method *always* answers a message with a value (which is, of course, an object). The answer may be defined by the `^` construct in the method, but if execution reaches the end of the method without executing a `^`, the method still answers a value – it answers the object that received the message. We usually say that the method answers *self*, because in Pharo the pseudo-variable `self` represents the receiver of the message, much like the keyword `this` in Java. Other languages, such as Ruby, by default return the value of the last statement in the method. Again, this is not the case in Pharo, instead you can imagine that a method without an explicit return ends with `^ self`.
**Important** `self` represents the receiver of the message.
This suggests that `openInWorld` is equivalent to `openInWorldReturnSelf`, defined below.
```
Morph >> openInWorld
"Add this morph to the world."
self openInWorld: self currentWorld
^ self
```
Why is explicitly writing `^ self` not such a good thing to do? When you return something explicitly, you communicate that you are returning something of interest to the sender. When you explicitly return `self`, you are saying that you expect the sender to use the returned value. This is not the case here, so it is best not to return `self` explicitly. We only return `self` in special cases, to stress that the receiver is returned.
This is a common idiom in Pharo, which Kent Beck refers to as **Interesting return value**:
"Return a value only when you intend for the sender to use the value."
**Important** By default (if not specified differently) a method returns the message receiver.
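For example, a typical setter contains no `^`, so sending it answers the receiver. Assuming a class `Dog` with a `name` instance variable (as in the earlier Dog example in this chapter):

```smalltalk
Dog >> name: aString
	name := aString   "no ^ here, so the method answers self"
```

Consequently the value of `Dog new name: 'Rex'` is the new dog itself:

```smalltalk
(Dog new name: 'Rex') class
>>> Dog
```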
**Overriding and extension**
If we look again at the `EllipseMorph` class hierarchy in Figure 1.2, we see that the classes `Morph` and `EllipseMorph` both implement `defaultColor`. In fact, if we open a new morph (`Morph new openInWorld`) we see that we get a blue morph, whereas an ellipse will be yellow by default.
We say that `EllipseMorph` overrides the `defaultColor` method that it inherits from `Morph`. The inherited method no longer exists from the point of view of an `Ellipse`.
Sometimes we do not want to override inherited methods, but rather extend them with some new functionality, that is, we would like to be able to invoke the overridden method in addition to the new functionality we are defining in the subclass. In Pharo, as in many object-oriented languages that support single inheritance, this can be done with the help of `super` sends.
A frequent application of this mechanism is in the `initialize` method. Whenever a new instance of a class is initialized, it is critical to also initialize any inherited instance variables. However, the knowledge of how to do this is already captured in the `initialize` methods of each of the superclasses in the inheritance chain. The subclass has no business even trying to initialize inherited instance variables!
It is therefore good practice whenever implementing an `initialize` method to send `super initialize` before performing any further initialization:
```
BorderedMorph >> initialize
"initialize the state of the receiver"
super initialize.
```
We need super sends to compose inherited behaviour that would otherwise be overridden.
Important It is good practice for an initialize method to start by sending super initialize.
Self sends and super sends
self represents the receiver of the message and the lookup of the method starts in the class of the receiver. Now what is super? super is not the superclass! It is a common and natural mistake to think this. It is also a mistake to think that lookup starts in the superclass of the class of the receiver.
Important self represents the receiver of the message and the method lookup starts in the class of the receiver.
How do self sends differ from super sends?
Like self, super represents the receiver of the message. Yes, you read that right! The only thing that changes is the method lookup. Instead of the lookup starting in the class of the receiver, it starts in the superclass of the class of the method where the super send occurs.
Important super represents the receiver of the message and the method lookup starts in the superclass of the class of the method where the super send occurs.
We shall see with the following example precisely how this works. Imagine that we define the following three methods:
First we define the method fullPrintOn: on the class Morph; it just adds to the stream the name of the class followed by the string 'new'. The idea is that we could execute the resulting string and get back an instance similar to the receiver.
```smalltalk
Morph >> fullPrintOn: aStream
aStream nextPutAll: self class name, ' new'
```
Second we define the method constructorString that sends the message fullPrintOn:.
```smalltalk
Morph >> constructorString
^ String streamContents: [ :s | self fullPrintOn: s ].
```
Finally, we define the method fullPrintOn: on the class BorderedMorph, the superclass of EllipseMorph. This new method extends the superclass behaviour: it invokes it and adds extra behaviour.
Figure 1.3: self and super sends
```
BorderedMorph >> fullPrintOn: aStream
aStream nextPutAll: '('.
super fullPrintOn: aStream.
aStream
nextPutAll: ') setBorderWidth: ';
print: borderWidth;
nextPutAll: ' borderColor: ', (self colorString: borderColor)
```
Consider the message constructorString sent to an instance of EllipseMorph:
```
EllipseMorph new constructorString
>>> '(EllipseMorph new) setBorderWidth: 1 borderColor: Color black'
```
How exactly is this result obtained through a combination of self and super sends? First, anEllipse constructorString will cause the method constructorString to be found in the class Morph, as shown in Figure 1.3.
The method Morph>>constructorString performs a self send of fullPrintOn:. The message fullPrintOn: is looked up starting in the class EllipseMorph, and the method BorderedMorph>>fullPrintOn: is found in BorderedMorph (see Figure 1.3). What is critical to notice is that the self send causes the method lookup to start again in the class of the receiver, namely the class of anEllipse.
At this point, BorderedMorph>>fullPrintOn: does a super send to extend the fullPrintOn: behaviour it inherits from its superclass. Because this is a super send, the lookup now starts in the superclass of the class where the super send occurs, namely in Morph. We then immediately find and evaluate Morph>>fullPrintOn:.
Stepping back
A self send is dynamic in the sense that by looking at the method containing it, we cannot predict which method will be executed. Indeed, an instance of a subclass may receive the message, and that subclass may redefine the method that is sent to self. Here EllipseMorph could redefine fullPrintOn:, and that redefinition would then be executed by the method constructorString. Just by looking at constructorString, we cannot predict which fullPrintOn: method (that of EllipseMorph, BorderedMorph, or Morph) will be executed, since it depends on the receiver of the constructorString message.
Important A self send triggers a method lookup starting in the class of the receiver. A self send is dynamic in the sense that by looking at the method containing it, we cannot predict which method will be executed.
Note that the super lookup did not start in the superclass of the receiver. This would have caused lookup to start from BorderedMorph, resulting in an infinite loop!
If you think carefully about super send and Figure 1.3, you will realize that super bindings are static: all that matters is the class in which the text of the super send is found. By contrast, the meaning of self is dynamic: it always represents the receiver of the currently executing message. This means that all messages sent to self are looked up by starting in the receiver’s class.
Important A super send triggers a method lookup starting in the superclass of the class of the method performing the super send. We say that super sends are static because just looking at the method we know the class where the lookup should start (the class above the class containing the method).
Message not understood
What happens if the method we are looking for is not found?
Suppose we send the message foo to our ellipse. First the normal method lookup would go through the inheritance chain all the way up to Object (or rather ProtoObject) looking for this method. When this method is not found, the virtual machine will cause the object to send self doesNotUnderstand: #foo. (See Figure 1.4.)
Now, this is a perfectly ordinary, dynamic message send, so the lookup starts again from the class EllipseMorph, but this time searching for the method doesNotUnderstand:. As it turns out, Object implements doesNotUnderstand:. This method will create a new MessageNotUnderstood object which is capable of starting a Debugger in the current execution context.
Why do we take this convoluted path to handle such an obvious error? Well, this offers developers an easy way to intercept such errors and take alternative action. One could easily override the method `Object>>doesNotUnderstand:` in any subclass of `Object` and provide a different way of handling the error.
In fact, this can be an easy way to implement automatic delegation of messages from one object to another. A `Delegator` object could simply delegate all messages it does not understand to another object whose responsibility it is to handle them, or raise an error itself!
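A minimal sketch of such a delegator follows. The class `Delegator` and its instance variable `target` are hypothetical names invented for this example; the forwarding relies on `Message>>sendTo:`, which re-sends a reified message to another receiver.

```smalltalk
Object subclass: #Delegator
	instanceVariableNames: 'target'
	classVariableNames: ''
	package: 'Example'

Delegator >> target: anObject
	target := anObject

Delegator >> doesNotUnderstand: aMessage
	"Forward any message we do not understand to the target object."
	^ aMessage sendTo: target
```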
1.9 Shared variables
Now we will look at an aspect of Pharo that is not so easily covered by our five rules: shared variables.
Pharo provides three kinds of shared variables:
1. **Globally** shared variables.
2. **Class variables**: variables shared between instances and classes. (Not to be confused with class instance variables, discussed earlier).
3. **Pool variables**: variables shared amongst a group of classes.
The names of all of these shared variables start with a capital letter, to warn us that they are indeed shared between multiple objects.
**Global variables**
In Pharo, all global variables are stored in a namespace called `Smalltalk`, which is implemented as an instance of the class `SystemDictionary`. Global variables are accessible everywhere. Every class is named by a global variable. In addition, a few globals are used to name special or commonly useful objects.
The variable Processor names an instance of ProcessorScheduler, the main process scheduler of Pharo.
```
Processor class
>>> ProcessorScheduler
```
Other useful global variables
**Smalltalk** is the unique instance of SmalltalkImage. It provides much of the functionality used to manage the system. In particular, it holds a reference to the main namespace, Smalltalk globals. This namespace includes Smalltalk itself, since it is a global variable. The keys of this namespace are the symbols that name the global objects in Pharo code. So, for example:
```
Smalltalk globals at: #Boolean
>>> Boolean
```

Since Smalltalk is itself a global variable:

```
Smalltalk globals at: #Smalltalk
>>> Smalltalk
(Smalltalk globals at: #Smalltalk) == Smalltalk
>>> true
```
**World** is an instance of PasteUpMorph that represents the screen. World bounds answers a rectangle that defines the whole screen space; all Morphs on the screen are submorphs of World.
**ActiveHand** is the current instance of HandMorph, the graphical representation of the cursor. ActiveHand’s submorphs hold anything being dragged by the mouse.
**Undeclared** is another dictionary, which contains all the undeclared variables. If you write a method that references an undeclared variable, the browser will normally prompt you to declare it, for example as a global or as an instance variable of the class. However, if you later delete the declaration, the code will then reference an undeclared variable. Inspecting Undeclared can sometimes help explain strange behaviour!
Using globals in your code
The recommended practice is to strictly limit the use of global variables. It is usually better to use class instance variables or class variables, and to provide class methods to access them. Indeed, if Pharo were to be implemented from scratch today, most of the global variables that are not classes would be replaced by singletons.
The usual way to define a global is just to perform Do it on an assignment to a capitalized but undeclared identifier. The parser will then offer to declare the
Figure 1.5: Instance and class methods accessing different variables
global for you. If you want to define a global programmatically, just execute Smalltalk globals at: #AGlobalName put: nil. To remove it, execute Smalltalk globals removeKey: #AGlobalName.
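Putting these together (using a made-up global name for illustration):

```smalltalk
Smalltalk globals at: #AGlobalName put: 42.   "declare the global and bind it"
Smalltalk globals at: #AGlobalName.           "access it, answering 42"
Smalltalk globals removeKey: #AGlobalName.    "remove it again"
```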
Class variables
Sometimes we need to share some data amongst all the instances of a class and the class itself. This is possible using class variables. The term class variable indicates that the lifetime of the variable is the same as that of the class. However, what the term does not convey is that these variables are shared amongst all the instances of a class as well as the class itself, as shown in Figure 1.5. Indeed, a better name would have been shared variables since this expresses more clearly their role, and also warns of the danger of using them, particularly if they are modified.
In Figure 1.5 we see that rgb and cachedDepth are instance variables of Color, hence only accessible to instances of Color. We also see that superclass, subclass, methodDict and so on are class instance variables, i.e., instance variables only accessible to the Color class.
But we can also see something new: ColorRegistry and CachedColormaps are class variables defined for Color. The capitalization of these variables gives us a hint that they are shared. In fact, not only may all instances of Color access these shared variables, but also the Color class itself, and any of its subclasses. Both instance methods and class methods can access these shared variables.
A class variable is declared in the class definition template. For example, the class Color defines a large number of class variables to speed up color creation; its definition is shown below.
```smalltalk
Object subclass: #Color
instanceVariableNames: 'rgb cachedDepth cachedBitPattern alpha'
classVariableNames: 'BlueShift CachedColormaps ColorRegistry ComponentMask GrayToIndexMap GreenShift HalfComponentMask'
```
The class variable ColorRegistry is an instance of IdentityDictionary containing the frequently-used colors, referenced by name. This dictionary is shared by all the instances of Color, as well as the class itself. It is accessible from all the instance and class methods.
Class initialization
The presence of class variables raises the question: how do we initialize them?
One solution is lazy initialization (discussed earlier in this chapter). This can be done by introducing an accessor method which, when executed, initializes the variable if it has not yet been initialized. This implies that we must use the accessor all the time and never use the class variable directly. This furthermore imposes the cost of the accessor send and the initialization test. It also arguably defeats the point of using a class variable, since in fact it is no longer shared.
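As an illustration, a lazy accessor for the ColorRegistry class variable might look like the following sketch. The selector `colorRegistry` is our invention; the actual Color class in Pharo is organised differently.

```smalltalk
Color class >> colorRegistry
	"Lazily initialize the shared registry on first access."
	^ ColorRegistry ifNil: [ ColorRegistry := IdentityDictionary new ]
```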
Another solution is to override the class method initialize (we’ve seen this before in the Dog example).
```
Color class >> initialize
...
self initializeColorRegistry.
...
```
If you adopt this solution, you will need to remember to invoke the initialize method after you define it (by evaluating Color initialize). Although class side initialize methods are executed automatically when code is loaded into memory (from a Monticello repository, for example), they are not executed automatically when they are first typed into the browser and compiled, or when they are edited and re-compiled.
Pool variables
Pool variables are variables that are shared between several classes that may not be related by inheritance. Pool variables were originally stored in pool dictionaries; now they should be defined as class variables of dedicated classes (subclasses of SharedPool). Our advice is to avoid them; you will need them only in rare and specific circumstances. Our goal here is therefore to explain pool variables just enough so that you can understand them when you are reading code.
A class that accesses a pool variable must mention the pool in its class definition. For example, the class Text indicates that it is using the pool dictionary TextConstants, which contains all the text constants such as CR and LF. This
dictionary has a key #CR that is bound to the value Character cr, i.e., the carriage return character.
```smalltalk
ArrayedCollection subclass: #Text
	instanceVariableNames: 'string runs'
	classVariableNames: ''
	poolDictionaries: 'TextConstants'
	package: 'Collections-Text'
```
This allows methods of the class Text to access the keys of the dictionary in the method body directly, i.e., by using variable syntax rather than an explicit dictionary lookup. For example, we can write the following method.
```smalltalk
Text >> testCR
	^ CR == Character cr
```
Once again, we recommend that you avoid the use of pool variables and pool dictionaries.
1.10 Internal object implementation note
Here is an implementation note for readers who really want to go deep into the way Pharo represents objects internally. The implementation distinguishes between two different kinds of objects:

1. Objects with zero or more fields that are passed by reference and exist on the Pharo heap.
2. Immediate objects that are passed by value. Depending on the version, these are a range of the integers called SmallInteger, all Character objects, and possibly a sub-range of 64-bit floating-point numbers called SmallFloat64. In the implementation, such an immediate object occupies an object pointer, most of whose bits encode the immediate's value while some of the bits encode the object's class.
The first kind of object, an ordinary object, comes in a number of varieties:
1. Normal objects that have zero or more named instance variables, such as Point which has an x and a y instance variable. Each instance variable holds an object pointer, which can be a reference to another ordinary object or an immediate.
2. Indexable objects like arrays that have zero or more indexed instance variables numbered from 1 to N. Each indexed instance variable holds an object pointer, which can be a reference to another ordinary object or an immediate. Indexable objects are accessed using the messages at: and at:put:. For example ((Array new: 1) at: 1 put: 2; at: 1) answers 2.
3. Objects like Closure or Context that have both named instance variables and indexed instance variables. In the object, the indexed instance variables follow the named instance variables.
4. Objects like ByteString or Bitmap that have indexed instance variables numbered from 1 to N that contain raw data. Each datum may occupy 8, 16 or 32-bits, depending on its class definition. The data can be accessed as either integers, characters or floating-point numbers, depending on how methods at: and at:put: are implemented. The at: and at:put: methods convert between Pharo objects and raw data, hiding the internal representation, but allowing the system to represent efficiently data such as strings, and bitmaps.
The beauty of Pharo is that you normally don't need to care about the differences between these kinds of objects.
1.11 Chapter summary
The object model of Pharo is both simple and uniform. Everything is an object, and pretty much everything happens by sending messages.
- Everything is an object. Primitive entities like integers are objects, but also classes are first-class objects.
- Every object is an instance of a class. Classes define the structure of their instances via private instance variables and the behaviour of their instances via public methods. Each class is the unique instance of its metaclass. Class variables are private variables shared by the class and all the instances of the class. Classes cannot directly access instance variables of their instances, and instances cannot access instance variables of their class. Accessors must be defined if this is needed.
- Every class has a superclass. The root of the single inheritance hierarchy is ProtoObject. Classes you define, however, should normally inherit from Object or its subclasses. There is no syntax for defining abstract classes. An abstract class is simply a class with an abstract method (one whose implementation consists of the expression self subclassResponsibility). Although Pharo supports only single inheritance, it is easy to share implementations of methods by packaging them as traits.
- Everything happens by sending messages. We do not call methods, we send messages. The receiver then chooses its own method for responding to the message.
- Method lookup follows the inheritance chain; self sends are dynamic and start the method lookup in the class of the receiver, whereas super sends start the method lookup in the superclass of the class in which the super send is written. From this perspective, super sends are more static than self sends.
- There are three kinds of shared variables. Global variables are accessible everywhere in the system. Class variables are shared between a class, its subclasses and its instances. Pool variables are shared between a
selected set of classes. You should avoid shared variables as much as possible.
On Evidence Preservation Requirements for Forensic-Ready Systems
Conference or Workshop Item
© 2017 ACM
Version: Accepted Manuscript
Link(s) to article on publisher’s website:
http://dx.doi.org/doi:10.1145/3106237.3106308
On Evidence Preservation Requirements for Forensic-Ready Systems
Dalal Alrajeh
Imperial College London
London, UK
Liliana Pasquale
University College Dublin
Dublin, Ireland
Bashar Nuseibeh
The Open University, UK, &
Lero, Ireland
ABSTRACT
Forensic readiness denotes the capability of a system to support digital forensic investigations of potential, known incidents by preserving in advance data that could serve as evidence explaining how an incident occurred. Given the increasing rate at which (potentially criminal) incidents occur, designing software systems that are forensic-ready can facilitate and reduce the costs of digital forensic investigations. However, to date, little or no attention has been given to how forensic-ready software systems can be designed systematically. In this paper we propose to explicitly represent evidence preservation requirements prescribing preservation of the minimal amount of data that would be relevant to a future digital investigation. We formalise evidence preservation requirements and propose an approach for synthesising specifications for systems to meet these requirements. We present our prototype implementation, based on a satisfiability solver and a logic-based learner, which we use to evaluate our approach, applying it to two digital forensic corpora. Our evaluation suggests that our approach preserves relevant data that could support hypotheses of potential incidents. Moreover, it enables significant reduction in the volume of data that would need to be examined during an investigation.
CCS CONCEPTS
• Software and its engineering → Requirements analysis; • Ap-
plied computing → Evidence collection, storage and analysis;
KEYWORDS
Forensic-ready systems, requirements, specification synthesis
ACM Reference Format:
Dalal Alrajeh, Liliana Pasquale, and Bashar Nuseibeh. 2017. On Evidence
Preservation Requirements for Forensic-Ready Systems. In Proceedings of
2017 11th Joint Meeting of the European Software Engineering Conference and
the ACM SIGSOFT Symposium on the Foundations of Software Engineering,
Paderborn, Germany, September 4–8, 2017 (ESEC/FSE ’17), 11 pages.
https://doi.org/10.1145/3106237.3106308
1 INTRODUCTION
Digital forensic investigations are concerned with the discovery,
collection, preservation, analysis, interpretation and presentation
of digital data from digital sources, for proof of incident and ulti-
mately for prosecution of criminal activity [16, 36]. Such data often
comprises log entries indicating the occurrence of events in the
digital sources placed within the environment in which an inci-
dent can occur. Despite the availability of digital forensics tools
tools are designed to be used only after an incident occurs and
an investigation commences. However, some of the relevant data
may not be available then because, for example, it was stored in
volatile memory or it has been intentionally tampered with by
an offender. Moreover, digital forensic tools do not select among
the data that might be relevant for investigating a specific incident,
thus requiring investigators to sift through large volumes of data
to determine what may be considered as relevant evidence. This
can be a cumbersome and an error-prone process [29, 54].
Software systems should be forensic-ready [45], i.e., able to sup-
port digital forensic investigations of potential, known incidents
by preserving in advance data that may serve as evidence explain-
ing how an incident occurred. Given the increasing rate at which
(potentially criminal) incidents occur, designing software systems
that are forensic-ready can facilitate and reduce the costs of digital
forensic investigations. However, existing research has provided
only generic guidelines capturing operational and infrastructural
capabilities for organisations to achieve forensic readiness [15, 42].
Little or no attention has been given to how forensic-ready systems
can be designed and verified systematically, nor to how to ensure
their suitability for the specific environments in which they will
be deployed [51]. Without a formal conceptualisation of a forensic-ready
system and a software design methodology for achieving it,
ensuring the soundness of any automated investigative process or
its outcome becomes difficult, or even impossible [43].
Forensic-ready systems should satisfy evidence preservation re-
quirements, i.e. ensure preservation of relevant and minimal evi-
dence. On the one hand, preservation of an excessive amount of
data often introduces resource and performance issues [40], and
increases the cognitive load on investigators who have to make
sense of a large data-set. On the other hand, preservation of an
insufficient amount of data provides an incomplete picture of how
an incident would have occurred, thus making way for misguided
decisions and potentially wrong convictions [12].
This paper addresses some of the challenges in the systematic
design of forensic-ready systems by making the following contribu-
tions: (i) precise definition of evidence preservation requirements
and the concepts upon which these requirements depend; (ii) a
method for generating preservation specifications that satisfy evi-
dence preservation requirements and (iii) a prototype tool for gen-
erating such specifications automatically, which we use to evaluate
our approach. More specifically, we first provide formal definitions
of the domain model of a forensic-ready system, including the environment in which incidents may occur and hypotheses about such incidents. We also formalise preservation specifications and requirements. We then present a synthesis approach that combines deductive reasoning and inductive learning to generate preservation specifications from a formal description of an environment and hypotheses. To the best of our knowledge, this is the first work that conceptualises the requirements of evidence preservation and demonstrates its benefits in reducing the data that needs to be examined during an investigation.
In this work, we assume a domain expert provides a set of correct incident hypotheses. To this end, s/he can follow a proper risk assessment methodology (using approaches such as [9]) over the threats/incidents considered most likely and highly critical. Specifications are expressed declaratively in Linear Temporal Logic (LTL) [39], a commonly used formalism for specifying system behaviour, and prescribe constraints over when events happening at the digital sources in the environment must be logged. We also assume that a designated software controller, the forensic readiness controller, is responsible for the enactment of the specification by interacting with the digital sources through a uniform interface. We evaluate our approach by applying it to two substantial, publicly available case studies. Our evaluation suggests that our approach preserves relevant data to explain potential incidents and enables significant reduction in the volume of data that would need to be examined during an investigation.
The rest of the paper is organised as follows. Section 2 presents a motivating example and an overview of our approach. Sections 3 and 4 formalise the domain model of a forensic-ready system and preservation specifications and requirements. Sections 5 and 6 explain our approach to generate specifications and its implementation. Section 7 presents the results of our evaluation. Section 8 gives an overview of related work, and Section 9 concludes.
2 MOTIVATING EXAMPLE AND OVERVIEW
We motivate our work using an example of a corporate fraud incident, inspired by the Galleon Group case [35]. We consider an environment (Fig. 1a) within an enterprise building, where two employees, bob and alice, work and are provided with laptops (m2 and m3, respectively) by the company. A sensitive document doc is stored on the server machine m1 located in the office r01. Access to r01 is controlled by an NFC reader (nfc) and is monitored by a CCTV camera (cctv). Both alice and bob are authorised to access r01 and to login to m1.
Suppose that a digital investigation related to the exfiltration of the doc is initiated. An investigator may hypothesise that the doc was copied onto a storage device mounted on m1. To verify this, she must first speculate the activities that may have occurred within the environment and reconstruct different possible scenarios based on these activities. A possible scenario is that alice enters room r01, logs into m1, mounts usb1 on m1 and copies doc onto usb1. Another is that bob enters room r01 but logs into m1 using alice’s credentials, and then copies the doc onto usb1.
Once the scenarios have been identified, the investigator must establish which of the digital devices holds data that might be relevant to the scenarios. Then s/he must search through the storage of these devices (e.g., logs for all readers and CCTVs, and hard drives for all computers) for relevant information. However, the investigator may fail to find information about storage devices mounted on m1 because the system log of m1 only retains information about the last device that was mounted. The large number of events that can occur also makes it infeasible to preserve all events in advance for later examination. For example, corporate computers can generate over 100 million events per week, while the number of accesses recorded by CCTV cameras and card readers can grow exponentially with the number of employees and rooms in a building. Moreover, not all the events are relevant to support the speculated scenarios. For instance, only copies of the doc taking place while a storage device is mounted and a user is logged on m1 are relevant for our example.
Our approach (Fig. 1b) aims to design a FR controller that receives events from digital sources within the environment and selectively preserves them in a secure storage. These events can be acquired and examined by an investigator during future digital investigations. We provide an automated approach (Specification Generation) to generate a preservation specification (PS) for the FR controller. We assume a domain expert (e.g., security administrator or software engineer) provides a description of the environment and a predefined set of hypotheses about incidents of concern. The description of the environment also includes information about what activities can be monitored by the digital sources. To generate a specification, we first check whether the hypotheses are feasible within the environment (i.e., they may hold if certain activities take place in the environment). If this is the case, the approach generates a set of possible sequences of low-level system events (called potential histories) that demonstrate this. The approach then verifies whether the existing specification already ensures the preservation of events that correspond to these generated histories. If not, then our approach inductively synthesises a preservation specification that configures when an event occurring in a digital source within
---
We estimated having 50 Events Per Second (EPS) during non-peak periods and 2500 EPS during peaks. An organisation that experiences peaks for 5% of the total time will therefore have an average of about 172 EPS (47.5 EPS contributed by non-peak periods and 125 EPS by peaks).
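The weighted average behind this estimate can be checked directly. A quick sketch (the 50/2500 EPS figures and the 5% peak fraction are the ones stated in the footnote):

```python
# Weighted average event rate: non-peak and peak periods contribute
# proportionally to the fraction of time spent in each.
non_peak_eps, peak_eps = 50, 2500
peak_fraction = 0.05

avg_eps = non_peak_eps * (1 - peak_fraction) + peak_eps * peak_fraction
print(avg_eps)  # 47.5 EPS from non-peak periods + 125 EPS from peaks = 172.5
```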
the considered environment should be preserved, according to their relevance to the hypotheses. For our motivating example, our approach would prescribe preservation of events indicating copies of the doc only if a storage device was previously mounted and a user has previously logged on m1. In the next two sections we define the artefacts which constitute the domain of a forensic-ready system (environment, histories and hypotheses) and their relation to preservation specifications and requirements.
3 FORENSIC DOMAIN MODEL
We provide here a formal underpinning of the concepts and terminology that are commonly used in digital forensic domain [11] to describe the environment in which an incident can occur, histories and the incident hypotheses.
3.1 Environment Description
The environment description is a set of descriptive statements about: (i) the context in which an incident may occur, (ii) the behaviour that may be exhibited within the environment and (iii) their interactions.
A context description C is a collection of descriptive (non-behavioural) declarations about the types (e.g., employees and locations), and instances of entities present in an environment (e.g., bob and r01), and relations between instances, such as bob being entitled to access r01 (hasBadge(bob, r01)).
Definition 3.1 (Context Description). A context description C is a tuple \((\mathcal{Y}, I, \gamma, K)\) where \(\mathcal{Y}\) is a set of types, \(I\) is a set of instances, \(\gamma : I \rightarrow \mathcal{Y}\) is a function that assigns each instance in \(I\) to its type in \(\mathcal{Y}\), and \(K\) is a set of context relations over instances in \(I\), such that for every \(k \in K\), \(k \subseteq I_1 \times \ldots \times I_n\).
We denote the universe of context relations as \(K\). A context relation literal is an expression of the form \(k \text{ or } \neg k\) for some \(k \in K\). Given a set of context relation literals KL, we write \(\alpha(KL)\) to denote the set of unique context relations in KL.
Returning to our running example, a context description in this case includes types such as \(\text{Emp}\) and \(\text{Comp}\), instances including bob and m1 with assignments \(\gamma(\text{bob}) = \text{Emp}\) and \(\gamma(m1) = \text{Comp}\), and context relations such as isLocatedIn(m1, r01), meaning that computer m1 is placed in location r01, and isStoredIn(doc, m1), meaning that doc is stored in m1.
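Definition 3.1 can be encoded directly as a data structure. The following is a minimal Python sketch (class, field, and method names are ours, not from the paper) instantiated for the running example:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of Definition 3.1: a set of types Y, a set of
# instances I, the typing function gamma, and context relations K as
# named sets of instance tuples.
@dataclass
class ContextDescription:
    types: set
    instances: set
    typing: dict                                   # gamma : I -> Y
    relations: dict = field(default_factory=dict)  # name -> set of tuples

    def holds(self, rel: str, *args: str) -> bool:
        """Check whether a context relation holds for the given instances."""
        return tuple(args) in self.relations.get(rel, set())

C = ContextDescription(
    types={"Emp", "Comp", "Loc", "Doc"},
    instances={"bob", "alice", "m1", "r01", "doc"},
    typing={"bob": "Emp", "alice": "Emp", "m1": "Comp",
            "r01": "Loc", "doc": "Doc"},
    relations={
        "isLocatedIn": {("m1", "r01")},
        "isStoredIn": {("doc", "m1")},
        "hasBadge": {("bob", "r01"), ("alice", "r01")},
    },
)
assert C.holds("hasBadge", "bob", "r01")
```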
A behavioural description B specifies the events that may occur within an environment. In a digital investigation setting, these may be at different levels of abstraction. Similar to [8, 11], we distinguish between two types of events to represent these levels of abstraction: primitive and complex. A primitive event represents the occurrence of an atomic action that can be observed by an investigator from a digital device (e.g., using a hard drive analysis tools). An example of primitive event can be swipe_card(alice,nfc) indicating the nfc reading alice’s card tag. A complex event indicates the execution of complex human activities and can involve one or more primitive events, other complex events and contextual conditions. For example, a complex event indicating alice entering room r01 involves the following primitive events: alice’s card tag being read by the nfc reader and her entrance in r01 being recorded by the cctv. These events can happen at the same time or in any order. The complex event also involves the following contextual conditions: alice is not inside r01 and does not possess a badge to access r01. This is expressed through a composite definition.
Definition 3.2 (Composite Definition). Let \(\mathcal{A}^p\) and \(\mathcal{A}^c\) be the universe of primitive and complex events, respectively. A composite definition \(d\) is a tuple \((A^p, A^c, KL, L, \leq, \lambda)\) where \(A^p \subseteq \mathcal{A}^p\) and \(A^c \subseteq \mathcal{A}^c\) are finite sets of primitive and complex events, \(KL\) is a set of context relation literals, \(L\) is a finite set of time-labels, \(\leq\) is a partial order relation over \(L\) (that is reflexive, anti-symmetric and transitive) and \(\lambda : L \rightarrow \mathcal{P}(A^p \cup A^c \cup KL)\) is a labelling function.
Let D be a set of composite definitions. We define a relation \(\alpha \subseteq \mathcal{A}^c \times D\) to associate a composite definition \(d \in D\) with the complex event \(e \in \mathcal{A}^c\) that it defines. In our example, the complex event enter(alice, r01) may be defined as enter(alice, r01) \(\triangleq\) \((\{\text{swipe\_card}(alice, nfc), \text{cctv\_access}(alice, r01, cctv1)\}, \emptyset, \{\neg \text{in}(alice, r01), \text{hasBadge}(alice, r01)\}, \{l_1, l_2\}, \emptyset, \{l_1 \mapsto \{\text{swipe\_card}(alice, nfc), \neg \text{in}(alice, r01), \text{hasBadge}(alice, r01)\}, l_2 \mapsto \{\text{cctv\_access}(alice, r01, cctv1), \neg \text{in}(alice, r01)\}\})\).
For our example, to trigger complex event enter(alice, r01), primitive events swipe_card(alice, nfc) and cctv_access(alice, r01, cctv1) can occur in any order; however, both have to occur. When swipe_card(alice, nfc) and cctv_access(alice, r01, cctv1) occur, alice should not be in r01. Moreover, alice should be authorised to access r01 (hasBadge) when swipe_card(alice, nfc) occurs. Similarly, complex event mount(usb1, m1) occurs when the system log in m1 records the mounting of a storage device (primitive event sys_mount(usb1, m1)) while alice or bob is logged on m1 (context condition logged(e, m1)). This is defined as mount(usb1, m1) \(\triangleq\) \((\{\text{sys\_mount}(usb1, m1)\}, \emptyset, \{\text{logged}(e, m1)\}, \{l_1\}, \emptyset, \{l_1 \mapsto \{\text{sys\_mount}(usb1, m1), \text{logged}(e, m1)\}\})\).
A behavioural description includes the composite definitions associated with the complex events that can occur in the environment. A behavioural description is formally defined as follows.
Definition 3.3 (Behavioural Description). A behavioural description B is a tuple \((\mathcal{A}^p, \mathcal{A}^c, K, D, \alpha)\) such that for every \(e \in \mathcal{A}^c\) there is a composite definition \(d = (A^p_d, A^c_d, KL_d, L_d, \leq_d, \lambda_d) \in D\) with \((e, d) \in \alpha\), \(A^p_d \subseteq \mathcal{A}^p\), \(A^c_d \subseteq \mathcal{A}^c\) and \(\alpha(KL_d) \subseteq K\).
Complex events are expected to interact and bring about changes to the context in which they occur. To capture this effect, we adopt notions of fluents for event-driven systems [22, 31]. Given the set K of context relations, each \(k \in K\) is defined by two disjoint sets of complex events from \(\mathcal{A}^c\) (called initiating and terminating events, respectively) and an initial value (true or false), written according to the following schema: \(k \equiv \langle \text{IN}_k, \text{TR}_k, \text{init}_k \rangle\) such that \(\text{IN}_k \cap \text{TR}_k = \emptyset\) and \(\text{IN}_k \cup \text{TR}_k \subseteq \mathcal{A}^c\). The set of associations of this form are called interaction definitions, and are denoted I. In our running example the interaction definition of context relation in(alice, r01) is \(\langle \{\text{enter}(alice, r01)\}, \{\text{exit}(alice, r01)\}, \text{false} \rangle\). This indicates that context relation in(alice, r01) is initially false; it is initiated by complex event enter(alice, r01) and terminated by complex event exit(alice, r01).
From C, B and I we define an environment description.
Definition 3.4 (Environment Description). An environment description \(\mathcal{E}\) is a tuple \((C, B, I)\) where \(C = (\mathcal{Y}, I, \gamma, K)\) is a context description, \(B = (\mathcal{A}^p, \mathcal{A}^c, K, D, \alpha)\) a behavioural description and I is a set of interaction definitions such that for every \(k \in K\) there is a definition \(k \equiv \langle \text{IN}_k, \text{TR}_k, \text{init}_k \rangle\) in I with \(\text{IN}_k \cup \text{TR}_k \subseteq \mathcal{A}^c\).
3.2 Histories
A history is a sequence of (concurrent) events that captures the evolution of an environment in which the digital devices and evidence sources operate [11]. It is potential if it refers to at least one event that has been speculated and actual if all the events have been observed from digital sources within the environment. In this paper, we focus on potential histories for defining preservation requirements.
A history may describe events at various levels. It is called a primitive (resp. complex) history, denoted σ (resp. ω), if all the events that appear in it are primitive (resp. complex). We write ω ≡ ce_1...ce_n to denote a complex history where ce_i is the set of complex events occurring concurrently at position i, and similarly for a primitive history σ.
An environment description \(\mathcal{E}\) is interpreted over a sequence of primitive and complex events (referred to as a hybrid history \(v\)). Its satisfaction is determined with respect to the satisfaction of the complex events' composite definitions in \(v\) w.r.t. I.
For the satisfaction of an event's composite definition, we consider the notion of a 'narration' (a total order over the partial order given in a complex event's definition). For a narration to be constructed, each complex event appearing in a definition is refined until all complex events are reduced to their primitive events and context relation literals. The result of this refinement procedure applied to definition d is a set of composite definitions \(\delta(d)\).
A narration of d is captured with respect to one of the elements in \(\delta(d)\). We will use the notation \(v|_{\mathcal{A}^p}\) (resp. \(v|_{\mathcal{A}^c}\)) to denote the projection of \(v\) over primitive (resp. complex) events in \(\mathcal{A}^p\) (resp. \(\mathcal{A}^c\)).
Definition 3.5 (Narration of Composite Definition). Let \(B = (\mathcal{A}^p, \mathcal{A}^c, K, D, \alpha)\) be a behavioural description and \(d = (A^p_d, A^c_d, KL_d, L_d, \leq_d, \lambda_d)\) a composite definition in \(D\). Let \(\delta(d)\) be the set of definitions obtained by refining \(d\). A narration of \(d\) is a hybrid history \(v = he_1 \ldots he_m\) if there exists a \(d' \in \delta(d)\) and a total order \(l_1 < \ldots < l_n\) over \(L_{d'}\), with each \(l_i\) assigned a position \(a_i\) (where \(1 \leq a_i \leq m\)), such that:
- for all \(l_i, l_j \in L_{d'}\), if \(l_i < l_j\) then \(a_i < a_j\);
- \(\lambda_{d'}(l_i)|_{A^p} = (he_{a_i})|_{A^p}\);
- \(\lambda_{d'}(l_i)|_{A^c} = (he_{a_i})|_{A^c}\),
where \(\lambda(l)|_{A^p}\) and \(\lambda(l)|_{A^c}\) denote the sets of primitive events and complex events, respectively, assigned to time-label \(l\).
For instance, the following are three example narrations for enter(alice, r01)'s composite definition:

\(v_1 = \langle \{\text{swipe\_card}(alice, nfc), \text{cctv\_access}(alice, r01, cctv1), \text{enter}(alice, r01)\} \rangle\)

\(v_2 = \langle \{\text{swipe\_card}(alice, nfc)\}, \{\text{cctv\_access}(alice, r01, cctv1), \text{enter}(alice, r01)\} \rangle\)

\(v_3 = \langle \{\text{cctv\_access}(alice, r01, cctv1)\}, \{\text{swipe\_card}(alice, nfc), \text{enter}(alice, r01)\} \rangle\)
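Since a narration corresponds to a total order consistent with a definition's partial order over time-labels, candidate narrations can be enumerated as linear extensions of that order. A small illustrative sketch (function names are ours; brute-force enumeration, so only suitable for small label sets):

```python
from itertools import permutations

def linear_extensions(labels, order):
    """Yield total orders over `labels` consistent with the strict
    partial order `order`, given as a set of (before, after) pairs."""
    for perm in permutations(labels):
        pos = {l: i for i, l in enumerate(perm)}
        if all(pos[a] < pos[b] for (a, b) in order):
            yield perm

# enter(alice, r01): two time-labels with no ordering constraints, so
# both interleavings are admissible, matching narrations v_2 and v_3
# above (v_1 is the case where both labels map to the same position).
exts = list(linear_extensions(["l1", "l2"], set()))
assert ("l1", "l2") in exts and ("l2", "l1") in exts
```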
Interaction definitions are interpreted over complex histories. Given \(k \equiv \langle \text{IN}_k, \text{TR}_k, \text{init}_k \rangle \in I\), \(k\) is true at position \(b\) in a complex history \(\omega = ce_1 \ldots ce_n\) if either of the following holds:
- \(\text{init}_k = \text{true}\) and for all \(e_{TR} \in \text{TR}_k\) and all \(a\) with \(0 < a < b\), \(e_{TR} \notin ce_a\);
- there exist \(a < b\) and \(e_{IN} \in \text{IN}_k\) such that \(e_{IN} \in ce_a\), and for all \(e_{TR} \in \text{TR}_k\) and all \(g\) with \(a < g < b\), \(e_{TR} \notin ce_g\);
otherwise it is said to be false. We assume histories in which terminating and initiating events for a context relation do not occur concurrently. We define below satisfaction of a complex event.
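The two clauses above are the usual fluent-style semantics: a relation holds if it was initially true and never terminated, or was initiated and not terminated since. A minimal sketch (hypothetical encoding; positions are 0-indexed here rather than 1-indexed as in the definition):

```python
def holds_at(fluent, history, b):
    """True iff `fluent` = (initiating, terminating, initially) holds
    just before position b of the complex history, represented as a
    list of sets of concurrently occurring complex events."""
    initiating, terminating, initially = fluent
    value = initially
    for events in history[:b]:
        if events & initiating:     # an initiating event occurred
            value = True
        elif events & terminating:  # a terminating event occurred
            value = False
    return value

# Interaction definition of in(alice, r01) from the running example.
in_alice_r01 = ({"enter(alice,r01)"}, {"exit(alice,r01)"}, False)
history = [{"enter(alice,r01)"}, {"login(bob,m1)"}, {"exit(alice,r01)"}]
assert not holds_at(in_alice_r01, history, 0)  # initially false
assert holds_at(in_alice_r01, history, 1)      # after enter
assert not holds_at(in_alice_r01, history, 3)  # after exit
```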
Definition 3.6 (Complex Event Satisfaction). Given an environment description \(\mathcal{E} = (C, B, I)\), a composite definition \(d\) associated with complex event \(e\) (i.e., \((e, d) \in \alpha\)) and a hybrid history \(v\), \(v\) is said to satisfy \(e\) with respect to \(\mathcal{E}\) if for every decomposition \(v = xyz\), if \(y = he_a \ldots he_b\) is a narration of \(d\) with respect to some \(d' \in \delta(d)\) and order \(l_1 < \ldots < l_n\), then:
- \(e \in (he_b)|_{\mathcal{A}^c}\);
- if \(kl \in \lambda_{d'}(l_i)|_{KL}\) and \(l_i\) is assigned position \(g\), then \(v|_{\mathcal{A}^c}, g \models kl\),
where \(\lambda(l)|_{KL}\) denotes the set of context relation literals assigned to time-label \(l\).
The environment description \(\mathcal{E}\) is said to be satisfied in a hybrid history if every complex event in \(\mathcal{A}^c\) is satisfied in that history. We write \(\Upsilon(\mathcal{E})\) to denote the set of hybrid histories that satisfy \(\mathcal{E}\).
Figure 2 shows a hybrid history satisfying \(\mathcal{E}\) for our example. We project the primitive and complex events composing the hybrid history onto primitive and complex histories, respectively. The primitive history represents the case in which (1) the nfc reads alice's card tag, (2) the cctv records alice passing through the door of r01, and m1 logs (3) the login performed by user bob, (4) the mount of storage device usb1, (5) the copy of the doc by user bob, and (6) the unmount of usb1. If a portion of the primitive history represents a narration of a composite definition associated with a complex event, such event is assumed to occur. For example, the sequence of primitive events at times 1 and 2 corresponds to narration \(v_2\) of the composite definition of enter(alice, r01).
The primitive event at time 4 represents a narration for the composite definition of event mount(usb1, m1). At the top of Figure 2 we indicate the context relations that hold at each time instant and omit those that are not satisfied; e.g., mounted(usb1, m1) starts holding when complex event mount(usb1, m1) occurs and stops holding when complex event unmount(usb1, m1) happens.
3.3 Hypotheses
The term hypothesis in a digital investigation is a conjecture that may refer, for instance, to past events in the lifetime of digital devices [52]. In this paper, we focus on one type of hypothesis
relevant to developing forensic-ready systems: the environment construction hypothesis. This form of hypothesis postulates about the feasibility of events' occurrence and the presence of contextual conditions of interest within the environment. It may be captured as an event's composite definition \(h \triangleq d\), where \(h\) is a complex event marking the satisfaction of a hypothesis, and with \(A^p\) in \(d\) being empty and \(A^c\) and \(KL\) containing only complex events and context relation literals, respectively. The events and the contextual conditions expressed in the hypothesis represent how an incident may occur within the environment. The incident of our example refers to the unauthorised extraction of the sensitive document doc. One way in which the doc can be extracted is by an unauthorised copy to an external storage device. This hypothesis is defined as
\[
\text{IllegalCopy} \triangleq (\emptyset, \{\text{copy}(bob, doc, m1)\}, \{\text{mounted}(usb1, m1)\}, \{l_1\}, \emptyset, \{l_1 \mapsto \{\text{copy}(bob, doc, m1), \text{mounted}(usb1, m1)\}\})
\]
In other words, a copy of the document is performed while an external storage device (usb1) is mounted on m1.
Hypotheses are interpreted over finite complex histories. Their satisfaction is given by the definition below.
Definition 3.7 (Hypotheses Satisfaction). A hypothesis \(h\) (with definition \(h \triangleq d\)) is said to be satisfied in a complex history \(\omega\) at position \(b\), i.e., \(\omega, b \models h\), if there exists a decomposition \(\omega = xyz\) such that \(y = ce_a \ldots ce_b\) is a narration of \(d\) with respect to some \(d' \in \delta(d)\) and order \(l_1 < \ldots < l_n\), and if \(kl \in \lambda_{d'}(l_i)|_{KL}\) with \(l_i\) assigned position \(g\), then \(\omega, g \models kl\).
We distinguish between supportable and refutable hypotheses in environment \( E \).
Definition 3.8 (Hypotheses Supportability and Refutability). Let \(\Upsilon(\mathcal{E})\) be the set of hybrid histories satisfying \(\mathcal{E}\). A hypothesis \(h\) (with definition \(h \triangleq d\)) is said to be supportable in \(\mathcal{E}\) if there exists a hybrid history \(v \in \Upsilon(\mathcal{E})\) such that for some \(b\), \(v|_{\mathcal{A}^c}, b \models h\). It is said to be refutable if there exists a history \(v \in \Upsilon(\mathcal{E})\) such that for all \(b\), \(v|_{\mathcal{A}^c}, b \not\models h\).
We sometimes abstract away from the position \(b\) and write \(v|_{\mathcal{A}^c} \models h\) for a history satisfying \(h\) at some position \(b\). We denote the set of hybrid histories in \(\Upsilon(\mathcal{E})\) supporting at least one hypothesis in \(H\) as \(\Upsilon^{+}(\mathcal{E})\), and the set of those refuting every hypothesis in \(H\) as \(\Upsilon^{-}(\mathcal{E})\). Returning to our example, the IllegalCopy hypothesis is supportable in our example environment \(\mathcal{E}\) since there exists a decomposition of a hybrid history (see Figure 2) satisfying \(\mathcal{E}\) that yields a narration of the definition of the IllegalCopy hypothesis, i.e.,
\[
x = \langle \{\text{swipe\_card}(alice, nfc)\}, \{\text{cctv\_access}(alice, r01, cctv1), \text{enter}(alice, r01)\}, \{\text{sys\_login}(bob, m1)\}, \{\text{sys\_mount}(usb1, m1)\} \rangle, \quad y = \langle \{\text{sys\_copy}(bob, doc, m1), \text{copy}(bob, doc, m1)\} \rangle
\]
such that \(v|_{\mathcal{A}^c}, 5 \models \text{mounted}(usb1, m1)\).
As we will see later in Section 4, we are interested in minimal hybrid histories that satisfy a hypothesis. We define minimality of histories with respect to hypotheses as follows.
Definition 3.9 (Minimally Supportive Histories). Let \(\Upsilon(\mathcal{E})\) be the set of hybrid histories satisfying \(\mathcal{E}\) and \(h\) a hypothesis supportable in \(\mathcal{E}\). The hybrid history \(v = he_1 \ldots he_m \in \Upsilon^{+}(\mathcal{E})\) is said to be minimally supportive of \(h\) in \(\mathcal{E}\) iff every history \(v'\) obtained by removing any primitive event \(a \in A^p\) from some \((he_i)|_{\mathcal{A}^p}\) of \(v\), if it is in \(\Upsilon(\mathcal{E})\), no longer supports \(h\).
For instance, the hybrid history in Figure 2 is not minimally supportive of the hypothesis IllegalCopy since the history obtained by removing \(\text{sys\_unmount}(usb1, m1)\) from it still satisfies IllegalCopy. We sometimes write \(\text{min}(v, h)\) (resp. \(\text{min}(\Upsilon^{+}(\mathcal{E}), h)\)) as a shorthand for the minimally supportive history (resp. histories) of \(h\) obtained from \(v\) (resp. \(\Upsilon^{+}(\mathcal{E})\)).
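A minimally supportive history can be computed greedily by attempting to delete each primitive event and re-checking support. A sketch under stated assumptions: `supports` is a hypothetical oracle abstracting both membership in \(\Upsilon(\mathcal{E})\) and satisfaction of the hypothesis (which the paper computes with its solver-based machinery, not shown here):

```python
def minimise(history, supports):
    """Greedily remove primitive events from `history` (a list of sets)
    while `supports(history)` stays True; the fixpoint is a history in
    which every remaining event is needed for support."""
    hist = [set(s) for s in history]
    changed = True
    while changed:
        changed = False
        for events in hist:
            for a in list(events):
                events.discard(a)
                if supports(hist):
                    changed = True   # a was redundant: keep it removed
                else:
                    events.add(a)    # a is needed: put it back
    return hist

# Toy oracle: the history supports the hypothesis iff a copy event remains.
supports = lambda h: any("sys_copy" in a for s in h for a in s)
hist = [{"sys_mount(usb1,m1)"}, {"sys_copy(bob,doc,m1)"},
        {"sys_unmount(usb1,m1)"}]
assert minimise(hist, supports) == [set(), {"sys_copy(bob,doc,m1)"}, set()]
```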
4 PRESERVATION SPECIFICATIONS
We are concerned with deriving specifications PS for a forensic readiness controller comprising domain pre- and post-conditions as well as required pre- and trigger-conditions, expressed in LTL. These conditions control the execution of operations of the form \( \text{preserve} \ (a, ts) \) where \( a \) indicates the occurrence of a primitive event in the environment, and \( ts \) marks the time-stamp (from the system clock) at which the occurrence was observed by the FR controller. We consider \( ts \) to be an abstraction over real-time clock variables that may be obtained following techniques such as [13, 28]. The generation of such abstractions is outside the scope of the paper. We assume an ordered set of timestamps to be isomorphic to the set of natural numbers.
The domain pre-condition of operation \(\text{preserve}(a, ts)\) specifies that this operation cannot take place if the occurrence of event \(a\) at \(ts\) has already been preserved. The domain post-condition specifies that operation \(\text{preserve}(a, ts)\) ensures preservation of the occurrence of event \(a\) at \(ts\) in the next time instant. We assume these are given for each operation. Assertions 1 and 2 specify, respectively, the domain pre- and post-conditions of operation \(\text{preserve}(\text{sys\_copy}(e, d, m), ts)\):
\[
\begin{align*}
\forall ts &: \text{Timestamp}, \; e : \text{Emp}, \; d : \text{Doc}, \; m : \text{Comp} \\
&\mathbf{G}\left(\text{preserved}(\text{sys\_copy}(e, d, m), ts) \rightarrow \neg\,\text{preserve}(\text{sys\_copy}(e, d, m), ts)\right) \tag{1} \\
&\mathbf{G}\left(\text{preserve}(\text{sys\_copy}(e, d, m), ts) \rightarrow \mathbf{X}\,\text{preserved}(\text{sys\_copy}(e, d, m), ts)\right) \tag{2}
\end{align*}
\]
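Over a finite trace, these two assertions reduce to simple position-wise checks. A minimal sketch, assuming a hypothetical trace encoding (at each instant, the set of `preserve` operations performed and the set of already-preserved facts):

```python
def respects_domain_conditions(trace):
    """trace: list of (preserve_ops, preserved_facts) pairs, one per
    instant; each element of either set is an (event, timestamp) pair.
    Checks assertions (1) and (2), reading `preserved` as persistent."""
    for i, (ops, facts) in enumerate(trace):
        # (1): never re-preserve an already-preserved occurrence.
        if ops & facts:
            return False
        # (2): a preserve at instant i yields preserved from i+1 onwards.
        for later in range(i + 1, len(trace)):
            if not ops <= trace[later][1]:
                return False
    return True

ok = [({("sys_copy", 4)}, set()), (set(), {("sys_copy", 4)})]
bad = [({("sys_copy", 4)}, set()), ({("sys_copy", 4)}, {("sys_copy", 4)})]
assert respects_domain_conditions(ok)
assert not respects_domain_conditions(bad)
```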
Required pre-conditions are assertions that condition the execution of \( \text{preserve} \ (a, ts) \) upon having received notification about the occurrence of a primitive event in the environment, \( \text{receive} \ (a, ts) \). Required trigger-conditions are conditions upon the (non-)preservation of other primitive events. An example of a preservation specification of operation \( \text{preserve} \ (\text{sys_copy}(e, d, m), ts) \) is
\[
\begin{align*}
\forall ts &: \text{Timestamp}, \; e : \text{Emp}, \; d : \text{Doc}, \; m : \text{Comp} \\
&\mathbf{G}\left(\neg\,\text{received}(\text{sys\_copy}(e, d, m), ts) \rightarrow \neg\,\text{preserve}(\text{sys\_copy}(e, d, m), ts)\right) \tag{3}
\end{align*}
\]
\[
\begin{align*}
\forall ts, ts_1, ts_2 &: \text{Timestamp}, \; e : \text{Emp}, \; d : \text{Doc}, \; u : \text{Storage}, \; m : \text{Comp} \\
&\mathbf{G}\big(\big(\text{preserved}(\text{sys\_login}(e, m), ts_1) \land \text{preserved}(\text{sys\_mount}(u, m), ts_2) \\
&\quad \land \neg \exists ts_3 > ts_1 .\; \text{preserved}(\text{sys\_logout}(e, m), ts_3) \\
&\quad \land \neg \exists ts_4 > ts_2 .\; \text{preserved}(\text{sys\_unmount}(u, m), ts_4) \\
&\quad \land \text{received}(\text{sys\_copy}(e, d, m), ts)\big) \rightarrow \mathbf{X}\,\text{preserve}(\text{sys\_copy}(e, d, m), ts)\big) \tag{4}
\end{align*}
\]
Assertion 3 specifies the required pre-condition, i.e., receiving notification of the occurrence of \( \text{sys_copy}(e, d, m) \). Assertion 4 expresses
a trigger-condition forcing the FR controller to preserve occurrence of \( \text{sys\_copy}(e, d, m) \) if it has already preserved information about an employee’s logging onto a computer and the mounting of a storage device on that computer, but no subsequent occurrence about the employee logging out or unmounting of the storage device is recorded. The preservation specification \( PS \) defines a FR controller’s storage capacities as a set of executable sequences of preserve operations of the form
\[
\pi = \big\langle \{\text{preserve}(a_1^1, ts_1), \ldots, \text{preserve}(a_1^{k_1}, ts_1)\}, \ldots, \{\text{preserve}(a_m^1, ts_m), \ldots, \text{preserve}(a_m^{k_m}, ts_m)\} \big\rangle
\]
We say that \(\pi\) is a potential log if \(\pi\) satisfies \(PS\) (according to the standard trace semantics of LTL) and, for each \(\text{preserve}(a^j, ts_j) \in \pi(j)\), its required trigger-condition is non-vacuously satisfied in \(\pi\) at position \(j\). The notation \(\pi(j)\) indicates the set of operations that occur at position \(j\).
We restrict our definition of preservation specifications to those that ensure the minimality and relevance of all potential logs to hypotheses under consideration. Such a specification is referred to as forensic-ready. To express forensic-ready preservation specifications, we first consider the notion of a specification covering potential histories.
Definition 4.1 (Specification Coverage). Let \(PS\) be a preservation specification and \(v = he_1 \ldots he_n\) a history. Then \(PS\) is said to cover \(v\) iff there exists a potential log \(\pi = \langle f_1, \ldots, f_n \rangle \in \Pi(PS)\) isomorphic to \(v|_{\mathcal{A}^p}\), i.e., for every primitive event \(a \in (he_i)|_{\mathcal{A}^p}\), \(\text{preserve}(a, ts_i) \in f_i\).
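Coverage amounts to a position-wise comparison of a potential log with the primitive projection of the history. A minimal sketch (hypothetical encoding; we read the isomorphism as an exact per-position correspondence between projected events and preserved events):

```python
def covers(log, history, primitive_events):
    """log: list of sets of (event, ts) preserve operations;
    history: list of sets of events (hybrid history);
    primitive_events: the universe A^p used for the projection."""
    if len(log) != len(history):
        return False
    for ops, events in zip(log, history):
        projected = events & primitive_events   # v restricted to A^p
        preserved = {e for (e, ts) in ops}
        if projected != preserved:
            return False
    return True

AP = {"sys_mount(usb1,m1)", "sys_copy(bob,doc,m1)"}
history = [{"sys_mount(usb1,m1)", "mount(usb1,m1)"},
           {"sys_copy(bob,doc,m1)", "copy(bob,doc,m1)"}]
log = [{("sys_mount(usb1,m1)", 1)}, {("sys_copy(bob,doc,m1)", 2)}]
assert covers(log, history, AP)
```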
The isomorphism with respect to potential histories guarantees the preservation of events related to an incident. Furthermore, since the isomorphism is defined with respect to minimally supportive histories of hypotheses in \( \mathcal{H} \), this ensures minimality of preserved event occurrences. Together with the requirement for hypotheses in \( \mathcal{H} \) to be refutable by the potential histories in \( \mathcal{E}^{-}(\mathcal{E}) \), it also supports the relevance of events stored through preserve operations.
**Definition 4.2.** (Forensic-ready Specification) Let \( \mathcal{E} \) be an environment description and \( \mathcal{H} \) a set of hypotheses that are both supportable and refutable in \( \mathcal{E} \), by \( \mathcal{E}^{+}(\mathcal{E}) \) and \( \mathcal{E}^{-}(\mathcal{E}) \) respectively. Let \( PS \) be a preservation specification. Then \( PS \) is said to be forensic-ready with respect to \( \mathcal{H} \) in \( \mathcal{E} \) iff \( PS \) covers every history in \( \mathcal{E}^{+}(\mathcal{E}) \) and does not cover any history in \( \mathcal{E}^{-}(\mathcal{E}) \).
Any FR controller whose specification is forensic-ready with respect to \( \mathcal{H} \) in \( \mathcal{E} \) is sufficient to guarantee evidence preservation requirements of relevance and minimality.
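Over finite, explicitly enumerated sets, Definitions 4.1 and 4.2 reduce to set inclusions checked position by position. The sketch below is our own illustration (histories and logs are tuples of sets, positions align one-to-one, timestamps omitted), not the tool's implementation.

```python
# Our own finite illustration of Definitions 4.1 and 4.2: a history is a
# tuple of sets of primitive events, a potential log is a tuple of sets of
# preserved events, and positions align one-to-one (timestamps omitted).

def covers(log, history):
    """PS covers `history` via `log` if every primitive event occurring at
    position i of the history is preserved at position i of the log."""
    return (len(log) == len(history)
            and all(events <= preserved
                    for events, preserved in zip(history, log)))

def forensic_ready(potential_logs, supportive, refuting):
    """Every minimally supportive history must be covered by some potential
    log, while no refuting history may be covered by any of them."""
    covered_all = all(any(covers(log, h) for log in potential_logs)
                      for h in supportive)
    covered_none = not any(covers(log, h)
                           for log in potential_logs for h in refuting)
    return covered_all and covered_none
```

For instance, a single log preserving a login followed by a copy covers the supportive history with those two events, but not a refuting history ending in a logout, so the pair is forensic-ready in this toy setting.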
## 5 SPECIFICATION GENERATION
Based on our formulation above, we propose a systematic approach (Figure 3) for generating forensic-ready preservation specifications. Our approach takes as input an environment description \( \mathcal{E} \), a set of speculative incident hypotheses \( \mathcal{H} \), elicited, for instance, by a domain expert, and an initial preservation specification \( PS \) written in LTL, which contains domain pre- and post-conditions of preservation operations. We assume that the description of the environment is correct and the speculative hypotheses of concern are known at design-time. The approach provides as output either: (i) a confirmation that (some) hypotheses are not supportable in the environment; (ii) a confirmation that the FR controller does not have the capabilities to ensure the forensic-readiness of its preservation specification; or (iii) a preservation specification that is guaranteed to be forensic-ready with respect to \( \mathcal{H} \) in \( \mathcal{E} \). The approach comprises three phases as described below.
1. **History Generation.** In this phase, we search for potential histories \( \mathcal{E}^{+}(\mathcal{E}) \) and \( \mathcal{E}^{-}(\mathcal{E}) \) that minimally support and that refute \( \mathcal{H} \), respectively. The existence of histories in \( \mathcal{E}^{+}(\mathcal{E}) \) ensures that the hypotheses of interest are feasible within the intended environment. If \( \mathcal{E}^{+}(\mathcal{E}) \) is empty, this means either that the hypothesis cannot occur within the environment described, and thus need not be considered during a digital investigation, or that the environment description and/or the speculative hypotheses are incorrect and need revision. The histories in \( \mathcal{E}^{-}(\mathcal{E}) \) operate as a proxy for the synthesis phase to ensure only relevant event occurrences are preserved.
2. **Specification Verification.** Given the generated \( \mathcal{E}^{+}(\mathcal{E}) \) and \( \mathcal{E}^{-}(\mathcal{E}) \), we check if each history is potentially covered by the preservation specification, i.e., there exists a corresponding potential log in \( \Pi(PS) \). If some history in \( \mathcal{E}^{+}(\mathcal{E}) \) is not, then this may be owing to one of two cases: (i) the FR controller and the digital devices do not have the capabilities needed to, respectively, preserve the potential logs and monitor the relevant events; or (ii) they do but require an operational preservation specification to be synthesised to ensure their preservation. In the case of the former, the approach terminates, indicating a need for additional capabilities. In the latter case, corresponding potential logs \( \{ \pi^+_i \} \) and \( \{ \pi^-_j \} \) are produced and passed onto the third phase.
3. **Specification Synthesis.** The synthesis phase takes \( \mathcal{E}, \mathcal{H}, PS, \{ \pi^+_i \} \) and \( \{ \pi^-_j \} \) as input. It searches, within a space of candidate expressions restricted to safety LTL, for a new set of required pre- and trigger-conditions that prescribe the preservation of all potential logs in \( \{ \pi^+_i \} \) but of none in \( \{ \pi^-_j \} \). These conditions are added to \( PS \). The resulting specification \( PS' \) is then provided to the FR controller that is responsible for its enactment. Note that the steps above are conducted for a given set of hypotheses; if new hypotheses are provided, a new specification must be generated.
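The control flow across the three phases can be sketched as follows. The skeleton is our own, and `gen_histories`, `find_log` and `learn_conditions` are hypothetical stand-ins for the clingo and XHAIL invocations described in Section 6; specifications are modelled as plain lists of rules.

```python
# Hypothetical skeleton of the three-phase approach. The three callables
# stand in for the solver/learner back-ends; nothing here reproduces the
# tool's actual Event Calculus encoding.

def generate_specification(env, hypotheses, ps,
                           gen_histories, find_log, learn_conditions):
    # Phase 1: history generation.
    supportive, refuting = gen_histories(env, hypotheses)
    if not supportive:
        # Hypotheses are not supportable in the environment.
        return "unsupportable", None

    # Phase 2: specification verification.
    pos_logs = [find_log(env, ps, h) for h in supportive]
    if any(log is None for log in pos_logs):
        # Some supportive history cannot be covered: capabilities missing.
        return "missing-capabilities", None
    neg_logs = [log for h in refuting
                if (log := find_log(env, ps, h)) is not None]

    # Phase 3: specification synthesis.
    new_conditions = learn_conditions(env, ps, pos_logs, neg_logs)
    return "forensic-ready", ps + new_conditions
```

The three outcomes correspond to outputs (i), (ii) and (iii) listed at the start of this section.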
## 6 TOOL IMPLEMENTATION
As a proof of concept, we have implemented a prototype tool\(^3\) for synthesising forensic-ready preservation specifications. Our solution uses (i) a declarative language based on the Event Calculus (EC) logic program [31] to represent and reason about the environment
\(^{3}\) The source code of the tool is publicly available at https://github.com/pasquale/KEEPER/tree/keeper_CLI
descriptions, speculative hypotheses, preservation specifications and potential histories and logs in a uniform way, (ii) an off-the-shelf Boolean constraint solver for logic programs, called clingo [20], to compute potential histories and logs that satisfy or refute hypotheses, and (iii) a logic-based learner, called XHAIL [41], to synthesise preservation specifications that cover all histories supportive of a hypothesis. Our choice of an EC logic program as the modelling language is due to its successful deployment in the context of requirements operationalisation [4, 5] and reasoning about evidence in digital investigations [56]. The encoding of the input as well as the execution of the three phases of Section 5 are done automatically. The user is expected to provide the initial input and the maximum length of the potential histories to be considered in the approach. Our use of the solver as the underlying history generation and specification verification engine is motivated by its capacity to handle difficult (NP-hard) search problems for EC programs. For the encoding of preservation specifications, we follow the translation in [5]. Details of the encoding for the environment description and hypotheses are available at https://github.com/lpasquale/KEEPER/tree/keeper_CLI/RunningExample. Although the approach is demonstrated for a particular language, the principles behind it could be applied to other formalisms and solvers.
In brief, the history generation phase first tries to find two sets of models of the program \(L_E \cup L_H\). Each element in the first set is a model of \(L_E \cup L_H\) that comprises a history \(L_{\psi^{+}}\) of maximum length \(n\) that is minimally supportive of a hypothesis in \(H\). Each element in the second set contains a history \(L_{\psi^{-}}\) of length \(n\) that refutes all hypotheses in \(H\). This is done by solving a constraint that requires a hypothesis \(L_{h} \in L_{H}\) to be satisfied by at least one potential history consistent with \(L_E \cup L_{h}\). The solver searches for the optimal solution, defined as the history with the fewest event occurrences, which corresponds to a minimally supportive history of \(h\). We denote the set of minimally supportive histories by \(L^{+}(\psi)\) and the set of refuting histories by \(L^{-}(\psi)\).
The specification verification phase considers the histories \(L^{+}(\psi)\) and \(L^{-}(\psi)\), the program \(L_E \cup L_H\) and the program \(L_{PS}\). This phase performs several calls to the solver to check the consistency of each history in \(L^{+}(\psi)\) and \(L^{-}(\psi)\) with the specification \(L_{PS}\) (phase 2). The solver searches for models that satisfy the program \(L_E \cup L_H \cup L_{PS} \cup L_{\psi}\) (for each \(L_{\psi} \in L^{+}(\psi) \cup L^{-}(\psi)\)) and a constraint requiring there to be an isomorphic potential log \(L_{\pi}\) in the model. If a potential log for a supportive history cannot be found, then the program is unsatisfiable. In this case the approach outputs those potential histories to the user for further consideration (e.g., amending the FR controller’s capabilities).
Otherwise, all computed potential logs \(\{L_{\pi^{+}}\}\) and \(\{L_{\pi^{-}}\}\) that correspond to histories in \(L^{+}(\psi)\) and \(L^{-}(\psi)\) respectively, together with the program \(L_E \cup L_{PS}\), are passed to a logic-based learner. The aim of the learner is to search through a candidate space (given by a language bias in [5]) to compute required pre-conditions \(L_{ReqPre}\) and required trigger-conditions \(L_{ReqTrig}\) such that \(L_E \cup L_{PS} \cup L_{ReqPre} \cup L_{ReqTrig} \models L_{\pi^{+}}\) for each \(L_{\pi^{+}}\), where \(\models\) is an entailment operator defined with respect to stable model semantics [21], whilst ensuring that \(L_E \cup L_{PS} \cup L_{ReqPre} \cup L_{ReqTrig} \not\models L_{\pi^{-}}\) for any \(L_{\pi^{-}}\). The programs \(L_{ReqPre}\) and \(L_{ReqTrig}\) are then translated back to LTL following the method described in [5].
## 7 EVALUATION
Our evaluation aims to assess whether the synthesised preservation specification prescribes to preserve (i) relevant events and (ii) the minimal number of events necessary to support speculative hypotheses of potential incidents. To achieve this aim, we apply our prototype tool to two case studies publicly available for research and training purposes. Each case study comprises data that would normally be available to an investigator for examination (when a FR controller is not implemented). The investigator can use this data to explain, if possible, how a particular incident occurred. For each incident, we manually modelled the environment and the speculative hypotheses in EC. From this, we used our tool to generate preservation specifications automatically. The EC models of the case studies and the generated specifications are available online.
We compare the events that the generated specification prescribes to preserve with those that would be available to an investigator in a system that does not satisfy evidence preservation requirements, i.e., the information available to an investigator is represented by the data-sets provided with the case studies. To assess relevance, we verify whether our specification prescribes to preserve events that were relevant to satisfy the speculative hypotheses. We also check if the occurrence of those events can be inferred from the available data-set. To assess minimality, we verify that our approach prescribes preserving fewer events. In particular, we compare the number of events that our approach prescribes to preserve with those that can be inferred from the data-set. We also measure the number of events, whose occurrence can be inferred from the data-set, that were irrelevant to support the satisfaction of the hypotheses.
### 7.1 Relevance and Minimality
The first incident scenario we considered is set in a university, where students and academic staff can send emails by using the university and students’ residence internal network. The model of the environment includes different agents who can be academics or students, and can teach or attend courses, respectively. It also includes locations, such as university and students residences, routers (each of them placed in a location), emails and their corresponding sender/recipient email and IP addresses. We also model whether an email address is a university address for staff and students or it is an external address.
The primitive events we model cover events whose occurrence can be inferred from the data-set. This includes the TCP packets captured from the routers located inside the students’ residence. Therefore, we consider the routers as digital sources within the environment and use primitive events to represent network data streams. We model the following primitive events: incoming IMAP/POP network traffic (we indicate primitive events related to emails sent from external addresses to an academic as SUE); incoming HTTP traffic (we indicate HTTP messages used to set up a cookie as SC); general outgoing HTTP traffic (EM); and specific outgoing HTTP traffic towards anonymous email services (SAE). Some of the complex events we model include: (i) emails received by a specific email address; (ii) cookie setting from an external address to an IP address; (iii) sending of HTTP messages from an IP address and a browser agent; (iv) sending of anonymous emails from an IP address and a browser agent. The complex event indicating the setting of a cookie initiates the state cookieSet for a specified email and IP address.
An incident of concern is related to the receipt of harassment emails by academics. The following speculative hypotheses were constructed: h1: an email is sent to an academic by someone using an external address; h2: an anonymous email is sent by an individual who can be identified, for example through the cookie and his/her browser agents; h3: an anonymous email is sent by an individual who cannot be identified. h1 is satisfied when complex event (i) takes place for which the sender email address is external and the recipient email address is owned by a university staff member. For supporting h1, the implemented specification recommended preserving all incoming IMAP/POP network traffic (SUE) related to emails sent from external addresses to an academic. h2 is satisfied when a cookie is set for a specific email and IP address, complex event (iii) takes place for which HTTP traffic originates from the same IP address with which the cookie is associated, and subsequently complex event (iv) takes place for which the IP address and the browser agent have been previously associated with outgoing HTTP traffic. For h2, the specification required preserving (a) incoming HTTP traffic adopted to set up a cookie (SC), (b) outgoing HTTP traffic from the same address to which the cookie was set (EM) and (c) outgoing HTTP traffic to send anonymous emails (SAE). h3 is satisfied when complex event (iv) takes place. Therefore, for h3 the specification requires preserving all SAE events.
Table 1 shows the total time necessary to generate a specification for each hypothesis, and the time required by each phase of the approach: histories generation (HG), specification verification (SV) and specification synthesis (SS). For each hypothesis, the number of supporting histories (out of the total number generated) and negative histories necessary to compute a specification, including the maximum length of the histories, are shown. A higher number of positive and negative histories could have been given as input to the synthesis activity without affecting the generated specification. The maximum time was taken for the most complex hypothesis (h2) which also required the provision of negative histories.
**Table 1: Performance in the harassment case study.**
<table>
<thead>
<tr>
<th>Instances</th>
<th># Pos</th>
<th># Neg</th>
<th>Length</th>
<th>HG</th>
<th>SV</th>
<th>SS</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>h1</td>
<td>1 / 4</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0.01</td>
<td>0.23</td>
<td>0.24</td>
</tr>
<tr>
<td>h2</td>
<td>1 / 32</td>
<td>4</td>
<td>3</td>
<td>0.08</td>
<td>0.19</td>
<td>39.915</td>
<td>40.183</td>
</tr>
<tr>
<td>h3</td>
<td>1 / 8</td>
<td>0</td>
<td>1</td>
<td>0.01</td>
<td>0.03</td>
<td>0.301</td>
<td>0.341</td>
</tr>
</tbody>
</table>
We implemented the specification of a FR controller able to extract data streams from the data-set. Extracted data streams can support h2 since an incoming set-cookie message associated with jcoach@gmail.com and received by IP 192.168.015.004 was preserved. Outgoing HTTP messages from the same IP address and associated with a Mozilla browser have also been recorded; the same browser appears to have been used to send the anonymous email. This supports our claim that our approach would preserve data that might represent relevant evidence if such an incident were to occur. This would support investigators in prioritising their efforts, while ensuring that other events related to alternative scenarios would have been preserved if such scenarios occurred. Moreover, our approach also prescribes to preserve events that might not be proactively retained by digital sources. For example, the data-set does not include events about IMAP/POP network traffic (SUE) necessary to support hypothesis h1. This might be due to the fact that network traffic was collected for a limited amount of time and was not retained.
To assess minimality we compare the total number of events whose occurrence can be inferred from the data-set with those that our specification would prescribe to preserve. The full data-set includes 577,760 data streams (application level messages) exchanged in 15,508 communications between different IP addresses. The number of data streams corresponds to the total number of events that an investigator would normally have to examine. The number and type of event that our specification prescribed to preserve for each hypothesis is shown in Table 2; the total number of events is only 0.71% of the data streams in the entire data-set. Moreover, not all the events preserved were necessary to support the hypotheses. For our scenario, only 956 data streams corresponding to HTTP traffic originating from the Mozilla browser were necessary to support h2. Therefore, although our specification consistently reduces the amount of data to be analysed by an investigator, it does not completely ensure the minimality requirement since 2874 (69%) data streams were not relevant to support h2.
**Table 2: Number of events preserved.**
<table>
<thead>
<tr>
<th># Events</th>
<th>SUE</th>
<th>SC</th>
<th>EM</th>
<th>SAE</th>
</tr>
</thead>
<tbody>
<tr>
<td>h1</td>
<td>0</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>h2</td>
<td></td>
<td>2</td>
<td>769</td>
<td>300</td>
</tr>
<tr>
<td>h3</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Total</td>
<td colspan="4">4132</td>
</tr>
</tbody>
</table>
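As a quick arithmetic consistency check of the figures reported above, the snippet below recomputes the two percentages from the values copied out of the text (the paper appears to truncate rather than round its percentages).

```python
# Sanity check of the reported minimality figures; all values are copied
# from the text above, so this is arithmetic only, not part of the approach.
total_streams = 577_760   # data streams in the full data-set
preserved = 4_132         # events the specification prescribes to preserve
irrelevant_h2 = 2_874     # preserved streams not relevant to support h2

share_preserved = 100 * preserved / total_streams    # ~0.71% of the data-set
share_irrelevant = 100 * irrelevant_h2 / preserved   # ~69% of preserved events

print(f"preserved: {share_preserved:.2f}% of the data-set")
print(f"irrelevant to h2: {share_irrelevant:.1f}% of preserved events")
```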
We also applied our approach to a more complex corporate exfiltration scenario. The model of the environment includes the company’s employees, their email addresses, computers, employees’ access rights to computers, storage devices that could be mounted and programs that are installed on each computer. The available data-set consists of an image of the hard drive of a Windows machine. Thus, primitive events we modelled represent changes in the file system of a Windows machine that can be observed from a hard drive image. These include users’ logins, mount and unmount of devices, installation of programs, and sent or received emails. Owing to space, we will not provide details of the model and the specification generated for all the hypotheses of this example and refer the reader to the project webpage.
An incident of concern is related to the exfiltration of a confidential document of a company from the computer of the chief financial officer (cfo). Six hypotheses were constructed for this incident. Examples of these hypotheses are: h2: the document is sent via email to an external email address; and h5: the document is copied while an external storage device is mounted. To support h2, the specification requires preserving user logins to a computer in which the document is stored and, while a user is logged in, the sending of emails to a non-corporate address including the confidential document as an attachment. As h5 is equal to the hypothesis of our running example, it led to the same preservation specification. For this
---
We refer to the Digital Corpora scenarios at [http://digitalcorpora.org/corpora/scenarios/m57-jean](http://digitalcorpora.org/corpora/scenarios/m57-jean) for more information.
incident scenario the hypotheses we modelled were more complex and a higher number of supporting and refuting histories were generated. This increased the time the approach took to learn a specification. Table 3 shows the time to generate a preservation specification for each hypothesis.
We manually acquired the events identified from the computer hard drive using Autopsy [10]. These were not sufficient to support any of the hypotheses because some of the events that our approach prescribes to preserve are not retained in a computer hard drive image. For example, we cannot support hypothesis $h_2$ speculating that the document might be sent as an email attachment by an employee to a non-corporate email address. In particular, although an event in the data-set indicates that an email with the attached confidential document was sent from the cfo’s email address (jean@m57.biz) to an external address (tuckgorge@gmail.com), we cannot conclude which user was logged on the machine since the data-set only provides information about the last user login. A similar situation arises with hypothesis $h_5$ speculating that the document may be copied to an external device, because the mounting of a storage device and the copying of a file are events that are not retained. If our specification had been implemented, it would have ensured preservation of the events necessary to support $h_2$ and $h_5$ when they occurred.
**Table 3: Performance in the exfiltration case study.**
<table>
<thead>
<tr>
<th>Instances</th>
<th># Pos</th>
<th># Neg</th>
<th>Length</th>
<th>HG</th>
<th>SV</th>
<th>SS</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>h1</td>
<td>2 / 12</td>
<td>2</td>
<td>2</td>
<td>0.01</td>
<td>0.05</td>
<td>7.756</td>
<td>7.816</td>
</tr>
<tr>
<td>h2</td>
<td>1 / 4</td>
<td>1</td>
<td>2</td>
<td>0.01</td>
<td>0.05</td>
<td>1.852</td>
<td>1.912</td>
</tr>
<tr>
<td>h3</td>
<td>4 / 18</td>
<td>9</td>
<td>3</td>
<td>0.5</td>
<td>0.18</td>
<td>894.197</td>
<td>894.337</td>
</tr>
<tr>
<td>h4</td>
<td>4 / 18</td>
<td>9</td>
<td>3</td>
<td>0.5</td>
<td>0.18</td>
<td>894.197</td>
<td>894.337</td>
</tr>
<tr>
<td>h5</td>
<td>1</td>
<td>4</td>
<td>3</td>
<td>0.05</td>
<td>0.2</td>
<td>43.851</td>
<td>44.101</td>
</tr>
<tr>
<td>h6</td>
<td>1</td>
<td>4</td>
<td>3</td>
<td>0.5</td>
<td>0.21</td>
<td>170.356</td>
<td>171.066</td>
</tr>
</tbody>
</table>
To assess minimality, Fig. 4 shows, for each hypothesis, the number of events our approach would have preserved from the hard drive image. We compared these figures with those that an investigator would have examined from the data-set (No-FR). Our approach would have resulted in significantly fewer events to be examined for hypotheses $h_1$–$h_6$. For example, to support $h_2$ it would be necessary to identify the mail clients among the installed applications (133), inspect all the outboxes associated with the accounts registered with the mail clients (23 emails for the cfo’s outbox and no emails for the Administrator outbox) and identify the users’ last login (3). We cannot claim the same for $h_5$: the generated specification requires preserving users’ last logins, mounted devices and file access operations, which are not present in the data-set.
### 7.2 Discussion
The paper aims to ensure that relevant events that may serve as evidence are preserved, thus reducing the amount of data investigators would have to search through. This is what is evaluated. The paper is not concerned with reactive investigations, nor with aiding open-ended investigations. We make the assumption that the speculative hypotheses of an incident are given in advance. Therefore a forensic-ready system will be prepared to investigate only the incidents known a priori. This is the assumption on which forensic readiness guidelines for organisations are based. Training experts (e.g., system/security administrators) to identify these is part of the business requirements for implementing forensic measures [45] and is outside the scope of our work. Furthermore, as the environment and the speculative hypotheses are expected to be known a priori, there is a risk that this knowledge can be used to thwart the forensic-ready system itself: an individual might adjust her behaviour to avoid preservation of events indicating her involvement in an offense. Thus, applications of our approach would require maintaining the confidentiality of the system specification.
The performance of our approach decreases when ‘richer’ positive and negative histories are used [5] (a saturation point is reached for 18 histories). This is caused by the increase in the size of the EC model when grounded. The time taken to synthesise a specification increases linearly with the length of the considered preservation histories. A saturation point is reached with histories having length 11. Note that the scalability results depend purely on the open-source prototype tool\(^7\) used to support the specification synthesis. This could be significantly improved by deploying techniques for context-dependent learning [32] and distributed reasoning [34].
To show that our formalisation could yield a practical solution, we provide a proof-of-concept implementation, putting aside usability issues. The definition of the model of the environment and the hypotheses of the university harassment and the corporate exfiltration scenarios required 2 and 3.5 days of work, respectively, from one of the paper’s authors. We are developing a graphical interface that would mask the complexity of the formal specification and help practitioners represent potential incidents and how they may occur in the environment. Such a graphical interface is based on model-driven engineering principles to hide the complexity of the EC language used to represent the environment and hypotheses within a model. A model-based representation has the potential to ensure correctness of the models by design and encourage re-usability of the environment and hypotheses among experts.
## 8 RELATED WORK
Existing research on forensic readiness has mainly focused on identifying high-level strategies which organisations can implement to be forensic-ready. For example, Elyas et al. [15] use focus groups to elicit required forensic readiness objectives (e.g., regulatory compliance, legal evidence management) and capabilities (organisational factors and forensic strategy). Reddy and Venter [42] present a forensic readiness management system taking into account event
---
7https://github.com/stefano-bragaglia/XHAIL
analysis capabilities, domain-specific information (e.g., policies, procedures and training requirements), and costs (e.g., staff, infrastructure and training costs). However, none of these approaches has addressed the problem of how to implement forensic readiness in existing IT systems, despite the standardisation of forensic readiness processes (ISO/IEC 27043:2015), which prescribes the planning and implementation of pre-incident collection and analysis of evidence activities.
Shield et al. [48] propose performing continuous proactive evidence preservation. However, in large-scale environments like cloud systems, monitoring all potential evidence is not a viable solution, as the resulting data might be cumbersome to analyse. Pasquale et al. [37] propose a more targeted approach, where evidence preservation activities aim to detect potential attack scenarios that can violate existing security policies. However, this approach is less selective, as it prescribes to preserve any type of event within a history leading to an incident, independently of other events that have previously occurred or been preserved. Existing work on data extraction for investigative purposes, such as E-Discovery [25], although supporting retrieval of data for an investigation, does not provide a solution to engineer a forensic-ready system prescribing what data should be preserved depending on its relevance to future investigations.
With the growth of digital forensics as a discipline, interest in rigorous approaches has increased. For example, Carrier [11] provides guidelines about the types of hypotheses that should be formulated and the analysis to be performed to verify those hypotheses during a digital investigation. Others [1, 8, 27, 47] have focused on providing a unified representation of heterogeneous log events to automate event reconstruction. Similar to us, all these approaches distinguish between primitive events, having a direct mapping to raw log events, and complex events, which can be determined by the occurrence of primitive ones. Formal techniques have also been used to represent and analyse the behaviour of the environment in order to identify the root causes that allowed an incident to occur [50] or possible incident scenarios [23]. Other work is specialised on identifying attackers’ traces (e.g., evidence and timestamps improperly manipulated by an attacker), from violations of invariant relationships between digital objects [49] or by applying model checking techniques on a set of events expressed in a multi-sorted algebra [7]. However, none of this work addresses the problem of how hypotheses can be expressed formally, how they relate to sequences of primitive and complex events supporting them and how to achieve preservation requirements.
The requirements engineering community [26, 33, 53, 55] proposed numerous techniques for modelling security and privacy requirements to enable the design of systems less vulnerable to potential attacks and privacy breaches. However, only preliminary attempts have been made towards engineering forensic-ready systems [6, 38] and investigating how forensic-readiness requirements are considered during systems development lifecycles [24]. Although preservation requirements can be considered as a specific type of monitoring requirements [17, 18, 44, 46], the nature of the specifications for forensic-ready systems is different in its scope (environment and hypotheses) and characteristics. These are aspects that have not been covered in previous work.
Recent studies in program analysis, such as [19, 57], have highlighted the importance of providing software developers with automated support in making logging decisions, and the difficulty of constructing specifications to guide logging behaviour. In [57] for instance, the authors present a method for learning what and when to log from past logs of software developers. This differs conceptually from what we present here since incident-related histories are domain specific, showing how particular hypotheses may be met within particular environments. For forensic-ready systems, justifications of preservation decisions need to be made explicit and in a readable format, which is supported by the learning technique that we deploy. We believe however that our approach could help software developers make informed decisions and gain insights on what logs to preserve to enhance the forensic-readiness of systems. Closest to our work in a forensic setting is that of [3, 30]. However, [3] focuses on defining “ideal” logging preferences for databases that are independent of the incidents of concern and hence still pose a risk of inadequate logging. The work of [30] instead is limited to eliciting, from natural language descriptions of software artefacts, the set of events (as verb-object pairs) and an empirical classification of such events to determine logging requirements.
## 9 CONCLUSION
This paper represents a first step towards a rigorous approach to developing forensic-ready systems. We defined a framework for formalising evidence preservation requirements of such systems. We use this framework to synthesise specifications that guarantee that a minimal amount of data, constituting potentially relevant evidence to support given speculative hypotheses of incidents of concern, is preserved. We also provided a proof-of-concept implementation that has been evaluated on two incident scenarios. Our results demonstrate that our approach preserves relevant events and provides insight into whether existing software/devices have the necessary capabilities for preserving evidence. Moreover, the size of the preserved data is smaller than what would otherwise have been examined during an investigation. Our approach does not propose to replace the role of engineers or investigators. It also assumes that domain experts are involved in modelling the environment and selecting the relevant histories to be covered by the preservation specifications.
In the future, we plan to investigate how our approach may be adapted to dynamic situations at run time, in which environments and hypotheses may change over time. We are developing a graphical designer aimed at facilitating the practitioners’ task of designing the model of the environment and generating hypotheses. Finally, when generating a preservation specification, we will consider systematic approaches to synthesis in the presence of conflicts with other requirements, such as legal requirements that may forbid preserving relevant data for privacy reasons.
ACKNOWLEDGEMENTS
This work is supported by ERC Advanced Grant no. 291652 (ASAP), SFI Grants 10/CE/I1855, 13/RC/2094 and 15/SIRG/3501, and the Imperial College Research Fellowship.
<table>
<thead>
<tr>
<th>Title</th>
<th>Module-Wise Compilation for a Language with Type-Parameterization Mechanism (Mathematical Methods in Software Science and Engineering)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Author(s)</td>
<td>YUASA, TAIICHI</td>
</tr>
<tr>
<td>Citation</td>
<td>数理解析研究所講究録 (1979), 363: 1-40</td>
</tr>
<tr>
<td>Issue Date</td>
<td>1979-09</td>
</tr>
<tr>
<td>URL</td>
<td><a href="http://hdl.handle.net/2433/104571">http://hdl.handle.net/2433/104571</a></td>
</tr>
<tr>
<td>Type</td>
<td>Departmental Bulletin Paper</td>
</tr>
<tr>
<td>Textversion</td>
<td>publisher</td>
</tr>
</tbody>
</table>
Kyoto University
RIMS-280
Module-wise Compilation for a Language
with Type-parameterization Mechanism
By
Taiichi Yuasa
Research Institute for Mathematical Sciences
Kyoto University, Kyoto, Japan
May 1979
Language $\lambda$ is a specification and programming language designed to support hierarchical and modular program development. The notion of *sypes*, which generalizes the so-called type-parameterization mechanism, causes some essential problems in the implementation of the language. These problems are discussed in detail, and what is considered to be an efficient technique is introduced with which each type-parameterized module is separately compiled, independent of the context in which it is used with actual type parameters.
1. Introduction
Structured programming with data and procedural abstraction mechanisms has been shown to greatly increase program readability and software reliability. (See, e.g., CLU [2].) It has also been shown that the difficulties of specification and verification of programs, especially for large scale software systems, can be eased by introducing hierarchical structures into programs. Language $\lambda$ [4] has been proposed to support such program development with hierarchical and modular structures.
One of the characteristics of $\lambda$ is that it has a new data type concept -- *sypes*, which generalizes the so-called type-parameterization mechanism [5].
Sample programs with comments
We list some programs written in language $\lambda$ in order to locate the problem to be discussed in this paper. As a complete description of the language is not within the scope of this paper, we only give preliminary remarks along with the programs. For a detailed explanation of the language or formal definitions of, say, the sype-sype relation, refer to [5].
interface type INT
+ fn ZERO: --------> @ as 0
+    ONE:  --------> @ as 1
+    ADD:  (@,@) ---> @ as @+@
+    MULT: (@,@) ---> @ as @*@
+    REV:  @ -------> @ as -@
     GE:   (@,@) ---> BOOL as @<=@
end interface
specification type INT
  var X,Y,Z:@
+ axiom 1: X+0=X
+       2: X+Y=Y+X
+       3: (X+Y)+Z=X+(Y+Z)
+       4: X*Y=Y*X
+       5: X*(Y*Z)=(X*Y)*Z
+       6: X*(Y+Z)=X*Y+X*Z
+       7: 1*X=X
+       8: X+(-X)=0
        9: X<=Y V Y<=X
       10: (X<=Y & Y<=Z) ==> X<=Z
       11: (X<=Y & Y<=X) ==> X=Y
end specification
This is part of the type module INT, which presents the type of integers. The interface part declares the primitive functions on INT with their domains and ranges. @ denotes the type presented by the type module, which is INT in this case. By *as*, a notational abbreviation is introduced for a function name. For instance, ADD(X,Y) can be written as X+Y. In the specification part, the basic axioms on INT are given.
The following is a type module which presents RAT or the type of rational numbers.
interface type RAT
+ fn ZERO: --------> @ as 0
+    ONE:  --------> @ as 1
+    ADD:  (@,@) ---> @ as @+@
+    MULT: (@,@) ---> @ as @*@
+    REV:  @ -------> @ as -@
     INV:  @ -------> @ as /@
end interface
specification type RAT
  var X,Y,Z:@
+ axiom 1: X+0=X
+       2: X+Y=Y+X
+       3: (X+Y)+Z=X+(Y+Z)
+       4: X*Y=Y*X
+       5: X*(Y*Z)=(X*Y)*Z
+       6: X*(Y+Z)=X*Y+X*Z
+       7: 1*X=X
+       8: X+(-X)=0
        9: X≠0 ==> X*(/X)=1
end specification
Notice that these two types have a common substructure: they have five primitive functions in common, and their basic axioms 1 to 8 are identical. Since this substructure may be contained in many other types, we extract and isolate the lines preceded by + to form the sype module RING.
interface sype RING
  fn ZERO: --------> @ as 0
     ONE:  --------> @ as 1
     ADD:  (@,@) ---> @ as @+@
     MULT: (@,@) ---> @ as @*@
     REV:  @ -------> @ as -@
end interface
specification sype RING
  var X,Y,Z:@
  axiom 1: X+0=X
        2: X+Y=Y+X
        3: (X+Y)+Z=X+(Y+Z)
        4: X*Y=Y*X
        5: X*(Y*Z)=(X*Y)*Z
        6: X*(Y+Z)=X*Y+X*Z
        7: 1*X=X
        8: X+(-X)=0
end specification
We introduce a sype-type relation "$\leq$". For a sype $S$ and a type $T$, $S \leq T$ holds if $T$ contains $S$ as its substructure. Thus RING $\leq$ INT and RING $\leq$ RAT in this case. To establish $S \leq T$, for each primitive function of $S$, say $f$, there must be defined a function of $T$ of the same name (i.e. T#f). T#f is said to be the function corresponding to $f$ of sype $S$.
In a similar way, we construct the sype module FIELD. Here we have FIELD $\leq$ RAT.
**interface sype FIELD**
```plaintext
fn ONE:  --------> @ as 1
   ZERO: --------> @ as 0
   MULT: (@,@) ---> @ as @*@
   ADD:  (@,@) ---> @ as @+@
   INV:  @ -------> @ as /@
   REV:  @ -------> @ as -@
end interface
```
**specification sype FIELD**
```plaintext
var X,Y,Z:@
axiom 1: X+0=X
      2: X+Y=Y+X
      3: (X+Y)+Z=X+(Y+Z)
      4: X*Y=Y*X
      5: X*(Y*Z)=(X*Y)*Z
      6: X*(Y+Z)=X*Y+X*Z
      7: 1*X=X
      8: X+(-X)=0
      9: X≠0 ==> X*(/X)=1
end specification
```
Now we define a type module POLY(P:RING) which presents the type of polynomials in one variable with any coefficient type T such that RING $\leq$ T.
interface type POLY(P:RING)
  fn ZERO: --------> @ as 0
     ONE:  --------> @ as 1
     ADD:  (@,@) ---> @ as @+@
     MULT: (@,@) ---> @ as @*@
     REV:  @ -------> @ as -@
     COEF: (@,INT) --> @
     DEG:  @ -------> INT
end interface
specification type POLY(P:RING)
var X, Y, Z: @
axiom 1: X+0=X
2: X+Y=Y+X
3: (X+Y)+Z=X+(Y+Z)
4: X*Y=Y*X
5: X*(Y*Z)=(X*Y)*Z
6: X*(Y+Z)=X*Y+X*Z
7: 1*X=X
8: X+(-X)=0
end specification
realization type POLY(P:RING)
  rep=ARRAY(P)
  ...
  fn ↓ADD(X, Y: rep) return (Z: rep)
    var I: INT
    ...
    Z[I] := P#ADD(X[I], Y[I]) .............(*)
    ...
  end fn
  ...
end realization
Fig.1.1 Type module POLY(P:RING)
An arbitrary type \( T \) such that \( \text{RING} \leq T \) can be used as the actual type parameter for \( \text{POLY} (P: \text{RING}) \). For instance, since \( \text{RING} \leq \text{INT} \), \( \text{POLY} (\text{INT}) \) is a type of polynomials whose coefficients are of type \( \text{INT} \). Thus \( P \), which we call a type parameter of type \( \text{RING} \), represents the indefinite (formal) type parameter and \( \text{POLY} (P: \text{RING}) \) is said to be a type-parameterized module. We call \( \text{POLY} (\text{INT}) \) a definite module instance of \( \text{POLY} (P: \text{RING}) \) since the actual type parameter \( \text{INT} \) is a definite type. On the other hand, \( \text{ARRAY} (P) \) in the realization part of \( \text{POLY} (P: \text{RING}) \) or \( \text{POLY} (\text{POLY} (P1)) \) which will appear later in Fig.1.2 are called indefinite module instances, for they contain formal type parameters \( P \) of \( \text{POLY} (P: \text{RING}) \) or \( P1 \) of \( \text{BIPOLY} (P1: \text{RING}) \).
The realization part gives an implementation of \( \text{POLY} (P: \text{RING}) \). Each object of type \( \text{POLY} (P: \text{RING}) \) is represented by \( \text{ARRAY} (P) \) or array of type \( P \). (e.g. \( \text{POLY} (\text{INT}) \) is represented by \( \text{ARRAY} (\text{INT}) \).) There is a rigorous distinction in the language between an abstract function (which is presented in the interface and the specification part) and its concrete function (which defines an implementation of the corresponding abstract function). To discriminate between these two kinds of functions, each concrete function has the name of its corresponding abstract function preceded by "\( \downarrow \)". In the figure above, the concrete function corresponding to (abstract) \( \text{ADD} \) has the name \( \downarrow \text{ADD} \).
The line marked "*" says that the $I$-th components of $X$ and $Y$ are 'added' and then the result replaces the $I$-th component of $Z$. Since the components of $X$ and $Y$ are of type $P$, the addition $+$ must be that of $P$ (i.e. $P\#ADD$). In this paper, those functions which are actually executed at run time at the line "*" are said to be actual $ADD$'s for $P\#ADD$. If the actual type parameter is $INT$, the actual $ADD$ is the addition of integers, i.e. $INT\#ADD$.
From the interface and specification parts of $POLY(P\!:\!RING)$, we find another sype-type relation $RING \leq POLY(P\!:\!RING)$. Remember that any type $T$ such that $RING \leq T$ can be used as the actual type parameter for $POLY(P\!:\!RING)$. This indicates that $POLY(POLY(P1))$ is permissible. Indeed, a type module $BIPOLY(P1\!:\!RING)$, whose representation is $POLY(POLY(P1))$, is supposed to present the type of polynomials in two variables.
```
realization type BIPOLY(P1:RING)
  rep=POLY(POLY(P1))
  ...
  fn ↓ADD(X,Y:rep) return (Z:rep)
    ...
    Z := rep#ADD(X,Y)
    ...
  end fn
  ...
end realization
```
Fig.1.2 Type module $BIPOLY(P1\!:\!RING)$
(Note that $POLY(POLY(P1))\#ADD$ is abbreviated as $rep\#ADD$.)
The relation "≤" is also defined between two sypes in language λ. For example, FIELD has RING as its substructure. Thus we can denote RING ≤ FIELD.
**realization procedure STP(P2:FIELD)**
```
par A,B,C:POLY(P2)
fn F:
  ...
  C := POLY(P2)#ADD(A,B)
  ...
end fn
end realization
```
**Fig.1.3 Procedure module STP(P2:FIELD)**
In the body of STP(P2:FIELD)#F, above, POLY(P2)#ADD is called. That is, the actual type parameter which STP(P2:FIELD) receives at execution time is passed to POLY(P:RING). This is permissible since RING ≤ FIELD.
We have already used such a sype-sype relation in the realization part of POLY(P:RING). The built-in module ARRAY(P3:ANY) is a type-parameterized module which receives a type parameter P3 of sype ANY. Sype ANY is a built-in sype whose only primitive function is EQUAL, or equality.
interface sype ANY
  fn EQUAL: (@,@) ---> BOOL as @=@
end interface
specification sype ANY
  var X,Y,U,V:@
  axiom 1: X=X
        2: (X=Y & U=V) ==> (X=U)=(Y=V)
end specification
In language λ, every sype or type is supposed to have its own EQUAL function. It can be defined explicitly in the module or else it is automatically defined by the system. Thus any sype or type S satisfies ANY ≤ S. Since ANY ≤ RING holds, ARRAY(P) is permissible in the realization part of POLY(P:RING).
Although the type-parameterization mechanism itself is found in some other languages (e.g. CLU [2]), the expressive power of the notion of sypes brings some new difficulties into the implementation of the language.
This paper discusses these difficulties and shows how to overcome them. Section 2 presents the most straightforward way of compiling type-parameterized modules, called the "definite module-instance approach". Since this method has some deficiencies, we would prefer another method with which each type-parameterized module is separately compiled, independent of the context in which it is used with actual type parameters. Then, in section 3, we discuss what kind of information is required for the actual type parameters. Finally, in section 4, we explain how such information is constructed in execution time.
The problem
Let us return to the module POLY(P:RING) (in Fig.1.1) and focus on the following problem: what should the compiler do in processing the realization part of POLY(P:RING), especially for the function call of P#ADD (marked "*")? Also, what kind of information should be sent to POLY(P:RING) in execution time?
2. A solution -- definite module-instance approach
One possible solution is to do almost nothing with POLY(P:RING) itself until P is bound to some actual type parameter. When POLY(T) is used in other modules (i.e. when P is bound to a definite type T), the instance of the realization part of POLY(P:RING), with all occurrences of P replaced by T, is processed.
For example, when POLY(INT) is used, the line marked "*" is replaced by:

\[ Z[I] := \text{INT\#ADD}(X[I], Y[I]) \]

Then the processor knows that INT#ADD is to be called.
If POLY(P:RING) is used with the actual type parameter RAT, the type of rational numbers, we have another definite module-instance POLY(RAT) with:

\[ Z[I] := \text{RAT\#ADD}(X[I], Y[I]) \]
Thus the processor actually regards these module-instances of POLY(P:RING) as two different type modules. Notice that the number of module-instances of a module is always finite because any module in language \( \lambda \) must be hierarchical i.e. no module can depend on itself. (Refer to [5]. The proof of finiteness is found in [8].) Therefore this approach is valid. Indeed, the experimental version of the \( \lambda \)-language compiler adopted this method.
This is not altogether a bad solution. Without the type-parameterization mechanism, one must define, say, two non-type-parameterized modules INTPOLY and RATPOLY separately, corresponding to the module instances POLY(INT) and POLY(RAT), respectively. Here, INTPOLY and RATPOLY are thought of as completely different modules. Thus the above method is nothing more than the conventional way of processing modules without the type-parameterization mechanism. In addition, the above method makes some optimizations possible. For example, as INT#ADD in POLY(INT) is nothing more than the usual integer addition, one can generate a single machine instruction instead of an actual function call of INT#ADD.
However, this method has the following deficiencies.
1. The bookkeeping of all instances of all type-parameterized modules is not a trivial task and is also time-consuming. (For example, in Fig.1.3, when STP(RAT) is defined, POLY(RAT) must be automatically and implicitly defined.)
2. The compilation time tends to be long with repetitions of similar processing. Besides, a large amount of storage is required since each instance of a single type-parameterized module must be allocated separately.
3. Since a type-parameterized module is defined independently of the actual type parameters it receives, it is often convenient in program development to process it independently. For example, a type parameter independent object code of a type-parameterized module may make it possible to debug the module without sending actual type parameters.
Thus we would rather have module-wise processing where each module is independently compiled and type parameter bindings are done dynamically. The following sections are devoted to showing how this can be done.
3. What is sent as actual type parameters?
Procedure tables for type-type relations
Given a procedure module AHO, in the realization part of which POLY(INT)#ADD is called,
```
realization procedure AHO
  ...
  POLY(INT)#ADD(X,Y)
  ...
end realization
```
let us consider what kind of information AHO must send to (the compiled) POLY(P:RING) (in addition to the usual parameter information for X and Y).
The actual ADD for P#ADD in the realization of POLY(P:RING)#ADD is INT#ADD in this case. Thus the information must include the location of INT#ADD. Since AHO does not know the realization part of POLY(P:RING), AHO cannot determine which functions, corresponding to the primitive functions of sype RING, are actually used in POLY(P:RING)#ADD. Therefore AHO must send a table which contains all actual functions corresponding to the primitive functions of sype RING. We call such a table the procedure table for <RING,INT> and denote it as PT<RING,INT>.
In general, for each pair of a sype $S$ and a type $T$ such that $S \leq T$ and $T$ is used as an actual type parameter for a formal parameter of sype $S$, PT<S,T> is constructed as follows. Let $f_1,\ldots,f_n$ be the primitive functions which are defined in that order in the interface part of sype $S$, and let $T\#f_1,\ldots,T\#f_n$ be the functions of type $T$ corresponding to $f_1,\ldots,f_n$, respectively. PT<S,T> is a block of $n$ entries and its $i$-th entry ($1\leq i\leq n$) contains the entry point of the function $T\#f_i$.
For instance, since the third function declared in sype $RING$ is $ADD$ and the fourth one is $MULT$, the third and fourth entries of $PT<RING,\text{INT}>$ contain the locations of $\text{INT}\#ADD$ and $\text{INT}\#MULT$, respectively.
```plaintext
| ZERO | ONE | ADD | MULT | REV |
```
Fig.3.1 Procedure table $PT<RING,\text{INT}>$
At the time of compilation of POLY(P:RING), the processor recognizes that ADD is the third function of sype RING by analyzing the interface part of RING. The object code is made so that the third entry of the procedure table is used in order to access the actual ADD. Then, when POLY(INT)#ADD is called in AHO, the location of PT<RING,INT> is sent to POLY(P:RING).
Note that the order of the primitive functions in the interface part of sype RING is important and must be fixed once POLY(P:RING) is compiled.
Adaptor tables for sype-sype relations
Suppose we have a procedure module MAKO, in the realization part of which STP(RAT)#F is called. (See Fig.1.3.)
realization procedure MAKO
  ...
  STP(RAT)#F
  ...
end realization
As explained before, PT<FIELD,RAT> is sent to STP(P2:FIELD) when STP(RAT)#F is called in executing (a body in the realization part of) MAKO.
PT<FIELD,RAT>:
  1 --> RAT#ONE
  2 --> RAT#ZERO
  3 --> RAT#MULT
  4 --> RAT#ADD
  5 --> RAT#INV
  6 --> RAT#REV
In STP(P2:FIELD), however, this procedure table cannot be directly sent to POLY(P:RING), for the following reason: POLY(P:RING) expects a procedure table in which the functions are ordered according to the interface part of the sype RING, but the order of the primitive functions in RING does not necessarily coincide with that of the corresponding primitive functions of FIELD. Indeed, the location of RAT#ADD is found in the fourth entry of PT<FIELD,RAT>, while ADD is the third function in the interface part of sype RING.
Thus some adaptations must be made to use PT<FIELD,RAT> in POLY(P:RING). To this end, we introduce another kind of table called adaptor tables. For each pair of sypes $S$ and $S'$ such that $S \leq S'$, an adaptor table $AT<S,S'>$ is constructed. If the $i$-th primitive function of $S$ is presented as the $j$-th primitive function in the interface part of $S'$, then the $i$-th entry of $AT<S,S'>$ has the value of $j$. (Actually, however, $AT<S,S'>$ is not required if the order of the primitive functions in $S$ coincides with that of the corresponding functions of $S'$.)
POLY(P:RING) is supposed to receive a single list called the procedure description list (PDL) of the following form, where $S_1,...,S_n$ are distinct sypes and $T$ is a type such that $RING \leq S_1$, $S_1 \leq S_2$, ..., and $S_n \leq T$.
As a particular (but most common) case, $n$ may be zero. That is, the PDL is simply of the form:
```
| ** | PT<RING,T> |
```
** This cell is used for the 'type parameter list' explained later.
In the above example, when MAKO is compiled, the processor constructs the following:

\[ \text{PT<FIELD,RAT>} \]
At compilation time of STP(P2:FIELD), an incomplete PDL shown below is prepared with AT<RING,FIELD>. (It is incomplete in the sense that the cell marked "*" must be linked to form a complete PDL in execution time.)

\[ \text{AT<RING,FIELD>} \]
In execution time, this incomplete PDL is linked to the PDL that STP(P2:FIELD) receives and is sent to POLY(P:RING) when POLY(P2)#ADD is called in executing STP(P2:FIELD).

**Fig. 3.3**
We call such a dynamic linkage done in execution time a **PDL linkage**. Note that, when executing a non-type-parameterized module, no PDL linkage is required. For each type-parameterized module $M(P1:S1,\ldots,Pn:Sn)$, PDL linkages are required when and only when some $P_i$ ($1 \leq i \leq n$) is used as an actual type parameter for some formal type parameter of sype $S'$ such that $S' \leq S_i$ and $S'$ differs from $S_i$.
**Type parameter lists**
So far, we have considered only those cases where the actual type parameters to POLY(P:RING) are not type-parameterized. Now we explain how to deal with the cases where the actual type parameters to POLY(P:RING) are also type-parameterized.
Consider the case when POLY($M(T_1,\ldots,T_n)$)#ADD is called, where $M(P1:S1,\ldots,Pn:Sn)$ is a type-parameterized type module with type parameters P1,...,Pn of sypes S1,...,Sn, respectively, and T1,...,Tn are actual type parameters to M(P1:S1,...,Pn:Sn). (T1,...,Tn may themselves be type-parameterized.) In this case, the PDL's for T1,...,Tn must be sent to POLY(P:RING). These PDL's are combined together in a list called a type parameter list (TPL) as shown below.
```
--------|--------> PDL for T1
|
--------|--------> PDL for T2
|
...
|
--------|--------> PDL for Tn
|
nil
```
This TPL is linked from the PDL for M(P1:S1,...,Pn:Sn).
```
--------|--------> nil
|
--------|--------> PDL for T1
|
--------|--------> PDL for T2
|
...
|
--------|--------> PDL for Tn
|
nil
```
PT<RING,M(P1:S1,...,Pn:Sn)>
When \( M(P_1:S_1, \ldots, P_n:S_n) \#ADD \) is called in executing \( POLY(P:RING) \), each PDL for \( T_i \) is retrieved through the PDL for \( M(P_1:S_1, \ldots, P_n:S_n) \) and sent to \( M(P_1:S_1, \ldots, P_n:S_n) \#ADD \).
The TPL's must be constructed in execution time if a certain module \( N \) which calls \( POLY(M(T_1, \ldots, T_n)) \#ADD \) is a type-parameterized module and \( T_i \) coincides with one of the formal type parameters of \( N \). For example, in the realization part of \( BIPOLY(P_1:RING) \) (in Fig.1.2), \( POLY(POLY(P_1)) \#ADD \) is called. In this case, the processor prepares an "incomplete" TPL in compilation time, which is linked from the PDL for \( POLY(P:RING) \).
When \( BIPOLY(P_1:RING) \#ADD \) is called with some actual type parameter, say \( T \), the cell marked "*" is linked to the PDL for \( T \).
Such a process to construct a complete TPL in execution time is called a TPL linkage. Note that if \( POLY(POLY(P_1)) \#ADD \) in the realization part of \( BIPOLY(P_1:RING) \) were replaced by \( POLY(P_1) \#ADD \), no TPL linkage would be required, since the actual type parameter that BIPOLY(P1:RING) receives can be sent to POLY(P:RING)#ADD directly.
4. Runtime TPL/PDL linkages
The incomplete portions of TPL/PDL's (i.e., those which require dynamic linkage in execution time to construct information about actual type parameters) must be linked carefully so that information already constructed is retained. When an incomplete TPL/PDL is linked in execution time, if the same TPL/PDL has already been linked to construct information about the actual type parameters of a currently active module instance, then this old information is violated. Such a situation may not occur so often in actual programming. Theoretically, however, it is possible to create such a situation, as shown in the following example.
Consider how BIPOLY(BIPOLY(INT))#ADD is executed (though this is quite a pathological case). Since POLY(POLY(P1))#ADD appears in the realization part of BIPOLY(P1:RING) (see Fig.1.2) and the actual type parameter to P1 is BIPOLY(INT), POLY(POLY(BIPOLY(INT)))#ADD will be called in executing BIPOLY(BIPOLY(INT))#ADD. Then the actual ADD for P#ADD in the realization part of POLY(P:RING) is POLY(BIPOLY(INT))#ADD, and so forth. The diagram below shows those functions which will be called in executing BIPOLY(BIPOLY(INT))#ADD, in order:
BIPOLY(BIPOLY(INT))#ADD
  POLY(POLY(BIPOLY(INT)))#ADD
    POLY(BIPOLY(INT))#ADD
      BIPOLY(INT)#ADD
        POLY(POLY(INT))#ADD
          POLY(INT)#ADD
            INT#ADD
Fig. 4.1 shows the state of the runtime stack and PDL's when BIPOLY(BIPOLY(INT))#ADD is called. As mentioned in the previous section, the cell marked "*" must be linked to the PDL for the actual type parameter that BIPOLY(P1:RING) receives (to the PDL for BIPOLY(INT), in this case). Then, in the course of executing BIPOLY(BIPOLY(INT))#ADD, BIPOLY(INT)#ADD will be called. This time the same cell "*" is to be linked to the PDL for INT.
Fig. 4.1 Just before BIPOLY(BIPOLY(INT))#ADD is called
Whenever such a violation of already constructed information occurs, the status of the TPL/PDL's must be restored when the function which caused the violation ends its execution. This situation can arise for any type-parameterized module which requires dynamic TPL/PDL linkages in execution time and an instance of which is nested in another. Since the latter condition cannot be determined with module-wise compilation, for any type module that requires dynamic TPL/PDL linkage we must prepare for the situation above.
The well-known 'stack' mechanism is well adapted for the purpose. Before going to the actual mechanism adopted in the language processor, we present a virtual mechanism as an intermediate step.
Given a type-parameterized module $M(P_1:S_1,\ldots,P_n:S_n)$ which requires TPL/PDL linkages, suppose that the PDL's corresponding to $P_{\pi_1},\ldots,P_{\pi_m}$ must be linked, where $\{\pi_1,\ldots,\pi_m\}$ is a subset of $\{1,\ldots,n\}$. For each $j$ ($1 \leq j \leq m$), a stack $ST_j$ is prepared. When a function of $M(P_1:S_1,\ldots,P_n:S_n)$ is called from outside $M(P_1:S_1,\ldots,P_n:S_n)$,
1) the first node of each PDL corresponding to $P_{\pi_j}$ is pushed on $ST_j$, and
2) each cell in the incomplete TPL/PDL which must be linked to the actual type parameter corresponding to $P_{\pi_j}$ is set to point to the node just pushed on $ST_j$.
When the execution of a function of $M(P_1:S_1,\ldots,P_n:S_n)$ which is called from outside*** $M(P_1:S_1,\ldots,P_n:S_n)$ is completed,
3) $ST_1,\ldots,ST_m$ are popped, and
4) each cell in 2) is relinked to point to the node on the top of the stack.
Even if many module instances of a single module appear, the level of their nesting seems to stay low. Accordingly, these stacks need not be so large. This is the reason why we call these stacks small stacks. Thus the memory space is not wasted with this mechanism. Fig.4.2 shows two stages of the small stack for $BIPOLY(P_1:RING)$ in the process of executing $BIPOLY(BIPOLY(INT))\#ADD$.
---------------------
*** When a function of $M(P_1:S_1,\ldots,P_n:S_n)$ is called from inside of the module, the actual type parameters remain unchanged. Thus the process is unnecessary.
Fig. 4.2(a) When BIPOLY(BIPOLY(INT))#ADD is called
Fig. 4.2(b) When BIPOLY(INT)#ADD is called
There is room for improvement to cover the following inefficiencies.
1. An entire node must be pushed on the small stack.
2. Each incomplete TPL/PDL must be linked and relinked. This may be a problem when a module requires many incomplete TPL/PDL's.
The improvement can be realized with the indirect addressing mechanism of the DEC-20, which is also found in much other computer hardware.
With this mechanism, one need only push on the small stacks the pointers to the PDL the module receives, not the entire node pointed to by the pointer. Moreover, TPL linkages are done automatically.
The DEC-20 CPU calculates effective address as follows (if no indexing is used): each memory and instruction word contains an 18-bit address part and a 1-bit indirect flag. If an instruction word must reference memory, its indirect flag is tested. If it is off, the number in its address part is the effective address. If it is on, addressing is indirect, and the processor retrieves another address word from the location specified by that address part. This new word is processed in exactly the same manner. This process continues until some referenced location is found with indirect flag off: the number in its address part is the effective address.
Suppose, for instance, that there is a chain of pointers as shown in the figure below.
X1:[1|X2] --> X2:[1|X3] --> X3:[0|X4]    X4:[0|X5] --> X5:[1|...]
Here each cell represents a word and the left hand side of each cell contains an indirect bit with 1 for on and 0 for off.
An indirect load instruction from X1 (i.e. indirect load instruction whose address part is X1) is executed as follows. The processor retrieves the content of X1. Since the indirect flag is on, it retrieves the content of X2. Again, the flag is on and the content of X3 is retrieved. The flag being off this time, X4 is the effective address. Thus the content of X4 (i.e. the pointer to X5) is loaded.
Note that the instruction yields the same result as above even when the chain of pointers is replaced by:
X1: [0|X4]   X4: [0|X5]   X5: [1|...]
Now we are ready to explain the improved small stack mechanism.
As before, when M(P1:S1, ..., Pn:Sn) is compiled, small stacks ST1, ..., STM are prepared. In addition, stack pointers SP1, ..., SPM are prepared one for each STj. Each cell in the
incomplete TPL/PDL that must be linked to the actual type parameter corresponding to \( P_{ij} \) contains the pointer to \( SP_j \), and its indirect flag is set on. Each cell which contains a pointer to such a cell also has its flag set on. The flags of all other cells are set off. (See Fig. 4.3)
When a function of \( M(P_1:S_1, \ldots, P_n:S_n) \) is called from outside of the module, each pointer to the PDL corresponding to \( P_{ij} \) is pushed on \( ST_j \). When the execution of a function of \( M(P_1:S_1, \ldots, P_n:S_n) \) which is called from outside of the module is completed, each \( ST_j \) is simply popped.
With this method, Fig. 4.2(a) is revised as follows.
Fig. 4.3 When \( BIPOLY(BIPOLY(INT))\#ADD \) is called
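The push/pop discipline just described can be sketched as follows; the names are hypothetical and the stacks here hold symbolic PDL labels rather than machine pointers.

```python
# Illustrative sketch of the improved small-stack mechanism: one small
# stack STj (addressed through SPj) per formal type parameter Pj.
small_stacks = {"SP1": []}                 # ST1 for P1 of BIPOLY(P1:RING)

def call_from_outside(actual_pdls):
    """On a call from outside the module, push each actual PDL pointer."""
    for sp, pdl_ptr in actual_pdls.items():
        small_stacks[sp].append(pdl_ptr)

def return_to_outside(formal_params):
    """When the outside call completes, each stack is simply popped."""
    for sp in formal_params:
        small_stacks[sp].pop()

call_from_outside({"SP1": "PDL<RING, BIPOLY(INT)>"})  # outer instance
call_from_outside({"SP1": "PDL<RING, INT>"})          # nested instance
assert small_stacks["SP1"][-1] == "PDL<RING, INT>"    # indirection lands here
return_to_outside(["SP1"])                             # nested call completed
assert small_stacks["SP1"][-1] == "PDL<RING, BIPOLY(INT)>"
```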
Now return to the problem of how, in executing POLY(P:RING)#ADD, the actual ADD is retrieved and the actual type parameters are sent to the module to which the actual ADD belongs.
The P#ADD call in POLY(P:RING) is done as follows.
Step 1. Search the PDL it receives to find a node whose first cell is nil.
Step 2. Load the third word of the cell to a register, say, LX.
Step 3. If LX is not nil then load a pointer indirectly from LX and push it on the runtime stack. Else go to Step 5.
Step 4. Load the second word of the node pointed to by LX and go to Step 3.
Step 5. Get the actual ADD from the procedure table pointed to by the node found in Step 1 and call it.
(Steps 1 and 5 are simplified for brevity.)
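Steps 2 through 4 can be sketched as follows. The node layout (a "pdl" pointer word plus a "next" link) and the representation of small-stack indirection are assumptions made for illustration only.

```python
# Hypothetical model of Steps 2-4: walk the TPL, resolving each pointer
# word through the small stacks when its indirect flag is on, and push
# the resulting PDL pointers on the runtime stack.
def resolve(word, small_stacks):
    """Chase indirect flags through small-stack top cells."""
    flag, part = word
    while flag == 1:                        # indirect: via a small-stack cell
        flag, part = small_stacks[part][-1]
    return part

def push_actual_pdls(node, small_stacks, runtime_stack):
    while node is not None:                 # Steps 3 and 4
        runtime_stack.append(resolve(node["pdl"], small_stacks))
        node = node["next"]

# A one-node TPL whose pointer word is linked through small stack SP1:
small_stacks = {"SP1": [(0, "PDL<RING, INT>")]}
tpl = {"pdl": (1, "SP1"), "next": None}
stack = []
push_actual_pdls(tpl, small_stacks, stack)
assert stack == ["PDL<RING, INT>"]
```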
To see the soundness of the above algorithm, we trace the steps in two cases: one where POLY(P:RING) receives a PDL generated at compile time, and another where the TPL is linked through the small stack at execution time.
Case 1. Suppose POLY(P:RING) receives a PDL of the form:
step 1. The search stops immediately since the first cell of node X1 is nil.
step 2. LX contains: [0|X2] (indirect flag off, address part X2)
step 3. Since the indirect flag is off in LX, the effective address is X2. The first cell of X2 is pushed on the runtime stack.
step 4. The second word of X2 is loaded to LX.
step 5. POLY(P:RING)#ADD is called.
Case 2. Consider the execution of BIPOLY(INT)#ADD. When POLY(POLY(INT))#ADD is called from BIPOLY(INT)#ADD, POLY(P:RING) receives a PDL whose TPL is linked through the small stack for BIPOLY(P1:RING).
The trace is the same as in Case 1 except for the first Step 2 and Step 3.
step 2. LX contains: [1|X1] (indirect flag on)
step 3. Since the indirect flag is on in LX, the effective address is X4. (See the example presented in the explanation of the indirect addressing mechanism.) The first word of X4 (i.e., the pointer to X3) is pushed on the runtime stack.
When POLY(P:RING)#ADD is called at Step 5, the state of the runtime stack is:
Thus in both cases POLY(P:RING)#ADD is called with valid PDL set on the runtime stack.
In Step 1 of the algorithm, indirect instructions will be used, as in Step 3, since the PDL may also be linked through small stacks. We leave it to the reader to detail Steps 1 and 5 of the algorithm.
Note. Treatment of assignment and equality
As mentioned in section 1, every sype or type in \( \lambda \) is supposed to have its own EQUAL function. The truth value of the equality between two objects of a type \( T \) is determined by the EQUAL of \( T \) (i.e. \( T\text{\#EQUAL} \)). If \( T\text{\#EQUAL} \) is not defined in the realization part of the type module \( T \), then the system automatically generates code for \( T\text{\#EQUAL} \) so that the EQUAL of the type by which \( T \) is represented is called.
The assignment (ASSIGN) is a data-type independent program construct in language \( \lambda \) and is never given implementation in the realization part of any type module. In the implementation of the language, however, it is convenient to consider that each type \( T \) has its own assignment among the basic operations of \( T \) and we conveniently denote it as "\( T\text{\#ASSIGN} \)" as if ASSIGN were a primitive function of \( T \). Thus, for example, the assignment statement
\[
X := Y
\]
(where \( X \) and \( Y \) are variables of type \( T \)) is considered as:
\[
T\#ASSIGN(X, Y)
\]
In this way, EQUAL's and ASSIGN's can be treated in the same manner as (other) primitive functions.
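The convention can be illustrated with a hypothetical sketch (the class and names below are invented for illustration): each type module carries its own EQUAL and ASSIGN, and a missing EQUAL delegates to the representing type's EQUAL, mirroring the automatic code generation described above.

```python
# Hypothetical sketch: ASSIGN and EQUAL treated as per-type primitives.
class TypeModule:
    def __init__(self, name, rep=None, equal=None):
        self.name, self.rep = name, rep
        # If no EQUAL is defined, the system-generated T#EQUAL calls the
        # EQUAL of the type by which T is represented.
        self.equal = equal if equal else (lambda a, b: rep.equal(a, b))

    def assign(self, env, x, y):
        """X := Y is treated as T#ASSIGN(X, Y)."""
        env[x] = env[y]

INT = TypeModule("INT", equal=lambda a, b: a == b)
COUNT = TypeModule("COUNT", rep=INT)   # a type represented by INT
assert COUNT.equal(3, 3)               # delegates to INT#EQUAL
env = {"X": 0, "Y": 7}
INT.assign(env, "X", "Y")              # X := Y  ==  INT#ASSIGN(X, Y)
assert env["X"] == 7
```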
In executing POLY(P:RING), if \( P\text{\#EQUAL} \) or \( P\text{\#ASSIGN} \) is required, the actual EQUAL or the actual ASSIGN must also be retrieved from the procedure description list that POLY(P:RING) receives. Thus we extend each procedure table PT<S,T> so that its "\(-1\)"-th and "\(0\)"-th entries contain T#ASSIGN and T#EQUAL, respectively. For example, PT<RING, INT> (in Fig. 3.1) is extended as follows:
```
|--------|-------| INT#ASSIGN
|--------|-------| INT#EQUAL
|--------|-------| INT#ZERO
|--------|-------| INT#ONE
|--------|-------| INT#ADD
|--------|-------| INT#MULT
|--------|-------| INT#REV
```
When the actual EQUAL or ASSIGN is retrieved, since they are always contained in the fixed entries in any procedure table, the intermediate adaptor tables in the PDL need not be used. Therefore the PDL is simply traversed to find the node which contains the procedure table. This indicates that the retrieval of the actual EQUAL or ASSIGN is faster than that of other primitive functions.
Remember that EQUAL is the only primitive function of sype ANY. For the same reason as above, for any sype or type S, we need no AT<ANY, S> at all. For example, when a function of ARRAY(P3: ANY) is called in POLY(P: RING), the PDL that POLY(P: RING) receives can be sent to ARRAY(P3: ANY) as it is, without TPL linkage. Actually, most of the sype-sype or sype-type relations are of the form ANY < S, so this consideration may greatly increase efficiency.
ACKNOWLEDGEMENTS
The author wishes to express his appreciation to Professor Keiji Nakajima for patiently supervising this research.
Order additional copies as directed on the Software Information page at the back of this document.
digital equipment corporation • maynard, massachusetts
First Printing, October, 1974
The information in this document is subject to change without notice and should not be construed as a commitment by Digital Equipment Corporation. Digital Equipment Corporation assumes no responsibility for any errors that may appear in this manual.
The software described in this document is furnished to the purchaser under a license for use on a single computer system and can be copied (with inclusion of DIGITAL's copyright notice) only for use in such system, except as may otherwise be provided in writing by DIGITAL.
Digital Equipment Corporation assumes no responsibility for the use or reliability of its software on equipment that is not supplied by DIGITAL.
Copyright © 1974 by Digital Equipment Corporation
The HOW TO OBTAIN SOFTWARE INFORMATION page, located at the back of this document, explains the various services available to DIGITAL software users.
The postage prepaid READER'S COMMENTS form on the last page of this document requests the user's critical evaluation to assist us in preparing future documentation.
The following are trademarks of Digital Equipment Corporation:
- CDP
- COMPUTER LAB
- COMSYST
- COMTEX
- DDT
- DEC
- DECCOMM
- DECTAPE
- DIBOL
- DIGITAL
- DNC
- EDGRIN
- EDUSYSTEM
- FLIP CHIP
- FOCAL
- GLC-8
- IDAC
- IDACS
- INDAC
- KA10
- LAB-8
- LAB-8/e
- LAB-K
- OMNIBUS
- OS/8
- PDP
- PHA
- PS/8
- QUICKPOINT
- RAD-8
- RSTS
- RSX
- RTM
- RT-11
- SABR
- TYPESET 8
- UNIBUS
This document describes the hardware and software requirements, system installation procedures, and the operational aspects of sending data to and receiving data from a remote computer system which supports the 2780 Data Transmission Terminal or a PDP-11 computer with the 2780 software. The description applies only to RSTS/E (Resource Sharing Time Sharing/Extended) systems capable of supporting the RSTS/2780 software. Features directly related to the 2780 environment itself are described in the document 2780 Remote Computer Systems Installation Notes.
In this document, messages printed by the system or by a program are underlined to differentiate them from responses typed by the user. Shown below are titles and order numbers of related documents the user can obtain from the system manager at the local installation.
<table>
<thead>
<tr>
<th>Document</th>
<th>Order Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>RSTS-11 System User's Guide</td>
<td>DEC-11-ORSUA-C-D</td>
</tr>
<tr>
<td>2780 Remote Computer Systems Installation Notes</td>
<td>DEC-11-CCDNA-A-D</td>
</tr>
</tbody>
</table>
The 2780 Remote Computer Installation Notes document is included as part of the RSTS/2780 software to give the RSTS/E system manager and system programmer guidelines to the requirements that an IBM operating system imposes on his data processing operation. The document presents the information the system manager requires to prepare his IBM installation to support the RSTS/2780 software. Because the operation of various IBM operating systems varies, the user must consult the applicable IBM documentation and the system manager of the host IBM installation.
CONTENTS
Chapter 1 INTRODUCTION TO RSTS/2780 SOFTWARE
1.1 HARDWARE REQUIREMENTS
1.2 SOFTWARE REQUIREMENTS
1.3 OVERVIEW OF RJ2780 OPERATIONS
Chapter 2 TRANSMITTING AND RECEIVING DATA
2.1 RUNNING AND TERMINATING RJ2780
2.2 ESTABLISHING A COMMUNICATIONS LINK
2.3 INTERACTIVE OPERATIONS
2.4 SPOOLED OPERATIONS
2.5 SENDING BINARY FILES
2.6 ERROR MESSAGES
Appendix A BUILD PROCEDURES
Appendix B ESC CHARACTER TRANSLATION
INDEX
TABLES
2-1 RJ2780 Operations
2-2 Responses to NORMAL OR TRANSPARENCY MODE Query
2-3 Responses to the DEFAULT OUTPUT FILE Query
2-4 RJ2780 Program Commands
2-5 RJ2780 Error Messages
B-1 ESC Character Translation
CHAPTER 1
INTRODUCTION TO RSTS/2780 SOFTWARE
The RSTS/2780 software package enables a RSTS/E system to act as a powerful remote job entry terminal. Using RSTS/2780, RSTS/E users can queue data and job control files for transmission to a host IBM 360 or 370 system or to another PDP-11 Remote Computer System.
The RSTS/2780 software consists of two software components: a driver module for the synchronous line interface and a control program for managing the flow of data to and from the driver module. The driver module and the control program are included in the RSTS/E system at system generation time. Thereafter, the user's sole interface with the remote job device is through the control program RJ2780.
The RJ2780 program is coded in BASIC-PLUS and resides in the system library account on the system disk. It provides both interactive and spooled methods of operation. Employed interactively, a single user establishes the data link; he specifies directly the files to be transmitted to the host system and the destination of the received data. In spooled operation, RJ2780 runs as a spooling program and transmits files as they are queued by any RSTS/E user running the standard queueing system program. In spooled operation, received data is appended to one output file or stored in dynamically created separate disk files.
1.1 HARDWARE REQUIREMENTS
To run RSTS/2780 software, the RSTS/E system requires the KG11A communications arithmetic unit and either a DP11 or DU11 serial synchronous line interface unit. The KG11A unit performs error checking for serially transmitted data and is used with either the DP11 or the DU11 device to block and deblock data transmitted over a serial synchronous line. The system software supports only one unit of either the DP11 or the DU11 devices although multiple units can be connected to the computer.
The RSTS/2780 driver module requires 6K words of memory and four big buffers (256-words each). The control program requires a user job area of 16K words to run.
1.2 SOFTWARE REQUIREMENTS
To successfully install the remote job driver (RJ) module and the RJ2780 control program on the RSTS/E system, the media on which the RSTS/2780 software is delivered to the customer must be the same as that which is employed to generate the RSTS/E system. For the RJ2780 program to operate, the user must include the Record I/O software option when he configures his system. Specific system installation information is given in the RSTS/E System Manager's Guide and guidelines for installation are given in Appendix A of this document.
Spooling of jobs on RSTS/E requires the interaction of the QUEMAN and QUE system programs and the specific spooling program. Thus, for RJ2780 to conduct spooled operations, the QUEMAN program must be running on the system. For more information on spooled operations on RSTS/E, see Sections 6.10 and 4.1.3 of the RSTS/E System Manager's Guide.
1.3 OVERVIEW OF RJ2780 OPERATIONS
The user employs the 2780 capability by simply running the RJ2780 program from the system library. By typing responses to questions the program prints, the user indicates the nature of the site he wants to communicate with, the type of transmission he wishes the program to perform, and the name(s) of the file(s) to use as output for received data.
When communicating with an IBM operating system, RJ2780 transmits records of 80 characters and receives records as large as 132 characters. When communicating with another PDP-11 computer having the 2780 capability, RJ2780 transmits and receives records up to 132 characters long.
The user can condition the RJ2780 program to transmit data with or without translation. If the user chooses translation, the program converts the data to EBCDIC format before transmitting it. When receiving, RJ2780 translates the data to ASCII format. During translation, all ESC character sequences are converted to give the proper character and line spacing information. If the user does not wish translation, he merely designates the file as binary and the program transmits it without translation. ¹
Employed in an interactive fashion, the RJ2780 program accepts requests one at a time from the user terminal. To transmit data, the user types a request specifying the remote job device (RJ:) as output and the disk file to transmit as input. The program accesses the file and prepares the data for transmission. The program automatically directs data received to a default disk file unless the user overrides the default by specifying another file as output and the RJ: device as input.
Employed in a spooled fashion, the RJ2780 program runs as other RSTS/E spooling programs run. Any user on the system can create requests for RJ2780 by running the QUE program and specifying RJ: as the spooled device. The queue management system program QUEMAN running on the system sends, one by one, the pending requests to the RJ2780 program. RJ2780 executes requests and directs data received to the conventionally established disk file(s).
In either interactive or spooled operations, the user can specify that RJ2780 automatically queue all received data for printing. As a result, for each file received, RJ2780 creates a request which the QUE program assigns to a line printer spooling program SPOOL. Thus, if the RJ2780 program operates in a spooled fashion for both input and output requests, no user intervention is necessary between creating the request for the RJ device and removing the printed output from the line printer.
¹The requirements of the host system determine whether or not a file may be transmitted without translation (in binary mode). Binary mode must be used when transmitting between PDP-11 systems.
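The ASCII/EBCDIC translation described above can be sketched with Python's built-in `cp037` codec. Note that `cp037` is one common EBCDIC variant; the exact code page used by RJ2780 is an assumption here.

```python
# Illustrative round trip between ASCII text and EBCDIC bytes using the
# standard-library "cp037" codec (an EBCDIC code page).
record = "JOB CARD"
ebcdic = record.encode("cp037")            # translate before transmission
assert ebcdic != record.encode("ascii")    # the byte encodings differ
assert ebcdic.decode("cp037") == record    # received data translated back
```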
CHAPTER 2
TRANSMITTING AND RECEIVING DATA
To transmit data to and receive data from a remote computer system, the user must run the RJ2780 system program, initiate a communications link with the remote station, and type commands to control the transmission. The user transmits data interactively as described in Section 2.3 or conducts spooled operations specified by queued requests as described in Section 2.4. The RJ2780 system program must be run from a terminal logged into the system under a privileged account.
2.1 RUNNING AND TERMINATING RJ2780
To run the RJ2780 program stored in the system library, type the following system command.
```
RUN $RJ2780
```
The program runs and prints two lines. The first line contains the program and system names and version numbers. The second line is a query requesting the type of operation to perform.
For example,
```
RJ2780 V01-01 RSTS V05B-24
GENERAL, 2780, OR RSTS-TO-RSTS?
```
To condition the program properly, the user must type one of the responses described in Table 2-1.
To communicate with most IBM operating systems, type 2780 in response to the GENERAL, 2780, OR RSTS-TO-RSTS query. The program subsequently prints the NORMAL OR TRANSPARENCY MODE query, the responses to which are described in Table 2-2. Unless the user wants to transmit a binary file or unless the remote computer system requires transparency mode,1 the NORMAL response applies in all cases.
To communicate with another PDP-11 computer having the 2780 capability, type GENERAL in response to the GENERAL, 2780, OR RSTS-TO-RSTS query. RJ2780 subsequently prints the NORMAL OR TRANSPARENCY MODE query. Unless the user wants to transmit binary files, he can type NORMAL in answer to the MODE query.
1Some IBM operating systems require that information be processed in transparency mode. The user must determine the requirements of the IBM system he is communicating with and run the RJ2780 program accordingly.
### Table 2-1
#### RJ2780 Operations
<table>
<thead>
<tr>
<th>Response</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>2780</td>
<td>Conditions program to transmit 80-character card image records to an IBM system having the 2780 capability and to receive records up to 132 characters long.</td>
</tr>
<tr>
<td>GENERAL</td>
<td>Same as 2780 but RJ2780 can transmit records up to 132 characters long. Used to communicate with other PDP-11 computers having the 2780 capability.</td>
</tr>
<tr>
<td>RSTS-TO-RSTS</td>
<td>Conditions program to transmit and receive 132 character records while communicating with another RSTS/E system running the RJ2780 program. RJ2780 automatically operates in TRANSPARENCY mode and transmits all files as binary files. ESC character sequences are not processed.</td>
</tr>
<tr>
<td>RETURN key or any response not beginning with 2 or R.</td>
<td>Same as typing GENERAL.</td>
</tr>
</tbody>
</table>
### Table 2-2
#### Responses to NORMAL OR TRANSPARENCY MODE Query
<table>
<thead>
<tr>
<th>Response</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>NORMAL</td>
<td>Treat all data as ASCII characters and translate them to EBCDIC format for transmission. Upon receiving data, assume EBCDIC format and translate characters and ESC character sequences to ASCII format before storing them.</td>
</tr>
<tr>
<td>TRANSPARENCY</td>
<td>Similar to NORMAL but transmit characters in EBCDIC transparency mode and allow user to designate a file as binary.</td>
</tr>
<tr>
<td>Type RETURN key or any response not beginning with T</td>
<td>Same as typing NORMAL.</td>
</tr>
</tbody>
</table>
1Regardless of the operation, the program receives records to a maximum of 132 characters per record.
To communicate with another RSTS/E system having the 2780 capability, type RSTS-TO-RSTS in response to the GENERAL, 2780, OR RSTS-TO-RSTS query. The program subsequently omits the NORMAL OR TRANSPARENCY MODE query and sets the mode to transparency and treats all files as if they were binary.
After the program sets the transmission mode, it prints the DEFAULT OUTPUT FILE query. The response to this query determines how RJ2780 stores jobs received from the remote station. The valid responses are listed in Table 2-3. A filename specification with only an asterisk in the extension field creates a unique file for each job received. The extensions of the specified filename begin with 001 and increase as required to 999. For example, a response of FILE.* causes RJ2780 to create a file named FILE.001 for the first job received, FILE.002 for the second job, and so on as required. A standard file specification given in response to the DEFAULT OUTPUT FILE query creates only a single file. For each job received following the first job, RJ2780 appends the data to that single file.
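The `filnam.*` numbering convention can be illustrated with a short sketch; the helper below is hypothetical, not part of the actual RJ2780 program.

```python
# Illustrative sketch: each received job gets a fresh extension 001..999.
def output_name(base, job_index):
    """Return the output file name for the job_index-th job received."""
    if not 1 <= job_index <= 999:
        raise ValueError("extensions run from 001 to 999")
    return f"{base}.{job_index:03d}"

assert output_name("FILE", 1) == "FILE.001"   # first job received
assert output_name("FILE", 2) == "FILE.002"   # second job, and so on
```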
Table 2-3
Responses to the DEFAULT OUTPUT FILE Query
<table>
<thead>
<tr>
<th>Response</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>filnam.*</td>
<td>Automatically creates a disk file for each job received. RJ2780 creates the files dynamically with the name specified and extensions 001 through 999.</td>
</tr>
<tr>
<td>filnam.ext</td>
<td>Program creates only one output disk file. For each job received, RJ2780 appends data to the file.</td>
</tr>
<tr>
<td>filnam.*/Q:n</td>
<td>Same as filnam.* response except that RJ2780 automatically creates requests to queue file(s) for output on line printer unit n. If only /Q follows the file name specification, RJ2780 creates the request for any available line printer unit.</td>
</tr>
</tbody>
</table>
To cause RJ2780 to automatically queue the output file for subsequent printing, the user can type /Q:n after the filename specification. If only /Q is typed after the file specification, the QUEMAN system program uses any line printer unit available. If a number for n is given, QUEMAN queues a request for that line printer unit number. The QUEMAN program automatically deletes each file after printing it.2
After a response to the DEFAULT OUTPUT FILE query is entered, the program opens the remote job device (RJ:).
1It is strongly recommended that the line printer not be used as the output file. If for any reason the device goes off line, the program possibly terminates and subsequently stops receiving the file.
2Refer to Section 6.10 of the RSTS/E System Manager's Guide for more information concerning QUEMAN.
The program indicates its readiness to accept commands by printing the asterisk (*) character. The following sample dialog shows the entire procedure.
```
GENERAL, 2780 or RSTS-TO-RSTS? 2780
NORMAL OR TRANSPARENCY MODE? NORMAL
DEFAULT OUTPUT FILE? GMB.*/Q:1
```
When RJ2780 prints the first * character, the user must put the data set in the READY state by making the connection with the remote computer. (With a leased line, the connection is made automatically.) In response to the asterisk, the user can thereafter type a command as described in Section 2.2.
To terminate the program, the user types the CTRL/Z combination in response to the asterisk character printed at the terminal.
```
*^Z
READY
```
The READY message indicates that RJ2780 is terminated and control is at BASIC-PLUS command level.
### 2.2 ESTABLISHING A COMMUNICATIONS LINK
Disks containing files to be transmitted or to be used for logging operations or for job output must be mounted, ready and on line. If at any time during transmission a device leaves the READY state, RJ2780 prints the message DEVICE HUNG OR WRITE-LOCKED and omits the job requesting that device.
Once the user establishes the link with the remote station, he can type RJ2780 commands described in Section 2.3 to control data transmissions. For example, to transmit the commands to log the user onto an IBM remote computer system, the user might type a command to send a file containing the proper IBM commands.
```
*RJ:=SIGNON.RJE
```
The RJ2780 program reads and processes the command in about ten seconds. If no errors occur, the program prints the asterisk after processing the command. The program subsequently prints messages telling the user that it is sending the file and that it has completed sending the file. It is up to the user to determine the correct procedures and passwords required by the remote station.
The RJ2780 program receives responses from the remote station and writes them to the disk file the user specified in the DEFAULT OUTPUT FILE query.
The following sample dialog shows the process.
```
*01-MAY-74 08:36 PM SENDING: SIGNON.RJE
01-MAY-74 08:36 PM 1 RECORD SENT
```
The program does not print the asterisk again until after the user types another request. The user can type commands ahead; the program processes them in the order in which they are entered.
2.3 INTERACTIVE OPERATIONS
A user conducts interactive operations with the remote computer system by typing commands as shown in Table 2-4. For example, to transmit a data file DEC.RJE, the user types the following command.
*RJ:=DEC.RJE
The program opens the file DEC.RJE under the current user's account on the system disk. If the program cannot access the file for any reason, it prints a message detailing the type of error, the file-name and the text of the specific RSTS error. For example, if DEC.RJE does not exist, the program prints the following text.
SEND ERROR: DEC.RJE: CAN'T FIND FILE OR ACCOUNT AT LINE 2480
When the program begins transmission, it prints two messages informing the user that transmission is progressing. When transmission is completed, RJ2780 prints a message indicating how many records it sent. The following sample printout shows the process.
```
01-MAY-74 08:39 PM SENDING: DEC.RJE
01-MAY-74 08:39 PM 2 RECORDS SENT
```
Following the printout, the program does not print the asterisk.
Table 2-4
RJ2780 Program Commands
<table>
<thead>
<tr>
<th>Format</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>RJ:=d1,d2,...,dn</td>
<td>Transmits the data stored in the disk files denoted by the standard RSTS file specifications d1, d2 through dn. RJ2780 transmits one END OF FILE indicator following the last file in the request.</td>
</tr>
<tr>
<td>d=RJ:</td>
<td>Conditions the RJ2780 program to write the next received job to the file denoted by the standard RSTS file specification d. The * character is not allowed in the file specification and the /Q syntax is not allowed.</td>
</tr>
<tr>
<td>SPOOL</td>
<td>Puts the RJ2780 program in spooled mode as described in Section 2.4.</td>
</tr>
<tr>
<td>CTRL/Z combination</td>
<td>Terminates the RJ2780 program and disconnects the data set.</td>
</tr>
</tbody>
</table>
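The command formats in Table 2-4 can be summarized programmatically. The following is an illustrative modern sketch in Python, not part of the RJ2780 software; the function name and the (action, argument) return values are invented for illustration, and the CTRL/Z combination is omitted because it is a typed control character rather than a command line.

```python
# Illustrative modern sketch (Python); not part of the RJ2780 software.
# Classifies command lines according to the formats in Table 2-4.
def parse_command(line):
    """Return an invented (action, argument) pair for an RJ2780-style command."""
    line = line.strip()
    if line.upper() == "SPOOL":
        return ("spool", None)          # enter spooled mode (Section 2.4)
    if line.upper().startswith("RJ:="):
        # RJ:=d1,d2,...,dn -- transmit one or more disk files
        files = [f.strip() for f in line[4:].split(",") if f.strip()]
        return ("send", files)
    if line.upper().endswith("=RJ:"):
        # d=RJ: -- write the next received job to file d
        return ("receive", line[:-4].strip())
    raise ValueError("unrecognized command: " + line)

assert parse_command("RJ:=SIGNON.RJE") == ("send", ["SIGNON.RJE"])
assert parse_command("ABC=RJ:") == ("receive", "ABC")
```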
Alternatively, the user can temporarily override the default output file by typing a command as shown below.
```
ABC=RJ:
01-MAY-74 8:39 PM RECEIVING FILE: ABC
01-MAY-74 8:39 PM 2 RECORDS RECEIVED
```
The program conditions the system to write the next data received from the remote station to the file ABC on the system disk. This specification takes precedence for one and only one file; for that file, the program does not use the default output file specified at starting time. If file ABC does not exist, the program creates it. After receiving the file, RJ2780 directs further data to the default output file.
2.4 SPOOLED OPERATIONS
To conduct spooled operations, the user types the SPOOL command. A requirement to use SPOOL is that the QUEMAN system program be running on the system. After recognizing the SPOOL command, the program prints queries as shown below.
```
*SPOOL
LOG FILE NAME? KB9:
SIGNOFF FILE NAME? SIGNOF.RJE
DETACHING
```
The query LOG FILE NAME allows the user to specify a disk file or a keyboard device which the program uses to print a log of messages. A keyboard device provides a convenient and readily available means of monitoring the progress of spooled operation. The logging device must not be the current terminal; therefore, the user must not type the designator of the current keyboard. The logging device must not be in use by another job or be logged into the system.

RJ2780 next prints the query SIGNOFF FILE NAME. The user must supply the specification of the file containing the appropriate commands to notify the remote station that the local site desires to terminate operations. RJ2780 sends this file if, while in spooled mode, it receives a shut-down command from QUEMAN. This procedure prevents loss of data by unconditional termination and is used by the QUEMAN program when the SHUTUP system program runs to stop time sharing operations.

After the user types the specification of the signoff file, the program prints the message DETACHING and detaches itself from the current terminal.
Spooling operations are executed based upon the jobs queued in the system file QUEUE.SYS. Any user on the system can enter job requests to be queued by running the QUE system program described in Section 4.11 of the RSTS-11 System User's Guide. The RJ2780 program, when in spool mode, sends a message to the QUEMAN program to begin spooling. As a result, QUEMAN extracts jobs from the QUEUE.SYS file and passes them to the RJ2780 program for processing.
To terminate spooling operations, attach the RJ2780 job to a terminal. For example,
```
READY
ATTACH 5
ATTACHING TO JOB 5
```
In the example, the ATTACH 5 command attaches the RJ2780 job to the terminal. (The example assumes the terminal is logged into the system under the same account RJ2780 uses.) Upon completing any transmission in progress, the program discontinues accepting input from the QUEMAN program and reverts to terminal interaction. An asterisk printed at the terminal indicates that the program is ready to accept commands. RJ2780 continues to receive data from the remote job device. If the QUEMAN program is sending the RJ2780 job a request to process and if, at the same time, the user attaches the job to the terminal, RJ2780 does not process the request. However, the request remains in the QUEUE.SYS file to be processed again when RJ2780 enters SPOOL mode at a later time.
2.5 SENDING BINARY FILES
Translating data is a vital part of communicating with a remote computer system. Character data is stored in RSTS/E in ASCII format. Most computer systems which handle 2780 processing expect character data to be in EBCDIC format. RJ2780 software therefore translates received EBCDIC characters into ASCII to pass to RSTS/E and similarly translates transmitted ASCII characters into EBCDIC format to send to the remote computer system.
Omitting the translation of characters is sometimes required to prevent destruction of data. For example, if a numeric data file (as opposed to a character data file) is to be transmitted, translating it possibly destroys the data. Because of the need to transmit data without translation, RJ2780 recognizes an option which indicates a file is to be transmitted as a binary file.
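The translation described above can be demonstrated with a modern sketch. Python's built-in "cp037" codec implements one common EBCDIC variant, not necessarily the exact translation table RJ2780 uses; the example only illustrates why untranslated transmission matters for non-character data.

```python
# Illustrative modern sketch (Python); not part of the RJ2780 software.
# "cp037" is one common EBCDIC variant, not necessarily RJ2780's table.
text = "HELLO"
ebcdic = text.encode("cp037")              # ASCII/Unicode -> EBCDIC for transmission
assert ebcdic == bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6])
assert ebcdic.decode("cp037") == text      # EBCDIC -> ASCII on receipt

# Why the /B option matters: 'A' is 0x41 in ASCII but 0xC1 in EBCDIC,
# so translating a binary file as if it were character data rewrites
# its bytes in transit.
assert "A".encode("ascii") == b"\x41"
assert "A".encode("cp037") == b"\xc1"
```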
To transmit a file as binary, the user must run RJ2780 in transparency mode and supply the /B option with the file specification. For example, if the user is running interactively, he types a command similar to the following.
```
*RJ:=ABC.DAT/B
*
```
The program subsequently transmits the data without translation. Running in transparency mode ensures that a control character bit pattern in the data does not terminate the transmission at an indeterminate place. If RJ2780 is running as a spooling program and in transparency mode, the user indicates the binary file by supplying the /B option in the QUE command. For example,
```
QUE RJ:STAT=STAT.RJE,ABC.DAT/B
READY
```
QUE runs and creates the job STAT to transmit to the remote computer system. RJ2780 performs the required translation for the file STAT.RJE but suppresses translation for the file ABC.DAT. For more information on queuing files, refer to Section 4.11.2 of the RSTS-11 System User's Guide.
The receiving system must be conditioned to receive a binary file. The /B option must be specified for the output file except when running in RSTS-TO-RSTS mode at both ends.
2.6 ERROR MESSAGES
Errors occurring during 2780 operations are reported by the RJ2780 program and described in Table 2-5. The program prints the error messages on either the current terminal or on the logging device. Certain errors should never occur on a system and are denoted by the abbreviation SPR following their descriptions. If such an error occurs, the user should report it and the conditions under which it occurred. The procedure for filing a Software Problem Report (SPR) is in the SOFTWARE PROBLEMS section of the HOW TO OBTAIN SOFTWARE INFORMATION page at the back of this manual. Errors marked FATAL cause RJ2780 to return to BASIC-PLUS command level during interactive operations or to kill itself during spooled operations. In either case, the user must correct the problem and rerun the request.
Table 2-5
RJ2780 Error Messages
<table>
<thead>
<tr>
<th>Error Text</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>BYTE COUNT TOO LARGE IN XMT</td>
<td>Byte count of record to be transmitted is greater than 80 (for 2780 type operation) or greater than 132 (for RSTS-TO-RSTS or GENERAL type operations.)</td>
</tr>
<tr>
<td>LINE DISCONNECT RECEIVED</td>
<td>Disconnect sequence received from the remote computer system. User must redial. (FATAL)</td>
</tr>
<tr>
<td>2780 HANDLER FAILURE</td>
<td>The 2780 device handler failed. (FATAL)</td>
</tr>
<tr>
<td>0 BYTE COUNT</td>
<td>The byte count of a record to be transmitted is zero. This error indicates a failure of the RSTS/E monitor (SPR).</td>
</tr>
<tr>
<td>DATA-SET-READY TIME-OUT</td>
<td>Once a transmission starts, the RJ2780 program waits 30 seconds if the data set READY condition is not present. If, after 30 seconds, the program does not detect the data set READY condition, the program terminates. The user must run RJ2780 again and redial the remote system. (FATAL)</td>
</tr>
<tr>
<td>GET/PUT INTERLOCK ERROR</td>
<td>The RJ2780 program attempted an invalid input or output operation on the RJ device. (SPR) (FATAL)</td>
</tr>
</tbody>
</table>
Table 2-5 (Cont.)
RJ2780 Error Messages
<table>
<thead>
<tr>
<th>Error Text</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>2780 BUFFER OVERRUN</td>
<td>RJ2780 employs 7 buffers in memory to transmit and receive records. Somehow this mechanism has failed. (SPR) (FATAL)</td>
</tr>
<tr>
<td>NAK/TIMEOUT ON LINE</td>
<td>The RJ2780 program attempts to elicit a valid response from the remote computer system by retrying an operation eight times. If, after eight attempts, only invalid responses are received, the program terminates. Ensure that neither the communications line nor the remote computer system is generating errors. (FATAL)</td>
</tr>
<tr>
<td>REMOTE SYSTEM NOT RESPONDING</td>
<td>Remote system has gone off the air. (FATAL)</td>
</tr>
<tr>
<td>QUEMAN NOT RUNNING--CAN'T RUN</td>
<td>To conduct spooling operation, the QUEMAN system program must be running. (FATAL)</td>
</tr>
<tr>
<td>REMOTE SYSTEM BROKE FILE</td>
<td>The remote computer system transmitted an ETB and EOT character sequence within a message when RJ2780 expected an ETX and EOT character sequence. This condition indicates an unexpected end of the transmission. The program continues processing.</td>
</tr>
<tr>
<td>REMOTE SYSTEM DEMANDED LINE (BID OVERRIDE)</td>
<td>An attempt to transmit to a remote computer is overridden by its attempt to transmit to the local site.</td>
</tr>
<tr>
<td>REMOTE SYSTEM DEMANDED LINE (RVI)</td>
<td>While sending a file, RJ2780 detects a high priority message from the remote computer. Program enters receive state. After receiving the message, program resumes sending the interrupted file.</td>
</tr>
</tbody>
</table>
Table 2-5 (Cont.)
RJ2780 Error Messages
<table>
<thead>
<tr>
<th>Error Text</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>FATAL SYSTEM I/O FAILURE</td>
<td>RJ2780 requires four big buffers to run. If heavy DECTape usage leaves less than four big buffers available, RJ2780 prints this message and terminates. User must inhibit usage of DECTape drives temporarily and rerun the program so that RJ2780 can claim the four buffers it needs.</td>
</tr>
</tbody>
</table>
APPENDIX A

To incorporate the RSTS/2780 software in RSTS/E, the user must include the driver module and control program code when he generates the RSTS/E system. The RSTS/2780 software must reside on the same medium the user employs to generate the system. For example, if RSTS/E software resides on magtape, the RSTS/2780 software must also be on magtape.
To include the driver module software on the system, perform the system generation procedure as described in Chapter 2 of the RSTS/E System Manager's Guide and do each of the following steps.
(a) answer YES to the question concerning the 2780,
(b) designate the proper device in response to the 2780 INTERFACE question, and
(c) answer YES to the RECORD I/O question.
Four big buffers are automatically included in the system when RSTS/2780 software is included in the configuration. Record I/O software is automatically included and the RECORD I/O question is not printed if the multiple terminal feature is included.
As a result of answering these questions, the system generation batch stream prints messages telling the user to mount the tape or disk containing the software for the driver. The batch stream subsequently links the driver module into the RSTS/E system.
To include the RJ2780 control program on the system, the user must follow the guidelines and procedures for using the RJ2780.CTL file when building the system library files as described in Chapter 4 of the RSTS/E System Manager's Guide. As a result of following the proper procedures, the BUILD program stores the compiled form of the RJ2780 program in the system library and changes its protection code to <232>.
APPENDIX B
ESC CHARACTER TRANSLATION
The RJ2780 program translates certain special formatting characters appearing at the beginning of records received from a remote computer system. When the program is performing a 2780 or GENERAL operation, it treats ESC character sequences found in the first two bytes of the received record as described in Table B-1. ESC sequences are treated as pure data when RJ2780 operates in RSTS-TO-RSTS (binary) mode.
Table B-1
ESC Character Translation
<table>
<thead>
<tr>
<th>Sequence</th>
<th>2780 Translation</th>
</tr>
</thead>
<tbody>
<tr>
<td>ESC 4</td>
<td>Removes ESC sequence, suppresses tab expansion and suppresses all forms control. RJ2780 automatically inserts a CR and LF character sequence after each record.</td>
</tr>
<tr>
<td>ESC /</td>
<td>Removes ESC sequence and inserts one LF character after the record.</td>
</tr>
<tr>
<td>ESC S</td>
<td>Removes ESC sequence and inserts two LF characters after the record.</td>
</tr>
<tr>
<td>ESC T</td>
<td>Removes ESC sequence and inserts three LF characters after the record.</td>
</tr>
<tr>
<td>ESC A</td>
<td>Removes ESC sequence and inserts FF and CR character sequence after the record.</td>
</tr>
<tr>
<td>ESC CHR$(9)</td>
<td>Uses this record to define tab positions for printer horizontal format control. See the description of this special feature in the component description of the IBM 2780 Data Transmission Terminal. (The CHR$(9) character is a horizontal tab.)</td>
</tr>
<tr>
<td>ESC other</td>
<td>No translation. Passed to the output file as is.</td>
</tr>
</tbody>
</table>
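The translations of Table B-1 can be sketched as follows. This is an illustrative modern Python sketch, not part of RJ2780: the function name is invented, the ESC CHR$(9) tab-definition case is omitted, and appending CR and LF to records without an ESC sequence is an assumption based on the ESC 4 description.

```python
# Illustrative modern sketch (Python); not part of the RJ2780 software.
# Maps a leading ESC sequence to the forms control of Table B-1.
ESC = "\x1b"

def translate_record(record):
    """Return the record text with Table B-1 forms control appended (invented helper)."""
    if not record.startswith(ESC) or len(record) < 2:
        return record + "\r\n"             # assumed default: CR and LF after each record
    code, body = record[1], record[2:]
    forms = {"4": "\r\n",                  # ESC 4: suppress forms control; CR LF only
             "/": "\n",                    # ESC /: one LF after the record
             "S": "\n\n",                  # ESC S: two LF characters
             "T": "\n\n\n",                # ESC T: three LF characters
             "A": "\f\r"}                  # ESC A: FF and CR after the record
    if code in forms:
        return body + forms[code]
    return record + "\r\n"                 # ESC other: passed through untranslated
```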
HOW TO OBTAIN SOFTWARE INFORMATION
SOFTWARE NEWSLETTERS, MAILING LIST
The Software Communications Group, located at corporate headquarters in Maynard, publishes newsletters and Software Performance Summaries (SPS) for the various Digital products. Newsletters are published monthly, and contain announcements of new and revised software, programming notes, software problems and solutions, and documentation corrections. Software Performance Summaries are a collection of existing problems and solutions for a given software system, and are published periodically. For information on the distribution of these documents and how to get on the software newsletter mailing list, write to:
Software Communications
P. O. Box F
Maynard, Massachusetts 01754
SOFTWARE PROBLEMS
Questions or problems relating to Digital's software should be reported to a Software Support Specialist. A specialist is located in each Digital Sales Office in the United States. In Europe, software problem reporting centers are in the following cities:
Reading, England Milan, Italy
Paris, France Solna, Sweden
The Hague, Holland Geneva, Switzerland
Tel Aviv, Israel Munich, West Germany
Software Problem Report (SPR) forms are available from the specialists or from the Software Distribution Centers cited below.
PROGRAMS AND MANUALS
Software and manuals should be ordered by title and order number. In the United States, send orders to the nearest distribution center.
Digital Equipment Corporation
Software Distribution Center
146 Main Street
Maynard, Massachusetts 01754
Digital Equipment Corporation
Software Distribution Center
1400 Terra Bella
Mountain View, California 94043
Outside of the United States, orders should be directed to the nearest Digital Field Sales Office or representative.
USERS SOCIETY
DECUS, Digital Equipment Computer Users Society, maintains a user exchange center for user-written programs and technical application information. A catalog of existing programs is available. The society publishes a periodical, DECUSCOPE, and holds technical seminars in the United States, Canada, Europe, and Australia. For information on the society and membership application forms, write to:
DECUS
Digital Equipment Corporation
146 Main Street
Maynard, Massachusetts 01754
DECUS EUROPE
Digital Equipment Corporation
International (Europe)
PO Box 340
1211 Geneva 26
Switzerland
Goal-driven behavior in S-CLAIM
Gerard Simons & Alex Garella
# Table of Contents
1. Introduction
2. State of the art in Multi-Agent System programming
3. Concept for goal-driven behavior in S-CLAIM
    3.1 Goal life-cycles of different goal types
    3.2 Goal delegation, cooperative and competitive behavior
    3.3 Goal Priorities
    3.4 Representation of the mental state
    3.5 Inference of sub-goals and plans
    3.6 Finite State Machine Generic Goal
    3.7 Illustrative Example
    3.8 Challenges to Plan Generation
    3.9 Plan Generation
    3.10 Illustrative GraphPlan example
    3.11 Goal Representation in S-CLAIM
4. Software design & Implementation
    4.1 Software Design
    4.2 Implementation
    4.3 Testing
5. Conclusion & Future Work
6. Bibliography
1. Introduction
Multi Agent Systems
Multi-agent systems (MAS) consist of several agents that interact with each other, both cooperatively and competitively, within an environment. By interacting, these agents can solve problems that are difficult for centralized systems to solve. The agents have cognitive and communication capabilities and the capability to achieve goals of various forms.
CLAIM
The CLAIM[1] (Computational Language for Autonomous, Intelligent and Mobile Agents) project constitutes a framework and high-level programming language in which agents can be modeled. CLAIM aims to reduce the gap between the concept and design stages by improving the tools for designing agents, freeing designers from low-level details. Goals, capabilities, processes and messages can be created within this language. The framework also supports mobility: agents are able to migrate from one computational site (e.g. a computer or mobile device) to another. CLAIM agents are organized in a tree-structured hierarchy, and the language contains primitives to modify this hierarchy by migrating agents. The use of goals combined with the mobility of these agents sets CLAIM apart from other agent programming languages.
S-CLAIM
S-CLAIM is an extension of CLAIM. S-CLAIM is now built on top of the JADE platform[2] instead of the SyMPA platform[3]; the reasons for the switch to JADE are its compatibility with Android devices, its extensive documentation, and its worldwide use for more than 10 years. Additionally, S-CLAIM should be even more agent-oriented, to the point that it can be used by people not specialized in programming. The structure of S-CLAIM should be clearer and more understandable thanks to its more high-level approach compared to CLAIM. Goals should be defined in a declarative manner, as in languages such as the GOAL programming language, while retaining and expanding on CLAIM's mobile functionality. The use of declarative goals should make the agents more proficient in cooperative behavior. The reactive and proactive paradigm, where proactive behavior is an agent's desire to fulfill its goals and reactive behavior is its response to messages from other agents or percepts from the environment, should be explicitly built into S-CLAIM.
Introducing goal-driven behavior in S-Claim
Our aim will mainly be to create a proof of concept for goal-driven behavior by implementing goal declaration and functionality in the S-CLAIM language. Things to consider are the way goal-driven behavior is currently used in BDI (belief-desire-intention) systems and how it could be implemented within the S-CLAIM framework. How do the differences between concepts like beliefs, desires and intentions in BDI systems and knowledge, goals and capabilities in S-CLAIM affect the implementation of goal-oriented behavior? How can an agent's goals be reached using its finite set of capabilities? As goals are high-level structures, how should they be translated into the agent's basic capabilities? When and how should agents work together to achieve common goals? What kind of mechanism (e.g. automata or Petri nets) can we design to control the status of the goals?
2. State of the art in Multi-Agent System programming
Declarative goal types
A declarative goal is a state of the environment which the agent wants to reach. Declarative goals have been introduced to simplify the design and implementation of Multi-Agent Systems, and their use has the following advantages:
1. Well suited for a natural description of Multi-Agent Systems
2. Preservation of a high level of abstraction throughout the development process
3. The concept of goals is similar to the way humans think and act.
These points simplify the transition from the design to the implementation phase. This simplification leads to clearer and thus less error-prone code.
To be able to use goals in a declarative way we must define a way to represent them. When representing goals we can distinguish between three types of goals: Achievement goals, Perform goals and Maintain goals. This particular set of goal types has been deduced from literature and implemented systems. [4][5]
Achievement Goals
Achievement goals are used to specify a state in the world that an agent wants to achieve. When the agent adopts an achievement goal, it selects a plan to achieve the desired target, which is specified in the target condition of the achievement goal. When the target has been reached, the achievement goal has succeeded and is dropped as a consequence. An achievement goal fails only when there is no way left for the goal to be achieved; in this case the achievement goal is aborted and dropped. The failure condition indicates when the achievement goal fails and thus aborts.
An example of an achievement goal is AchieveCleanup, specifying that the agent has to clean up at a certain spot which is stored in its knowledge, with target condition: not( wasteAt( 20,16 ) ) and failure condition: not( reachable( 20,16 ) ), where ( 20,16 ) is an ( X, Y ) coordinate. In this case the agent wants to clean up at position 20,16. The AchieveCleanup goal will be achieved once the condition not( wasteAt( 20,16 ) ) becomes true, meaning that there is no more waste at position 20,16. The failure condition states that the achievement goal will fail if the position of the waste is not reachable by the agent.
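The behaviour of such conditions can be sketched as predicates over the agent's knowledge base. The class and encoding below are illustrative assumptions (not S-CLAIM code); only the wasteAt and reachable predicates come from the example:

```python
# Sketch of an achievement goal whose target and failure conditions are
# predicates over a knowledge base modeled as a set of ground facts.
# The AchievementGoal class and tuple encoding are illustrative assumptions.

class AchievementGoal:
    def __init__(self, name, target, failure):
        self.name = name
        self.target = target      # predicate: knowledge -> bool
        self.failure = failure    # predicate: knowledge -> bool

    def status(self, knowledge):
        if self.target(knowledge):
            return "succeeded"    # target condition met: drop the goal
        if self.failure(knowledge):
            return "failed"       # failure condition met: abort and drop
        return "pending"

# The AchieveCleanup example: clean up the waste at position (20, 16).
goal = AchievementGoal(
    "AchieveCleanup",
    target=lambda kb: ("wasteAt", 20, 16) not in kb,
    failure=lambda kb: ("reachable", 20, 16) not in kb,
)

kb = {("wasteAt", 20, 16), ("reachable", 20, 16)}
print(goal.status(kb))           # pending
kb.discard(("wasteAt", 20, 16))
print(goal.status(kb))           # succeeded
```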
Perform Goals
A perform goal specifies activities that have to be performed by the agent. The perform goal will succeed when a corresponding plan has been generated and is performed. If no plan is found or generated the perform goal will fail. Thus the perform goal’s outcome depends solely on whether an activity has been performed. The perform goal contains a redo option which allows the perform goal to execute a plan iteratively.
An example of a perform goal is when an agent has the goal to patrol a certain area. The goal could be named PatrolArea and it would consist of moving through a plane in a certain manner under certain conditions. The agent could move up and down the plane between 22.00 in the evening and 6.00 in the morning.
Maintain Goals
A maintain goal has the purpose of maintaining a certain state in the environment. The agent will monitor this desired state and will re-establish it if it is violated. The maintain goal has a maintain condition and an optional target condition. Once the maintain condition has been violated the agent will select a plan to re-establish the desired state. In some cases the plan has to be executed until a certain target has been reached, which is specified in the target condition. This is the case when an agent has to maintain a charged battery. In this case we can specify a maintain condition: ‘charge state > 20%’, and a target condition: ‘charge state = 100%’. In this example the agent will start charging the battery when the charge state drops to 20% or below and will keep charging until the charge state reaches the target condition of 100%.
Plan Generation
Goals are defined to be able to create, or infer, a series of actions, called a plan. Generating a feasible plan in order to achieve the goals is paramount to creating an effective goal-driven agent.
Plan generation in BDI systems is the act of creating a sequence of actions (a plan) from an agent's given goals and beliefs. Plans are necessary for agents to reach their goals; therefore suitable plan generation is of the greatest importance in any MAS. Because of the real-time and variable characteristics of the environment, the plans have to be kept up to date by means of a mechanism that ensures that the preconditions of the generated plans are consistent with the environment. If the preconditions no longer hold, part of the plan has to be recalculated.
Backward chaining
Backward chaining will be used in S-CLAIM to achieve proactive behavior. It allows the agent to work from a goal in its goal queue back to a capability it can perform. This ‘chain’ back to a simple capability constitutes a sequence of actions (or capabilities) which becomes the agent’s plan to achieve its goal.
Forward chaining
In addition to being proactive, an agent may exhibit reactive behavior by reacting to messages it receives from other agents. The agent can respond to a message it receives by ways of forward chaining, which creates a set of actions; a plan. From the actions described in the message the agent will chain forward to see if by doing so it can (partially) achieve its own goals, which may lead to its cooperation in performing the actions asked for by the other agent.
A combination of backward and forward chaining can thus be used to achieve both proactive and reactive behavior.
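As a rough sketch of the backward-chaining idea, the following assumes capabilities are triples of a name, a list of preconditions and a single effect; all names and the encoding are illustrative assumptions:

```python
# Minimal backward-chaining sketch: starting from a goal, chain back
# through capabilities until every open precondition holds in the
# knowledge base. Returns the resulting plan (a capability sequence).

def backward_chain(goal, knowledge, capabilities, plan=None):
    """Return a list of capability names achieving `goal`, or None."""
    plan = [] if plan is None else plan
    if goal in knowledge:                 # goal already holds: nothing to do
        return plan
    for name, preconditions, effect in capabilities:
        if effect == goal and name not in plan:
            subplan, ok = plan, True
            for pre in preconditions:     # recursively satisfy preconditions
                subplan = backward_chain(pre, knowledge, capabilities, subplan)
                if subplan is None:
                    ok = False
                    break
            if ok:
                return subplan + [name]   # capability runs after its preconditions
    return None

capabilities = [
    ("grind_beans", ["have_beans"], "have_grounds"),
    ("brew",        ["have_grounds", "have_water"], "have_coffee"),
]
knowledge = {"have_beans", "have_water"}
print(backward_chain("have_coffee", knowledge, capabilities))
# ['grind_beans', 'brew']
```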
Task delegation
An important aspect is task delegation. First of all we must consider the attitudes of an agent towards other agents. We can distinguish between two types of attitudes: cooperative and competitive attitudes.
When agents have a cooperative attitude towards each other and have overlapping goals, an agent may decide to receive tasks from another agent, so that the agents work together to reach a common goal. Conversely, if agents have competitive attitudes towards each other, each agent has to decide for itself if it is in its best interest to cooperate with other agents to reach a goal.
The use of declarative goals should make task delegation easier by means of goal delegation. For example agent A has a certain goal: TurnOffLight. If however it is more economical for agent B to achieve this particular goal because it is closer to the light switch, then the goal should be delegated to agent B.
**Agent Mobility**
The agents in CLAIM are structured in a hierarchy with a tree structure: an agent can have several agents as children. When an agent moves to another agent or device, the sub-agents of this agent move with it. The root of each tree is the computational device platform. The primitives used to change the structure of these hierarchies, such as adding and removing agents, are based on the ambient calculus. Using mobility effectively poses another important possibility for increasing the MAS’ overall efficacy.
**BDI vs. CLAIM**
BDI agents have mental states composed of Beliefs (the state of the environment as far as the agent may know), Desires (the states the agent wants to reach) and Intentions (a collection of actions the agent intends to do to fulfill its desires).
CLAIM agents consist of knowledge, goals and (reactive and proactive) capabilities. Knowledge in CLAIM is similar to the beliefs in BDI agents and the goals are similar to the desires. BDI systems do not usually introduce explicit goals; introducing them may overcome some “traditional” limitations of the pure-BDI approach, such as its lack of explicit goals and explicit communication goals.[4]
BDI systems are also not explicitly mobile; introducing mobility can provide additional advantages by adding mobile computing capabilities which increase the efficiency of a multi-agent system.
The BDI approach is currently the most widely used and established approach to multi-agent system programming. CLAIM has been envisioned to tackle the limitations of the pure BDI approach by adding functionalities such as mobility and explicit goals.
3. Concept for goal-driven behavior in S-CLAIM
Our aim is to implement goals and the goal-driven behavior in CLAIM by using the various goal types. By looking at related articles such as Goal Types by van Riemsdijk[5] and Goal representation for BDI Agents by Braubach[4] and comparing these to the functionality of CLAIM[3] we can create a proof of concept which describes how the goal-driven behavior should be implemented.
The goal life-cycles and goal types are based on the papers by Braubach[4] and van Riemsdijk[5]. We have extended these notions by introducing goal priorities and a hierarchical graph structure to represent the goal base.
3.1 Goal life-cycles of different goal types
Generic goal type
A goal can be in one of three distinct states. These states are “New”, “Adopted” and “Finished”.
When a goal is created - goals can either be created by the agent programmer, or come from requestGoal messages - the goal is in the “New” state, in which it stays until an agent chooses to adopt it, whereupon the goal’s state changes to “Adopted”. When an agent adopts a goal it can always choose to drop the goal. When adopted, the goal is added to the goal structure, where it is stored - possibly with other goals.
Within “Adopted” several sub-states exist, these can be defined as “Option”, “Active” and “Suspended”. When the goal enters the “Adopted” state, it enters the “Option” state. This state indicates that the goal is an option for the Agent to pursue when the circumstances allow it.
Only when the agent selects the goal for pursuit does its state change to “Active”. When in this state, the goal will be actively pursued by the agent who adopted it.
When a goal’s context becomes invalid (e.g. the goal is not achievable) the state changes from either “Active” or “Option”, to “Suspended”.
A suspended goal may become valid again, returning it to the “Option” state, after which it may once again be deliberated for further processing.
Whenever a goal meets its drop condition (e.g. the goal has been achieved) the state changes to finished and the life-cycle ends.
Figure 1 from GOAL representation for BDI agents by Braubach[4] illustrates the life cycle of a goal as we envisioned it for CLAIM2.
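The life cycle described above can be sketched as a transition table; the state and event names follow the text, but the exact table is an illustrative assumption:

```python
# Sketch of the generic goal life cycle as a small state machine.
# Each key is a (state, event) pair; the value is the successor state.

TRANSITIONS = {
    ("New",       "adopt"):   "Option",     # adopted goals start as options
    ("Option",    "select"):  "Active",     # agent chooses to pursue the goal
    ("Option",    "invalid"): "Suspended",  # context became invalid
    ("Active",    "invalid"): "Suspended",
    ("Suspended", "valid"):   "Option",     # context valid again
    ("Option",    "drop"):    "Finished",
    ("Active",    "drop"):    "Finished",   # e.g. goal achieved
}

def step(state, event):
    next_state = TRANSITIONS.get((state, event))
    if next_state is None:
        raise ValueError(f"no transition from {state} on {event}")
    return next_state

s = "New"
for e in ["adopt", "select", "invalid", "valid", "select", "drop"]:
    s = step(s, e)
print(s)  # Finished
```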
Achieve goal
The notation AchieveGoal(t, P, f) is read as “target condition t using the set of plans P; failing if f becomes true”. An achieve goal succeeds if and only if the target condition t is met and fails iff the fail condition f is met. The conditions t and f are defined declaratively as states in the environment. Since the environment is bound to change, either of these states could become true while the agent is executing plans to pursue its goals. An achieve goal is only dropped when it either succeeds or fails, independently of its plans; the goal will be pursued indefinitely until one of the two occurs. Figure 2 illustrates the life-cycle of an achieve goal.
**Perform goal**
The outcome of a perform goal, unlike the outcome of an achieve goal depends on the execution of a plan. A perform goal succeeds when a plan corresponding to a goal is executed and fails when no corresponding plan is found to execute. The perform goal has a redo option to perform the goal iteratively. The goal is dropped either if the goal succeeds or fails. The life-cycle of a perform goal is shown in figure 3.
**Maintain Goal**
The purpose of a maintain goal is to maintain a certain state in the environment. Once the maintain goal is adopted the agent will observe the state in the environment as specified in the maintain condition. Once the maintain condition is violated the agent will come into action to re-establish the desired state in the environment. In some cases the plan has to be executed until a certain target has been reached, which can be specified in the optional target condition. This is the case when an agent has to maintain a charged battery. In this case we can specify a maintain condition: ‘charge state > 20%’, and a target condition: ‘charge state = 100%’. In this example the agent will start charging the battery when the charge state drops to 20% or below and will keep charging until the charge state reaches the target condition of 100%. The maintain goal lifecycle is shown in figure 4.
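The battery example can be sketched as follows; the charging simulation and function names are illustrative assumptions:

```python
# Sketch of the maintain-goal behaviour for the battery example: the
# maintain condition "charge > 20" triggers a plan that runs until the
# target condition "charge == 100" holds.

def maintain_battery(charge, recharge_rate=10):
    """If the maintain condition is violated, charge until the target holds."""
    if charge > 20:              # maintain condition still satisfied: no action
        return charge
    while charge < 100:          # target condition: charge state = 100
        charge = min(100, charge + recharge_rate)
    return charge

print(maintain_battery(55))   # 55  (no action needed)
print(maintain_battery(15))   # 100 (recharged until target reached)
```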
3.2 Goal delegation, cooperative and competitive behavior
Depending on the attitude towards other agents, which can be either cooperative or competitive, an agent can decide to delegate tasks to other agents. Task delegation can be simplified with the use of declarative goals because agents can reason about their goals and can simply request other agents to adopt a certain goal or sub-goal whenever beneficial.
Suppose agent A and agent B have a cooperative attitude towards each other and A asks B to adopt the goal TurnOffLights. Whenever it is more economical for B than for A to achieve this particular goal then B should adopt the goal TurnOffLights. For example, B should adopt the goal if it is closer to a light switch than A in which case the global performance of the MAS would improve.
When two agents have a competitive attitude towards each other, an agent might still try to delegate a goal to another agent. In this case the receiving agent has to decide for itself if it is beneficial to adopt the goal, taking into account not the global performance of the MAS but only its own performance.
From this analysis of goal delegation we can conclude that when agents are in a cooperative setting they should accept goals if it is beneficial for the MAS as a whole and that agents in a competitive setting should only accept goals whenever it is beneficial for themselves.
3.3 Goal Priorities
The agent should have a mechanism to decide which goals should be performed first. An easy and intuitive way to build such a mechanism is by assigning a priority level to each top-level goal. The agent will simply choose to perform the goal with the highest priority assigned to it. Assigning priorities to the goals is an easy and intuitive way for a programmer to define in what order top-goals should be performed. In more complex applications, there should be a mechanism that assigns priorities to the goals automatically depending on the environment. This is a point which should be treated in future work as it is not directly related to our research.
When an agent can choose between multiple sub-goals to reach a top-level goal the priorities can be regarded as weights. When a sub-goal fulfills multiple top-level goals, the weights of the top-level goals should be taken into consideration. The sub-goal that fulfills the top-level goals with the highest weight should be chosen.
Sub-goals are also assigned a weight by means of a cost function which can be defined by the programmer. The optimal path is a sequence of goals and sub-goals in the graph that can be computed by subtracting the costs of the sub-goals from the weights of the top-level goals they fulfill. The path that yields the highest weight after subtracting the costs is the optimal one.
3.4 Representation of the mental state
For our purpose we can define the mental state of the agent as composed of three parts:
1. The knowledge base,
2. the goal base,
3. and the intention base.
The knowledge base contains information from the environment, the goal base contains goals that have been adopted by the agent and the intention base contains the goals and plans that are currently being pursued by the agent.
The idea is to match the knowledge and goal base to be able to select which goals and plans should be added to the intention base for further processing.
The goal base is composed of instantiated goals and sub-goals which can be represented in a graph structure composed of interconnected tree structures, with the high-level goals as roots of each tree. Figure 5 is a graphical representation of the goal base.
Suppose the goal base consists of two top-level goals: Goal1 and Goal2. The integer in the nodes of the top-level goals represents the priority level and the integer in the nodes of the sub-goals represents the cost to perform the goal, which is defined by an arbitrary cost function. Because of the higher priority level of Goal2, the agent will choose to pursue Goal2 first. Now that the agent has decided to pursue Goal2 it must choose between Sub-Goal1 and Sub-Goal2. Choosing Sub-Goal2 would lead to a weight of 3 - 1 = 2 and choosing Sub-Goal1 would lead to 3 - 2 + 2 = 3 because Sub-Goal1 fulfills Goal1 and Goal2. Therefore the agent should choose Sub-Goal1 over Sub-Goal2, because this decision would lead to a higher overall utility.
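The calculation in this example can be reproduced with a short sketch. The goal names, priorities and costs follow the example; the Python encoding itself is an illustrative assumption:

```python
# Top-level goals with priorities, sub-goals with costs, and a relation
# stating which top-level goals each sub-goal fulfills.
priorities = {"Goal1": 2, "Goal2": 3}
costs      = {"SubGoal1": 2, "SubGoal2": 1}
fulfills   = {"SubGoal1": ["Goal1", "Goal2"], "SubGoal2": ["Goal2"]}

def weight(sub_goal):
    """Sum of the fulfilled top-level priorities minus the sub-goal's cost."""
    return sum(priorities[g] for g in fulfills[sub_goal]) - costs[sub_goal]

best = max(fulfills, key=weight)
print(weight("SubGoal2"), weight("SubGoal1"), best)  # 2 3 SubGoal1
```

SubGoal2 yields 3 - 1 = 2, while SubGoal1 yields 2 + 3 - 2 = 3 because it fulfills both top-level goals, so SubGoal1 is chosen.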
3.5 Inference of sub-goals and plans
An agent can be in different states. Roughly defined, these states are **Idle, Goal Deliberation, Intent Execution, Message Processing** and **Event Processing**.
Another important part is the **Updating of the goal base**, which creates the tree structure from the goals necessary to assess the **Intent**. This mechanism can be seen more as a subroutine than an actual state, so it is not part of the Petri net, but is executed nonetheless after each event or message.
The global execution of an agent, viewed as a Petri net, is shown in figure 6.
**Figure 6: Global execution mechanism of an agent**
Descriptions of the states can be seen below, as well as a more detailed description of the functionality inside each state. Whenever an agent is idle, it can start generating a plan; when a new message or event is received, the agent postpones this execution and proceeds with updating (if necessary) the goal base and/or environment. Execution is then possibly resumed.
Idle
An agent is idle if it has nothing to do; an idle agent will, however, always listen for incoming messages and events.
3.6 Finite State Machine Generic Goal
Formalizing Braubach’s goal life cycle[4] into UML-like state diagrams will allow us to achieve a better understanding of the goal life cycle, and gives us the capability to formally check for integrity and correctness.
The diagram of the state machine is given in figure 7 and is further explained underneath.
Each goal is created when its creation condition is met. It can then be adopted by an agent by going to the state **Start_Adopt**. When its option criterion is met it becomes an option, and according to the context condition it may eventually become active, where it may be achieved. Furthermore a goal can always be suspended. See the goal life cycles for more information on each state.
3.7 Illustrative Example
Our scenario is based on the blocks world, a scenario commonly used to show the effectiveness and validity of an agent system; it is a world which consists of a table and several blocks. The blocks can be moved according to a simple rule: a block can only be moved when there is no block on top of it and no block on top of the target block, the block it is to be put on. The scenario focuses on a single agent and its plan generation, based on creating a tree from the goal base and finding the path that reaches the highest value of goals in the most efficient way possible. We provide different granularities for the algorithm, where a rough granularity of goals will be less exact but is more rapidly deployed than more precise ones.
We denote the knowledge of the initial state for this scenario as follows
\{ on(A,Table), on(B,A), on(D,C), on(C,Table), clear(Table) \};
The predicate on(X,Y) means block X is placed on top of block Y, and clear(X) means that there is no block on top of block X. The table is always considered clear, so that blocks can always be moved to the table.
The capabilities are denoted as:
Move X from Y to Z:
move(X,Y,Z) ::=
pre: \{clear(X),clear(Z)\}
post: \{on(X,Z), clear(X), clear(Y), ¬clear(Z)\}
Goals are
A1 ::=
type: Achievement
requirements: \{on(C,Table), on(D,C), on(B,D), on(A,B)\}
A2 ::=
type: Achievement
requirements: \{on(A,D)\}
M3 ::=
type: Maintain
maintain-condition: \{on(A,B)\}
target-condition: \{on(A,B)\}
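As a sketch of how such a scenario could be manipulated, the following applies the move capability to a state represented as a set of predicates, following the pre- and post-conditions above; the Python encoding is an illustrative assumption:

```python
# Blocks-world capability move(X, Y, Z) applied to a state represented
# as a set of predicate tuples, following the pre/post-conditions above.

def applicable(state, x, y, z):
    # pre: clear(X), clear(Z) (the table is always clear), and X is on Y
    return (("clear", x) in state
            and (z == "Table" or ("clear", z) in state)
            and ("on", x, y) in state)

def move(state, x, y, z):
    assert applicable(state, x, y, z)
    state = set(state)                 # work on a copy
    state.discard(("on", x, y))
    state.add(("on", x, z))            # post: on(X,Z)
    state.add(("clear", y))            # post: clear(Y)
    if z != "Table":
        state.discard(("clear", z))    # post: not clear(Z)
    return state

initial = {("on", "A", "Table"), ("on", "B", "A"),
           ("on", "D", "C"), ("on", "C", "Table"),
           ("clear", "B"), ("clear", "D")}

# Move B from A onto D: afterwards A is clear and D is not.
after = move(initial, "B", "A", "D")
print(("clear", "A") in after, ("clear", "D") in after)  # True False
```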
A graphical representation of our scenario can be seen in figure 8.
In the graphical representation we can see the three top-level goals: M3, A1 and A2 as defined previously. A1 is the conjunction of the on(C,Table), on(D,C), on(B,D) and on(A,B) predicates, A2 consists of only the on(A,D) predicate and M3 consists of only the on(A,B) predicate. Underneath the graph consisting of the predicates we can see what these goals look like in the environment. To be able to go from the initial state to one of the goal states we need plan generation, which generates the appropriate actions to achieve these goals.
3.8 Challenges to Plan Generation
Additional challenges exist in generating an acceptable plan for achieving goals. There are some specific cases which are worth expanding on. We try to illustrate the difficulty of each of these using a Petri net. In each Petri net the places represent goals, and the transitions are the effects of each action. The core of the problem is that actions needed for goals may have conflicting results, completely or partially negating the other goal.
Competing Needs
When a goal is to be achieved but the actions to achieve its sub-goals conflict - an action of one sub-goal makes the other invalid - a way to resolve the conflict must be found. One action preceding another may resolve this conflict. The Petri net in figure 9 shows this issue. As can be seen, when trying to achieve the top-level goal P4, both P2 and P3 need to be achieved. To do this, actions T0 and T1 need to be executed. Executing T1 before T0, however, results in a deadlock, where P2 can no longer be achieved as there is no longer a token at P0 with which to execute T0. Executing T0 first and then executing T3 will allow tokens to be available to T2, making P4 attainable once more.
Figure 9: Petri net representation of competing needs
Here the places represent goals, and the transitions mean to achieve them (actions). P4 represents the (top-level) goal we wish to achieve. By executing T0 before T1, the sub-goals and thus the top-goal can still be achieved.
**Interference**
When two sub-goals conflict, where the fulfillment of either excludes the other, there is an unresolvable conflict. This can be seen from the Petri net in figure 10.
**Figure 10: Petri net representation of interference**
Executing capability 1 will make execution of capability 2 impossible and vice versa. So only one sub-goal can be achieved, and thus the top-goal will never be reached. Recognizing these difficulties is very important for an agent to be effective in its execution. How to deal with these situations will be explained later on.
Subsumption
We define subsumption as the situation where one capability’s execution also realizes a goal that was first thought to require validation through another capability. This example can be seen in figure 11.
Figure 11: Petri net representation of subsumption
Here execution of T2 realizes P2 as well, making T3 obsolete.
3.9 Plan Generation
Plan Generation is the key aspect to introducing goal-driven behavior in S-CLAIM. It is needed to convert the initial state of the environment to the goal state as defined by the programmer by means of declarative goals. Plan generation should take the initial state and the goal states and generate a plan which consists of actions that have to be executed in the environment to achieve the goal states. We use an approach of defining a *GraphPlan*[8] graph, processing mutual exclusion relations (mutex), converting it to a SAT instance and solving these with an arbitrary SAT solver. A plan can then be extracted from the SAT solution. This method has been described by Kautz et al. in SatPlan: Planning as satisfiability.[6][7]
**Graph creation**
We create our graph according to the GraphPlan[8] methodology. We intend to use the graph creation and mutex computation from it, but not the actual plan creation, as the SAT conversion is faster and more modular (an arbitrary SAT solver can be used).
Graphs used in GraphPlan are directed, leveled graphs. Leveled meaning there are several distinct sets, such that edges only connect between two adjacent levels.
The nodes of these graphs are of two kinds: proposition nodes and action nodes. The proposition nodes represent states of elements of the environment and are either goals, post- or preconditions. Action nodes are the actions of an agent, with directed edges to their pre- and post-conditions.
Edges represent relations between actions and propositions of two adjacent levels. Actions in level $i$ are connected by precondition-edges to their precondition propositions in level $i - 1$. The post-conditions in level $i + 1$ are connected to actions by means of add-edges and delete-edges. A no-op edge defines a “no operation” semantic: where a proposition is already true, it is extended along with other propositions to the next level using this kind of edge.
The levels of nodes alternate between being actions and propositions. Each proposition which is not true in the current agent’s environment has an action added to it with a directed edge between the action and the proposition node. Other post-conditions an action may have are also added to the graph. Each post-condition may be of two kinds, it may be an add-effect or a delete-effect corresponding to a proposition that is added to the environment and one that deletes a proposition from an environment.
**Creation of the graph:**
first_layer = initial state
current_layer = first_layer
level = 0
while level < max_level
    action_layer = actions with preconditions in current_layer
    current_layer.next = action_layer
    action_layer.next = post-conditions of actions in action_layer
    current_layer = action_layer.next
    level++
    break if
        all goals are reachable, or
        the graph levels off
endwhile
plan = actions to achieve these goals.
**Finding the goals and plans**
The goals and plans are found by searching the graph. The goals are said to be found if they are contained in the last layer of propositions. Layers are added for as long as not all goals are in the final layer (or the search is aborted if none can be found).
Note that any goal that is in any other layer than the last layer is also in the last layer, as a NOOP action is simply used in the intermediate layers to propagate the proposition’s existence on to the subsequent layers up to the last layer.
When the goals are all in the last layer, the plan is found as follows:
Each action level is searched for actions supporting the goals and is added to the plan list. This returns the action sequence or plan after having traversed all the action layers.
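A much-simplified sketch of this layered expansion and backward extraction is given below. Mutex computation and delete-effects are deliberately omitted (a “relaxed” planning graph), and the action encoding (name, preconditions, add-effects) is an illustrative assumption:

```python
# Simplified GraphPlan-style layer expansion and backward plan extraction.

def build_layers(initial, actions, goals, max_level=10):
    """Grow proposition layers until all goals appear (or the graph levels off)."""
    layers = [set(initial)]
    for _ in range(max_level):
        props = layers[-1]
        if goals <= props:              # all goals reachable: stop
            return layers
        new = set(props)                # no-op edges carry propositions over
        for _, pre, add in actions:
            if pre <= props:            # action applicable at this level
                new |= add
        if new == props:                # graph has leveled off
            break
        layers.append(new)
    return layers if goals <= layers[-1] else None

def extract_plan(layers, actions, goals):
    """Backtrack from the last layer, collecting supporting actions."""
    plan, needed = [], set(goals)
    for i in range(len(layers) - 1, 0, -1):
        below = layers[i - 1]
        for name, pre, add in actions:
            achieved = (add & needed) - below   # props first made true here
            if achieved and pre <= below:
                plan.append(name)
                needed = (needed - achieved) | pre
    return list(reversed(plan))

actions = [("moveBtoTable", {"clear(B)", "on(B,A)"}, {"on(B,Table)", "clear(A)"}),
           ("moveAontoB", {"clear(A)", "clear(B)"}, {"on(A,B)"})]
initial = {"on(A,Table)", "on(B,A)", "clear(B)"}
goals = {"on(A,B)"}
layers = build_layers(initial, actions, goals)
print(extract_plan(layers, actions, goals))   # ['moveBtoTable', 'moveAontoB']
```

This mirrors the text: propositions are carried forward by no-op edges, layers are added until the goals appear, and the plan is collected by backtracking through the action layers.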
**Limitations**
The current algorithm has some limitations, which constrain its usability for S-CLAIM severely. First and foremost, a plan is only generated when all the goals are met. For the algorithm to be of use to us, we would like it to return a set of sets of goals which can be achieved. These goal sets would be mutually achievable. The plan to reach each of these goal sets would also need to be returned.
Furthermore the algorithm does not account for the priorities of goals and the costs involved in invoking actions.
To control the amount of computation of the GraphPlan instance, a constraint should be placed on the graph creation. This constraint is already available in the current GraphPlan implementation, but it simply fails when not finding the desired goals. Improvements should be made to at least return the goals found and their respective plans back to the agent.
**Additions**
To resolve these matters with the current GraphPlan algorithm we propose to alter it in a fashion that will suit the needs of S-CLAIM.
First of all it is necessary to clearly distinguish the agent from the GraphPlan instance; the two would then only communicate by a clearly defined protocol.
The protocol can be described like this:
The agent sends its goals (or a subset of them) to the GraphPlan instance.
The GraphPlan instance then computes a set of sets of compatible goals and returns them, with their appropriate plans, to the agent.
The agent then considers which of these goals sets are best to achieve given the costs of the capabilities and the priorities of each goal.
Note that the GraphPlan instance has no notion of the costs and priorities involved with the goals and capabilities. Even though this knowledge could improve overall calculation speed, this is done to keep (data) dependencies low.
It also ensures that the GraphPlan component can easily be extended by any sort of SAT converter and solver like we discussed before, as these do not (yet) work with costs and priorities.
Another important note is that the number of goals sent to the GraphPlan instance needs to be constrained in order to keep it computationally tractable. The number of goal sets which can be derived from n goals is 2^n, which grows exponentially. By constraining the maximum number of goals, the overall efficiency of the plan creation can still be held at an acceptable level. One could consider that the agent only sends the m highest-ranking goals (in terms of priority) to the GraphPlan instance.
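Selecting the m highest-ranking goals can be sketched as follows; the goal names and priorities are illustrative:

```python
# Constraining the goals sent to the GraphPlan instance: forward only the
# m highest-priority goals, since the number of goal subsets grows as 2**n.

def select_goals(goals, m):
    """goals: list of (name, priority); returns the m highest-priority names."""
    ranked = sorted(goals, key=lambda g: g[1], reverse=True)
    return [name for name, _ in ranked[:m]]

goals = [("TurnOffLights", 1), ("ChargeBattery", 5), ("Patrol", 3), ("Cleanup", 2)]
print(select_goals(goals, 2))   # ['ChargeBattery', 'Patrol']
print(2 ** len(goals))          # 16 subsets in the worst case
```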
Other mechanisms can be applied as well. Memoization of certain goals and their costs/benefits can be used as a heuristic for determining whether a goal should be included in the set of goals sent to the GraphPlan instance for plan creation.
3.10 Illustrative GraphPlan example
In this example we will illustrate how GraphPlan works by elaborating on the illustrative example used in 3.7.
Suppose we have defined the goals as seen in figure 12.
**Figure 12: An initial state and a maintain goal state**
Suppose we want to generate a plan from the initial state to achieve the A2 goal state. The initial proposition layer will consist of the initial state of the environment:
\[
\text{ON}(A, \text{Table}), \text{ON}(B, A), \text{Clear}(B).
\]
The following proposition always holds:
\[
\text{Clear}(\text{Table}).
\]
The goal state we want to achieve consists of the following propositions:
\[
\text{ON}(A,B).
\]
The possible action is:
\[
\text{MOVE}(X,Y,Z).
\]
MOVE(X,Y,Z) stands for moving block X from block Y to block Z, where X, Y and Z are different blocks and either Y or Z may be the table.
Figure 13: Resulting graph from GraphPlan execution
Figure 13 shows how the graph would be created by a GraphPlan algorithm, with some adjustments for readability; the principle, however, still remains valid, only superfluous nodes and edges were removed. The P’s and A’s at the side indicate a proposition layer and action layer respectively. The node with the red contour is indicated as the goal.
Creating the graph above happens as follows. The top layer, a proposition layer, is initiated as the given initial environment. Actions are generated for the given objects (in this case the blocks). Whenever an action’s preconditions are met by the proposition layer, it is added to the next layer. The next proposition layer is then created from the action layer: the post-conditions of the actions become the next proposition layer. This cycle is continued until the goal condition is found in one of the proposition layers (or alternatively none is found or the maximum number of created levels is reached). Here ON(A,B) is reached in the fifth layer, so graph creation halts. When graph creation is successful, collecting the appropriate actions is necessary for GraphPlan to be able to return a viable plan. This is done by backtracking from the goal to the initial layer. The action required to fulfill ON(A,B) is added to the collection (here MOVE(A,Table,B)). The actions that make the preconditions CLEAR(A), CLEAR(B) and ON(A,Table) true are then added to the collection until the initial environment is reached. The collection is then returned as the action sequence, or plan, which will lead to the desired environment state.
3.11 Goal Representation in S-CLAIM
In order to introduce goal-driven behavior in the S-CLAIM agent programming language an appropriate goal representation and syntax have to be defined. The aims are to choose a goal representation so that the process of implementation is simplified for the programmer and to make the goal-driven behavior compatible with other aspects of a multi-agent system and the planning algorithm.
Overview
The goal representation must account for the different goal types. We can distinguish between the three following types of goals: achievement, maintain and perform goals. Because of the declarative nature of goals they can be represented as a tuple of propositions describing a desired state in the environment which has to be reached or maintained by the goal. In the case of the perform goal, the goal would consist of a plan that should be triggered when adopting the goal.
The goal representation has to be defined from two different perspectives:
1. the programmer’s perspective and
2. the agent perspective.
To define the goal representation from the programmer’s point of view we have to think about a suitable syntax which is compatible with the existing S-CLAIM syntax and its semantics. The agent’s capabilities have to be defined so that it can pursue its goal-driven behavior and perform actions in the environment. The goals have to be defined as a conjunction of valid states in the environment so that the GraphPlan algorithm can generate a plan to pursue the goals.
When the goals and the actions of the agent have been defined and programmed, they have to be parsed so that the agent can store the goals in its knowledge base, so that it can start reasoning about them and pursuing them proactively. The goals should be stored in the knowledge base in such a way that the agent can efficiently use the plan generation algorithm to determine its course of action and act proactively.
In both cases we need to keep in mind that a goal has a priority assigned to it.
Syntax
We have argued that we need a goal representation from the programmer’s perspective for the programmer to be able to introduce goal-driven behavior in an agent. We therefore have to define a syntax which is compatible with the existing S-Claim syntax so that it can be used efficiently by the programmer.
Since we have three different goal types, we must introduce a specific notation for each. The maintain goal consists of two states: the maintain condition and the target condition.
This means that the **maintain goal** takes a name parameter, a priority parameter and two state parameters, and can be defined as follows: `m-goal(String goal_name, Integer priority, Conjunction maintain_state, Conjunction target_state)`, where a Conjunction is a sequence of environment states joined by conjunctions. The `goal_name` parameter defines the name of the goal, `priority` the priority of the goal, `maintain_state` the state that has to be maintained by the agent, and `target_state` the state that has to be reached by the plans that reinstate the `maintain_state`.
The **achieve goal** takes a name parameter, two state parameters and a priority parameter which can be defined as follows: `a-goal(String goal_name, Integer Priority, Conjunction target_condition, Conjunction fail_condition)`. This representation is similar to that of the maintain goal, the only difference is the `fail_condition` which describes the state of the environment in which the goal should be dropped.
The **perform goal** is a particular kind of goal because it does not describe a state in the environment. The only thing a perform goal needs to do is trigger a plan. We can define a perform goal as follows: `p-goal(String goal_name, Action action)`, where `Action action` stands for an action to be performed in the environment.
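To make the syntax concrete, the declarations below show one goal of each type for a blocks world agent. The goal names and the `waveArm()` action are invented for illustration; the parameter order follows the definitions above:

```
a-goal(stackTower, 3, ON(b2, b1) & ON(b3, b2), ON(b1, b3))
m-goal(keepB3Down, 1, ON(b3, Table), ON(b3, Table))
p-goal(greet, waveArm())
```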
**Goal base**
The other aspect of the goal representation that has to be defined is the aspect from within the agent. Once the goals have been parsed, they are added to the knowledge base of the agent. To distinguish goals from other knowledge we define a **goal base**, which is in essence equivalent to the knowledge base; the only difference is that the goal base consists exclusively of goals. This separation makes it easier for the agent to reason about its goals and to choose what kind of computations it may perform on them. The goal base consists of a list of adopted goals, which may be in the option, active or suspended state.
It should be possible for the agent and the programmer to query the goal base by using the goal name and / or goal type to request certain information about the goal, for example its state. The agent will make use of the goal base to request plans from the plan generator.
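As an illustration, a goal base separated from the knowledge base could look like the following sketch. The class and method names are our invention for this example, not the project's actual API:

```java
import java.util.*;

// Hypothetical goal base: holds adopted goals only, separate from the
// knowledge base, and supports queries by name and by state.
public class GoalBaseSketch {
    enum GoalState { OPTION, ACTIVE, SUSPENDED }
    record Goal(String name, String type, int priority, GoalState state) {}

    private final List<Goal> goals = new ArrayList<>();

    void adopt(Goal g) { goals.add(g); }

    // Query a goal's state by its name, if it has been adopted.
    Optional<GoalState> stateOf(String name) {
        return goals.stream()
                    .filter(g -> g.name().equals(name))
                    .map(Goal::state)
                    .findFirst();
    }

    // The active goal with the highest priority, e.g. to request a plan for.
    Optional<Goal> highestPriorityActive() {
        return goals.stream()
                    .filter(g -> g.state() == GoalState.ACTIVE)
                    .max(Comparator.comparingInt(Goal::priority));
    }
}
```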
4 Software design & Implementation
4.1 Software Design
In this part of the report we will explain the design of the implementation part of the project by giving an overview of the packages and classes.
The five following packages have been used for the implementation as shown in figure 14:
1. The Claim package,
2. the JADE package,
3. the Parser package,
4. the Planning package and
5. the GraphPlan package.
The Claim package contains all the core classes used by CLAIM such as the CLAIMAgent and the Parser package.
The JADE package contains all classes involved in the JADE platform, which is used by all CLAIM agents to provide their full functionality (such as migration, communication, etc.).
The Parser package contains all classes necessary for parsing, lexical analysis and processing of files. It is used to create agents from user-defined text. We have modified this part so that it can now parse goals.
The Planning package contains the implementation that has been added by us from scratch. This means that it contains the goal-driven behavior functionality of the agent, including the environment and GraphPlan.
The GraphPlan package contains the computation part of the planning. This package contains all the functionality to convert a goal to an action sequence of applicable actions.
Figure 14 gives an overview of the implementation by depicting the most important packages and classes with their relevant attributes and methods. The Planning package contains the functionality that we have implemented to obtain goal-driven behavior; the Parser package has been modified to be able to parse the user-defined goals.
### 4.2 Implementation
We have constructed a proof of concept which demonstrates goal-driven behavior in the CLAIM platform. The details of the implementation are explained in this section. We have extended the parser and lexical analyzer of the existing S-CLAIM language to support declarative goals. We have created Java classes to implement the goals, the goal agent and the blocks environment. The planning algorithm is a modified version of the GraphPlan algorithm.
**Parser**
The parser and lexical analyzer used in the CLAIM project is BYACC/J [9]. This parser was chosen because it is well known, well documented and compatible with Java.
The parser was edited so it could parse the keywords necessary for creating a GoalAgent. Next to the usual reactive behavior, another behavior was added to the behavior types, called proactive. Proactive behavior is used to define the goal-driven behavior of the Claim agent.
Then the different goals were added along with their possible arguments and goal-specific keywords such as maintain, achieve etc.
The arguments of goals were added to the parser as a new kind of argument_list, called proposition_list. This was done to support arguments (or propositions) which contain parentheses and commas, which were not allowed in the argument_list. Moreover, the argument_list consisted of arguments which could also be functions and other elements we did not want to allow in goals.
Lexer
New tokens had to be added for the parser to understand the goal keywords. The keywords mGoal, pGoal and aGoal were added as tokens. This way the parser knows where and what kind of goal it has to parse.
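A yacc-style sketch of what these additions could look like is shown below. The token and nonterminal names are hypothetical and do not reproduce the actual BYACC/J grammar of S-CLAIM:

```
%token MGOAL AGOAL PGOAL    /* the mGoal, aGoal and pGoal keywords from the lexer */

goal : MGOAL '(' NAME ',' NUMBER ',' proposition_list ',' proposition_list ')'
     | AGOAL '(' NAME ',' NUMBER ',' proposition_list ',' proposition_list ')'
     | PGOAL '(' NAME ',' action ')'
     ;

/* propositions may contain parentheses and commas, unlike argument_list */
proposition_list : proposition
                 | proposition_list '&' proposition
                 ;
```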
GoalAgent
The GoalAgent class contains all the mechanisms that implement the goal-driven behavior of the agent. This class extends the basic ClaimAgent class to add goal-driven behavior. Just like a claim agent, a goal agent is created by using the Jade platform. When a .adf2 file is parsed containing proactive-behavior, a goal agent is created. This happens in boot.java with the following line of code:
```
allAgents.put(agentName, new AgentCreationData(agentName, GoalAgent.class.getCanonicalName(), new Object[] { cad, agentType, parameters, knowledgeList, null, environmentAgent}, containerName, !doCreateContainer));
```
In the setup phase, defined by the `setup()` method of the goal agent, the agent is registered in the environment and receives the initial state of the environment, which is saved in a list called latestEnvironmentState; it also constructs its goal base, the set of goals as defined in the .adf2 file. At the end of the setup, the `initGoalStates()` method is called. This method goes through all the goals in the goal base, searches for the perform or achieve goal with the highest priority and sets its goal state to ACTIVE. It also sets all the maintain goals to the ACTIVE goal state. This way the agent knows which goals are active and can be used to compute a plan to be performed at a later stage.
At the end of the setup phase we define the cyclical behavior of the agent. The cyclical behavior consists of two methods which the agent is going to run indefinitely: the `updateGoalStates()` and `execute()` methods.
The `updateGoalStates()` method goes through all the active goals in the goal base and checks how the state defined by each goal corresponds to the environment. For a maintain goal, the MaintainGoalState is set to IDLE if the target state of the maintain goal corresponds to the environment, and to IN_PROCESS if it does not. For an achieve goal, we check if the goal state corresponds to the environment state; if it does, the goal has been achieved and its goal state is set to SUCCESS. We then remove all goals with goal state SUCCESS or FAIL, and call the `initGoalStates()` method again to make sure that we again have at least one active achieve goal with the currently highest priority.
The `execute()` method is used to select the active goal with the highest priority, to generate a plan for the selected goal and to execute it in the environment. In the case of an achieve goal, a plan is generated by invoking the `computePlan(goal)` method, where goal stands for the goal for which the plan is generated, and the actions of the plan are executed sequentially in the environment by invoking `environmentAgent.applyAction(action);`. In the case of a maintain goal, a plan is only computed and executed if the goal is in the IN_PROCESS state; otherwise no plan is computed or executed. Figure 15 shows a cycle in the execution of the GoalAgent as described above.
Figure 15: Sequence diagram showing a cycle in the GoalAgent execution
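The state updates performed by `updateGoalStates()` reduce to two set comparisons against the latest environment state. The sketch below illustrates this logic in isolation; the names are ours, and the real method operates on the agent's goal base rather than on bare sets:

```java
import java.util.*;

// Isolated sketch of the updateGoalStates() logic: goal conditions and the
// environment are both sets of ground predicates, so "corresponds to the
// environment" is a subset test.
public class UpdateSketch {
    enum State { ACTIVE, SUCCESS, IDLE, IN_PROCESS }

    // An achieve goal succeeds once its target condition holds.
    static State updateAchieve(Set<String> env, Set<String> target) {
        return env.containsAll(target) ? State.SUCCESS : State.ACTIVE;
    }

    // A maintain goal is IDLE while its target state holds, and IN_PROCESS
    // otherwise, signalling execute() to compute a restoring plan.
    static State updateMaintain(Set<String> env, Set<String> target) {
        return env.containsAll(target) ? State.IDLE : State.IN_PROCESS;
    }
}
```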
EnvironmentAgent
The EnvironmentAgent class simulates a blocks world environment, a commonly used example in the Artificial Intelligence field. The environment consists of a table and a number of blocks which can be placed on each other; the goal is to reach a certain configuration of blocks. The environment contains a list of strings which represents its state. For example, with three blocks b1, b2 and b3, a state could be ON(b1, Table), ON(b2, Table), ON(b3, Table), Clear(b1), Clear(b2), Clear(b3), Clear(Table), in which all these predicates are true. We follow a closed-world assumption: when a certain predicate does not exist in the environment, we consider it to be false.
When initialized, the environment reads its possible actions and the initial state from two files. For this we have used the implementation of JPlan, which defines a blocks world environment. Whenever an agent wants to 'enter' the environment, it has to register itself by calling the registerGoalAgent(GoalAgent agent) method. To apply an action in the environment, the agent has to call the applyAction(Action action) method. This method checks if the preconditions of the action are true; if so, it adds all the add effects of the action and removes all the delete effects from the environment state. After the action has been applied, the environment agent sends the updated environment to all the agents registered with this environment.
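The precondition check and effect application in applyAction follow standard STRIPS semantics. A minimal stand-alone sketch, assuming (as in the environment above) that states are sets of predicate strings:

```java
import java.util.*;

// STRIPS-style action application under the closed-world assumption:
// predicates absent from env are considered false. Returns false (and
// leaves env untouched) when the preconditions do not hold.
public class ApplyActionSketch {
    static boolean applyAction(Set<String> env, Set<String> pre,
                               Set<String> add, Set<String> del) {
        if (!env.containsAll(pre)) return false; // precondition check failed
        env.removeAll(del);                      // delete effects first...
        env.addAll(add);                         // ...then add effects
        return true;                             // caller would now broadcast env
    }
}
```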
Goals
We have defined three kinds of goals: achieve-goals, maintain-goals and perform-goals. The achieve-goal is implemented in the ClaimaGoal class, the maintain-goal in the ClaimmGoal class and the perform-goal in the ClaimpGoal class; all these classes extend the ClaimGoal class. After the parsing of the .adf2 file, the corresponding objects are created and added to the goal base of the agent. The behavior mechanisms of the agent are controlled by the goal states, as defined in the enum GoalState, of the goals in the goal base. The superclass ClaimGoal has the following goal states: FAIL, SUCCESS, ACTIVE, OPTION and SUSPENDED. The maintain-goal has more specific states, defined in the enum MaintainGoalState: IDLE and IN_PROCESS.
Illustrative Example
Suppose we want to create an agent with goal-driven behavior in the blocks world environment. In this illustrative example we will show how this can be done using the S-Claim platform.
First we must define a scenario. This scenario is an XML file which defines the agents and their parameters. To define an agent as a goal-driven agent, we have added the possibility of adding a new parameter called "goaldriven". When this is added to an agent's parameters, the agent is initialized as a goal-driven agent. These parameters are defined by the <scen:parameters> tag; the goal-driven parameter is defined as <pr:param name="goaldriven" value="true"/>. Figure 16 shows what the XML scenario looks like.
Now we must define the behavior of the agent in the corresponding .adf2 file. First we define the agent called BlocksAgent and then define its proactive behavior. Figure 17 shows what an .adf2 file looks like.
In this example we have an agent with two goals. One is an achievement goal with achievement state ON(b2, b1) and ON(b3,b2) with priority 3 and the second is a maintain goal with maintain state ON(b3,Table) with priority 1. In the initial state all three blocks are on the table.
When we run this code the screens as shown in figure 18 appear.
Figure 18: The agent simulator and inspection tools from JADE
Figure 18 shows the agent simulator and other inspection tools from JADE. We now press the 'Start' button to run our agent and see what happens.
```
GoalAgent computing plan:
ON(b1, Table) & ON(b2, Table) & ON(b3, Table) & Clear(b1) & Clear(b2) & Clear(b3)
ON(b1, b2) & Clear(Table) & ON(b1, Table) & Clear(b2) & ON(b3, b2) & Clear(b1) & ON(b2, b3) & ON(b3, b1) & ON(b1, b3) & ON(b3, Table) & ON(b3, b2)
Graph Created Successfully
Generated plan: [Move(b2, Table, b1), Move(b3, Table, b2)]
Performing action: Move(b2, Table, b1)
Environment: [ON(b1, Table), ON(b3, Table), Clear(b2), Clear(b3), ON(b2, b1), Clear(Table)]
Performing action: Move(b3, Table, b2)
Environment: [ON(b1, Table), Clear(b3), ON(b2, b1), Clear(Table), ON(b3, b2)]
GoalAgent computing plan:
ON(b1, Table) & Clear(b3) & ON(b2, b1) & Clear(Table) & ON(b3, b2)
Graph Created Successfully
Generated plan: [MoveToTable(b3, b2)]
Performing action: MoveToTable(b3, b2)
Environment: [ON(b1, Table), Clear(b3), ON(b2, b1), Clear(Table), ON(b3, Table), Clear(b2)]
```
Figure 19: Output when running an agent
Figure 19 shows the output generated by our code. Because of its higher priority, the achieve goal is executed first. The generated plan is Move(b2, Table, b1) and Move(b3, Table, b2). After performing these actions sequentially in the environment, the goal has been achieved. After an achieve goal has been achieved it is removed from the agent's goal base, because it has been tagged as successful. The only remaining goal is the maintain goal. The agent generates the plan MoveToTable(b3, b2) and executes it. The maintain goal is never removed from the goal base, and a new plan is calculated each time its maintain condition is violated.
4.3 Testing
Testing is an important part of software development. Considering the nature of our project, however, testing had only limited priority. The implementation was, as stated earlier, only intended as a proof of concept and therefore did not need to be fully tested. It should, however, show that our proof of concept is valid and thus that our research and ideas are workable. We used manual testing, trying combinations of goal types with different priorities. Practically all combinations of goal types were used. We also combined different priorities, looking for the critical areas (all the same, one slightly higher or lower, much higher or lower, etc.). By matching the results of these combinations with the expected results (which were easy to check), we confirmed that our implementation worked as expected and thus that our research and ideas were indeed valid.
Further testing by LIP6 should certainly be done, especially when the goal agents are extended with more complicated functionality. It would be interesting to see what happens when, for example, an agent migrates to another computational device. How would this affect the efficacy of the agent?
5. Conclusion & Future Work
In this report we have described the most important aspects of the theory underlying multi-agent systems, and specifically the goal-driven behavior involved in these systems. By creating a theoretical framework of current work and proposed solutions, we have attempted to propose a well-researched, viable solution to the question of how to introduce goal-driven behavior in the S-CLAIM language, namely by using both the declarative and procedural aspects of goals: the declarative part is defined in the goal, and the procedural part is computed by a planning algorithm.
We have proposed using the goal types of Braubach, because they are intuitive, compact and allow for a wide range of functionality. We have used the STRIPS language as a way to easily declare the goals, and a well-understood and seasoned algorithm such as GraphPlan to accomplish the plan generation. We have proposed that GraphPlan can be extended even further to allow for faster computation, by converting the GraphPlan instance to a SAT problem. If modularity is properly maintained, it is possible to use arbitrary SAT solvers to compute a plan efficiently for any agent.
In the future the implementation should be finished in more detail; our aim was only to build a proof of concept. The planning is currently based on the GraphPlan algorithm and works, but it was not possible to implement the SAT conversion due to time constraints. Proper research should be done into how to do this most efficiently. There are algorithms describing the conversion but, as far as we know, none of these take into account the costs and priorities of actions and goals. Modifying the SAT encoding to include these parameters would increase its usefulness.
The environment agent we have used was in fact not a true JADE-powered agent; it was simply a mock-up, or placeholder, for a future agent. Designing a true EnvironmentAgent driven by JADE would increase its usefulness and give greater insight into its efficacy in a more complex environment.
In the future more testing should be done using goal-driven agents with more complex goals and goal combinations. Our testing was limited to simple goals. By using more complex goals and goal combinations a better simulation of real world circumstances can be recreated. That way it would be possible to better assess the performance of the platform.
6. Bibliography
[3] Alexandru Suna. CLAIM & SyMPA.
[5] Birna van Riemsdijk. Goal Types in Agent Programming.
[7] Henry Kautz. Planning as Satisfiability.
[8] Avrim Blum. GraphPlan.
Appendix A: Project Plan
1. Introduction
The project plan gives an overview of the activities that have been formulated to address and solve the problems associated with our bachelor project, which consists of introducing goal-driven behavior in the S-CLAIM programming language. This chapter is an introduction to the project plan, describing the origins of the project. Chapter 2 describes the project assignment in detail, chapter 3 our approach, chapter 4 the organization of the project, chapter 5 the planning, and chapter 6 how we plan to ensure the quality of the project.
Origin
S-CLAIM (SMART- Computational Language for Autonomous, Intelligent and Mobile Agents) is a programming language that has been designed to program mobile multi-agent systems by the computer science laboratory, LIP6 of the UPMC University of Paris. One of the main goals of the language is to enable people without any programming experience to implement the behavior of the multi-agent system. To achieve this goal the language must contain high level constructs which are easy to understand for non-programmers. For this reason it was decided to introduce goal-driven behavior in the S-CLAIM programming language. The formal project assignment can be found in the next chapter.
2. Project assignment
This chapter will describe the assignment that we have been given by LIP6.
Project environment
The project environment consists of the research department LIP6 at the UPMC university in Paris. This is the sole organization immediately involved in our project on S-CLAIM. Further explanation of S-CLAIM and its constituents can be found in our report. A professor was involved in our research, as were a few PhD students, with one PhD student giving us direct supervision on a day-to-day basis.
Project goal
The goal of the project was to research how the S-CLAIM project could be further enhanced, more specifically; research was needed towards goal driven behavior. It is important to note that effective research was much more important than actual implementation. The research should be documented for further use.
Assignment Description
The assignment is to research and implement some goal-driven aspects into the existing S-CLAIM project. The implementation should allow users to implement goal-driven behaviors using a given set of language constructs. The research should give a state-of-the-art description of goal-driven behavior and an approach to implement it. Some implementation should be done to confirm the validity of the research (see appendix A).
**Product- and service deliverables**
Research was the main part of this project. A report should be delivered containing the findings of our research as a basis for further advancement of the S-CLAIM system. Implementation had to encompass a simple yet viable solution to the given problem of producing goal-driven behavior. Functionality should include the ability to define the goal driven behaviors of agents, plan generation and execution of action to modify the environment.
**Requirements and limitations**
Most importantly, substantial research should be done into the field. Understanding the current state of affairs is paramount. Furthermore, possible solutions should be researched and clearly defined. As the report resulting from the research would serve as a guide for future work, it had to be self-sufficient, that is, more or less sufficient for a future researcher to continue our work. The only demand for the implementation was that it would prove the efficacy of our research. It is important to note that the research itself would still be useful without implementation, as it would still provide insight into the domain. Therefore the research had a much higher priority than the actual implementation. There were few limitations: apart from the limited amount of time we would be able to consult with others in the field, no further financial or material resources were required for us to do our job.
**Crucial success criteria**
The project could be deemed a success if we produced a satisfactory report on the research done. As long as the project supervisors were pleased with what was produced, the project could be deemed a success. The implementation also had to meet certain quality criteria; these were never explicitly stated by the supervisors involved, but were nevertheless met.
3. **Project Approach**
This chapter answers the question of how we will address and solve the problems posed in the preceding chapter of this project plan. The aim of this chapter is to bridge the gap between the project assignment and the desired results defined in chapter 2.
Approach
The project consists of two main activities.
1. Firstly doing research on how to define the goal structures in S-CLAIM, how to use these structures to define the internal mechanisms of the agent, and how to generate plans considering the agent mechanisms and the goal structures.
2. Secondly implementing a proof of concept of the ideas that have been developed during the research on top of the existing S-CLAIM platform.
The focus of this project lies with the research; there are two reasons for this approach.
1. The project assignment was envisioned as a research project by the LIP6 laboratory. The focus of the LIP6 laboratory lies in research, not in software engineering; therefore it is only natural that a project at this institute mostly consists of research.
2. LIP6 has granted us a high degree of freedom to come up with a solution to introduce goal-driven behavior in S-CLAIM. This degree of freedom comes with the responsibility of doing more research than usual to make sure the ideas are well envisioned and have a greater possibility of success.
The approach consists of roughly 2 months of research to assess the current state of the art in the fields of multi-agent systems, goal-driven behavior and plan generation. The focus will lie on how the latest developments in these fields can be used to successfully introduce goal-driven behavior in S-CLAIM.
After the research 1 month will be dedicated to introducing the researched notions into the existing S-CLAIM platform. To be able to introduce the appropriate constructs the lexical analysis and parser of S-CLAIM have to be modified to be able to handle goal constructs that are defined in the source-code of the agent. Consequently the existing S-CLAIM agent has to be extended to implement the envisioned internal mechanisms to handle the goal constructs that have been defined in the agent’s source code. Finally when the goal handling mechanisms of the agent have been implemented a plan generation algorithm has to be implemented so that the agent can generate plans to achieve the specified goals.
Risk Factors
The freedom that has been granted to us in solving the problems described above obviously comes with a risk. As we are relatively new to multi-agent systems, there is a risk that our approach will fail because we do not have enough experience in this field. To minimize this risk of failure we will work closely with our supervisor Cédric Herpson, who has a great deal of experience in these fields. We will also gain more knowledge on multi-agent systems by reading about the basics; An Introduction to MultiAgent Systems by Michael Wooldridge is a good example of a book to freshen up our knowledge of multi-agent systems.
Quality Requirements
The quality of our research will be continuously assessed by our supervisor, as it is very difficult to define the quality of the research beforehand. The requirement for the quality of the implementation is that the ideas developed during the research can be demonstrated in a simple blocks world scenario.
4. Project Organization
The goal of this chapter is to give a transparent overview of the project organization.
Organization
No official roles or responsibilities were assigned to either of us specifically; they were shared between the both of us.
Staff
Even though many people are involved and actively working on the S-CLAIM project, only a few were actively involved in our work. These were Prof. Seghrouchni, the head of the department and the lead researcher, who took us in for the project and also held final responsibility for it, and Cédric Herpson, a PhD student working under Prof. Seghrouchni, who was our direct supervisor. Due to her many responsibilities, Prof. Seghrouchni left the day-to-day supervision of our project up to him.
Administrative procedures
The administration consisted of filling in forms and supplying documents to get internet and network access at the university compound, a key to our office at the LIP6 department, and verification of our enrollment at TU Delft and as students in general, along with the agreement of Prof. Seghrouchni, in order to receive our monthly financial compensation.
Financial organization
Except for our monthly financial compensation no additional costs were made.
Requirements for contractor
It was agreed that the provider would make sure that we were financially compensated for our stay in Paris.
Requirements for principal
We would do our research at LIP6. No specific requirements about office hours were made, but of course the project had to be finished satisfactorily.
5. **Planning**
This chapter will describe the planning of the project.
**Assumptions**
The assumptions made regarding the planning are that background study on multi-agent systems is done in our spare time. The research on goal-driven behavior and planning is assumed to fit within the normal 40-hour workweek. The duration of the project is assumed to be 3 months of 40-hour workweeks.
**Activities**
As mentioned before, the project consists of two main parts: a research part and an implementation part. The research part will consist of 2 months of research on the following points.
1. Defining appropriate goal structures in S-CLAIM to be able to introduce goal-driven behavior.
2. Design mechanisms internal to the agent to handle the goal structures.
3. Define the plan generation so that the agent can generate a plan to achieve its goals.
While researching these points we will also determine the current state of the art in the fields they encompass. Two months will be needed to carry out the research part.
After the research part one month will be used for the implementation. The implementation will consist of the following points:
1. Modify the lexical analyzer and parser currently implemented in S-CLAIM so that they can handle declarative goals.
2. Extend the current S-CLAIM agent by implementing an internal mechanism that handles the goal structures.
3. Implement plan generation so that a plan can be generated to achieve the specified goals.
These activities will need 3 months of time which is the specified time of a bachelor project.
6. **Project Quality**
The delivered product should consist of a research report which shows an understanding of the domain and provides a guide for future research. This has the highest priority and is acceptable when the supervisors qualify it as extensive enough, knowledgeable and understandable.
---
**Process Quality**
Requirements to be met:
- expertise in domain
- communication (the results found need to be communicated accurately)
---
**Proposed measures**
Measures used to maintain quality were based mostly on verbal and non-verbal feedback. Presentations were given to the supervisors on a monthly basis as well as to colleagues in the field. Afterwards a review was presented to us after deliberation.
Appendix B: Research Proposal
Amal El Fallah Seghrouchni
Professor at the University Pierre & Marie Curie
Head of SMA team / Delegation at the CNRS
4, Place Jussieu - 75252 Paris Cedex 05
Amal.Elfallah@lip6.fr
Paris, 8th of July 2011
Subject: Research Proposal for Gerard Simons & Alex Garella, 1st September 2011 – 1st December 2011
Title: Goal-Driven Behavior for CLAIM agents
Advisor: Prof. Amal EL FALLAH SEGHROUCHNI
Team Project: Cédric Herpson, Andrei Olaru, Nga Thi Thuy and Marius Tudor Benea
Description
The CLAIM programming language (Computational Language for Autonomous, Intelligent and Mobile agents) has been developed at the LIP6 laboratory [Suna and El Fallah Seghrouchni, 2004], with the purpose of offering an easy way of programming cognitive mobile agents, using a simple language, without the need for the programmer to know any more advanced programming language, like Java or C. Once the code is written in CLAIM, it is executed by the SyMPA platform, written in Java. CLAIM allows for easy implementation of mobile agents that have reactive or proactive behavior. The functioning of CLAIM agents is also inspired by the mobile ambients of Luca Cardelli [Cardelli and Gordon, 2000].
After successfully demonstrating the usefulness of CLAIM in Ambient Intelligence applications [El Fallah Seghrouchni et al 2010], a team at LIP6 (Andrei Olaru, Thi Thuy Nga Nguyen and Marius-Tudor Benea) started in 2011 the development of a new platform for the execution of CLAIM2, a simpler variant of CLAIM. This platform will have improved organization and capabilities, and will be based on the Jade agent development framework. It will also feature a component for CLAIM agents executing on mobile devices (Android smartphones), which is currently being developed by Marius-Tudor Benea.
In this context, this stage will focus on the introduction of goal-driven behavior in CLAIM2. Goal-driven agents are able to act in an autonomous manner, by reasoning about their goals and taking appropriate action [Braubach et al, 2005]. More precisely, the stage requires the implementation of the language constructs that allow the programmer to specify goals for the agents, and of new types of agent behavior by means of which the agent will try to fulfill its goals. These elements will be integrated in the CLAIM2 platform developed at LIP6.
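To make the idea of goal-driven behavior concrete, the loop below sketches an agent that keeps a list of pending goals and runs an associated behavior until each goal is fulfilled. This is a hypothetical Python illustration only: the `Agent`, `add_goal`, `on_goal` and `step` names are invented and are not the CLAIM2 language or the LIP6 platform API.

```python
# Hypothetical sketch (not the CLAIM2 API): a minimal goal-driven agent loop.
# The agent keeps pending goals and a map from each goal to a behavior (plan);
# each reasoning cycle it tries to fulfill the first pending goal.

class Agent:
    def __init__(self):
        self.goals = []          # pending goals, oldest first
        self.behaviors = {}      # goal name -> callable that tries to achieve it
        self.beliefs = {}        # the agent's world model

    def add_goal(self, name):
        self.goals.append(name)

    def on_goal(self, name, behavior):
        self.behaviors[name] = behavior

    def step(self):
        """Run one reasoning cycle; return False when no goals remain."""
        if not self.goals:
            return False
        goal = self.goals[0]
        achieved = self.behaviors[goal](self.beliefs)
        if achieved:
            self.goals.pop(0)    # goal fulfilled, drop it
        return True

# Usage: an agent whose goal is to reach position 3.
agent = Agent()
agent.beliefs["position"] = 0
agent.add_goal("reach_position_3")

def move_toward_3(beliefs):
    beliefs["position"] += 1
    return beliefs["position"] == 3

agent.on_goal("reach_position_3", move_toward_3)
while agent.step():
    pass
```

A real implementation would interleave such cycles with message handling and mobility, but the separation between declared goals and the behaviors that pursue them is the essence of the proposed constructs.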
References
<table>
<thead>
<tr>
<th><strong>Title</strong></th>
<th>An Analysis of Model-Driven Web Engineering Methodologies</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Author(s)</strong></td>
<td>Lang, Michael</td>
</tr>
<tr>
<td><strong>Publication Date</strong></td>
<td>2012</td>
</tr>
<tr>
<td><strong>Link to publisher's version</strong></td>
<td><a href="http://www.ijicic.org/ijicic-11-11012.pdf">http://www.ijicic.org/ijicic-11-11012.pdf</a></td>
</tr>
<tr>
<td><strong>Item record</strong></td>
<td><a href="http://hdl.handle.net/10379/3414">http://hdl.handle.net/10379/3414</a></td>
</tr>
</tbody>
</table>
Some rights reserved. For more information, please see the item record link above.
An Analysis of Model-Driven Web Engineering Methodologies
G. Aragón¹, M.J. Escalona¹, M. Lang², J.R. Hilera³
¹IWT2 Group. University of Sevilla. Spain
gustavo.aragon@iwt2.org; mjescalona@us.es
²NUI Galway. Galway, Ireland
michael.lang@nuigalway.ie
³University of Alcalá. Spain
jose.hilera@uah.es
Received November 2011; revised January 2012; accepted March 2012
ABSTRACT. In the late 1990’s there was substantial activity within the “Web engineering” research community and a multitude of new Web approaches were proposed. However, numerous studies have revealed major gaps in these approaches, including coverage and interoperability. In order to address these gaps, the Model-Driven Engineering (MDE) paradigm offers a new approach which has been demonstrated to achieve good results within applied research environments. This paper presents an analysis of a selection of Web development methodologies that are using the MDE paradigm in their development process and assesses whether MDE can provide an effective solution to address the aforementioned problems. This paper presents a critical review of previous studies of classical Web methodologies and makes a case for the potential of the MDWE paradigm as a means of addressing long-standing problems of Web development, for both research and enterprise. A selection of the main MDWE development approaches are analyzed and compared in accordance with criteria derived from the literature. The paper concludes that this new trend opens an interesting new way to develop Web systems within practical projects and argues that some classical gaps can be improved with MDWE.
Keywords: Model-Driven Web Engineering, Web Engineering, Web Methodologies.
1. Introduction. In the early 1990’s the research community started to work in a new area of software engineering oriented towards the special characteristics of the Web environment. Several approaches, such as HDM (Hypermedia Design Model) [1] and OOHDM (Object-Oriented Hypermedia Design Method) [2], offered new techniques, models and notations which initially dealt with “hypermedia” systems in general and later focused on Web-based systems in particular. The evolution of this line of research, Web engineering [3], is analyzed in several comparative studies and surveys [4-7]. Despite some
doubts being expressed about the necessity for specialized Web development methods as opposed to “traditional” or “conventional” methods [8], Web engineering has since become an established branch of software engineering. However, these aforementioned studies and surveys also indicate a number of major gaps in the Web engineering body of knowledge, as later summarized in section 2.2. The motivation for this paper is therefore to analyze how the emerging paradigm of Model-Driven Web Engineering (MDWE) is being applied in order to address some of these gaps. The objectives of this paper are:
1. To review the literature on the emerging MDWE paradigm and discuss how it may potentially address long-standing problems in Web engineering.
2. To explore how the MDWE paradigm is being applied in existing and emerging Web development approaches.
Our research approach was guided in the first instance by the general principles laid down in Brereton et al. [9] and Kitchenham et al. [10-11] for conducting systematic literature reviews, and the discursive aspect of our work was further informed by surveying the opinions of a number of international experts in the area of MDWE.
Thus, our paper offers a vision completely focused on MDWE and its application, which is a significant and novel contribution.
This study is structured as follows. We start in section 2 by presenting an overview of a selection of the best-known Web development methods and discussing how Model-Driven Engineering (MDE) can potentially serve both to rationalize and integrate these methods and to address several important gaps detected in Web engineering. In Section 3, we look specifically at a number of new and emerging Web development methodologies that are based on the MDWE paradigm. Section 4 then presents an analysis of these new MDWE methodologies, broken down into five different aspects: metamodel complexity, concepts, transformations, standards and compatibility, and tools and industry experiences.
The paper then finishes with a brief overview of other related work in section 5, and concludes by stating our views on current issues and problems in the field of MDWE, as well as outlining possible directions for future work.
2. Background to the Current State-of-Practice.
2.1 Web Engineering: An Overview of Development Methods. Over the past decade, several methods, approaches and techniques have been proposed in the academic and professional literature in order to deal with special aspects of Web development. Navigation, complex interfaces, difficult maintenance, security aspects and unknown remote users are amongst the critical challenges relevant to Web-based system development [12]. In an appendix to their study of the use of Web development methods in practice, Lang & Fitzgerald [13] present a comprehensive list of over fifty methods and approaches for Web/hypermedia systems development. A description and comparative analysis of the better known of these Web development approaches can be obtained in [6], from which is
derived the chronological “map of the territory” shown in Figure 1. Although some of these approaches, such as HDM (Hypermedia Design Model), are no longer in use, they nevertheless continue to be relevant to the Web development community because of the underlying concepts and principles upon which they are based. A number of the early approaches, such as HDM [1] and RMM (Relationship Management Methodology) [14], were based on Entity-Relationship Modeling, but all of the subsequent methodologies included in Figure 1 are object-oriented.
An important departure was the influential publication of OOHDM (Object-Oriented Hypermedia Design Method) [15]. This methodology is based on both HDM and the object-oriented paradigm and offers a systematic approach for the design and implementation of hypermedia systems. The valuable contribution of OOHDM to the field of Web engineering research is generally acknowledged and many of its ideas have since become widely accepted. OOHDM proposed dividing hypermedia design into several models, each of which represented a critical aspect of hypermedia systems: a conceptual model, a navigational model and an abstract interface model. This notion of separating the different aspects of hypermedia systems was novel at the time, but it is now followed by the Web engineering research community, enabling the complexity of a system to be broken down into separate layers. In OOHDM, a change in the navigational model affects only the navigational model, and the conceptual model needs no changes. Another important idea of OOHDM was to use class diagrams to model not only the conceptual model but also the navigational model via an extension of the basic class diagram. Additional aspects which could not be easily or fully explained using class diagrams, such as navigational context or abstract interface diagrams, could be modeled using a supplementary notation proposed by OOHDM.
Following OOHDM, more approaches were put forward, each of which offered new ideas, models, processes and techniques suited to the specific needs of interactive hypermedia systems and the Web environment. Gradually, hypermedia systems evolved into fully-fledged Web-based information systems, and these approaches were modified to meet this new challenge; for example, HDM (Hypermedia Design Model) developed into HDM2, which later mutated into HDM2000 / W2000 and eventually led towards WebML.
It is clearly evident, even from just a cursory glance at Figure 1, that there are a substantial number of different methods in existence. This leads to the obvious questions: Why are there so many approaches? Is there no standard? Each approach focuses on some specific aspects and proposes suitable models, techniques and vocabularies. For instance, WSDM (Web Site Design Method) [16] is mainly focused on the design of Web sites from a user-centered perspective. It proposes a specific way to deal with different audience classes and roles and, in this aspect, is one of the most interesting approaches. However, for its navigational and conceptual models, its approaches are quite similar to OOHDM and EORM (Enhanced Object-Oriented Relationship Methodology) [17], although it uses a
different vocabulary and modeling notation.
FIGURE 1. The evolution and coverage of the best-known Web development methodologies
The overwhelming number of approaches and vocabularies is one of the most criticized aspects of the Web methodology community. Approaches are often defined without connection and are not compatible with one another. Regrettably, a survey of the literature on Web development methods would lead one to conclude that the “not invented here” syndrome is rife, with numerous authors independently devising their own modeling notations to represent very similar concepts in quite different ways. This has led to a fragmentation rather than a coming together of the cumulative body of knowledge, reminiscent of the early years of the object-oriented paradigm. More recently, some approaches such as WebRE [18] have been developed in order to try to solve this problem of incompatibility. WebRE is a methodology which deals with Web requirements based on W2000 [19], NDT (Navigational Development Technique) [20], UWE (UML Web Engineering) [21] and OOHDM.
Another important observation that can be noticed from Figure 1 is the varied coverage by methods of the development phases. In the Figure, each approach is located in the phase where its main focus lies. Thus, although the UWA Project [22] or WebML (Web Model Language) [23] give some consideration to requirements definition and implementation, they mainly emphasize the analysis and design phase. As can be seen, the majority of Web development methods are concentrated within the analysis and design phase, with noticeably less focus on the other phases of the life cycle.
One particular aspect of Web engineering that remains problematic is the lack of integrated toolsets to support development methods and approaches, a long-standing difficulty alluded to some years ago in [8]. Because of the frequent changes in Web systems and the imperative to release fully functional upgrades quickly and often, Web development methods must be highly agile. The use of
CASE tools that provide automated processes and enable rapid development/re-factoring is therefore necessary. In recent years, approaches such as UWE, which offers a tool named MagicUWE [24], and WebML, which is supported by the WebRatio tool [25], have been greatly welcomed. Nevertheless, for CASE tools to be interoperable and interchangeable between and across Web development methods, it is essential that there must be a mechanism to facilitate the transformation and consistent integration of semantic metamodels. In this regard, MDWE offers much promise because it potentially enables Web developers to mix-and-match method fragments taken from different approaches and combine them into a tailored hybrid which is customized to the needs of a particular development project. This paper offers a critical view about this possibility by analyzing if approaches can be easily integrated or extended with new approaches.
2.2 Model-Driven Web Engineering (MDWE). Several comparative studies and surveys of Web development methodologies have drawn attention to areas where further research is needed to address a number of clearly identified gaps and shortcomings. Within the Web engineering community, a number of research groups are working towards suitable resolutions to these gaps, which as already outlined in the previous section can be broadly classified within three areas:
• There are a wide variety of Web development methodologies, using a multiplicity of different notations, models and techniques; this lack of homogeneity and standardization is unnecessarily confusing and counter-productive because, although the underlying concepts and principles of many of these methodologies are quite similar, the fact that they use their own way of doing things hinders interoperability.
• As can be seen in Figure 1, no single Web development approach provides coverage for the whole life cycle, and this absence of a single “all-in-one” solution means that Web developers must mix-and-match aspects from different approaches, hence the need for methods that are compatible and interoperable.
• There still remains a lack of tool support for Web development methodologies, and conversely a lot of development tools lack methodical analysis/design components, so there is a bilateral disconnect between development tools and development methodologies, especially between analysis/design and implementation.
All of these issues can be addressed to some extent by adopting a model-driven development paradigm such as MDWE. This paper presents a novel contribution since it mainly focuses on analyzing approaches oriented to the model-driven paradigm. In MDWE, concepts have the greatest importance, independent of their representations. MDWE proposes the representation of concepts using metamodels which are platform-independent. The development process is supported by a set of transformations and relations among concepts that enables agile development and assures consistency between models. The model-driven paradigm is being used with excellent results in some areas of
software engineering and development. This suggests it could also be applied in Web engineering. For instance, in software product lines, MDE is offering a suitable way to assure traceability and product derivation [26, 27].
It is also offering promising results in the area of programming languages. Thus, some important frameworks for Web system development based on MVC (Model View Controller) provide an easy way to build Web software. Struts [28], Django [29], and Ruby on Rails [30] are relevant examples. They are open-source Web application frameworks which use the MVC architecture and combine simplicity with the possibility of developing Web applications while writing as little code as possible, relying on simple configuration. In fact, the basis of these frameworks is also in MDE.
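The MVC split these frameworks share can be illustrated with a toy example. The classes below are generic placeholders, not the actual Struts, Django, or Rails APIs: the model holds data, the view renders it, and the controller mediates user actions between the two.

```python
# Minimal, framework-independent illustration of the MVC architecture
# (names are generic placeholders, not a real framework's API).

class Model:
    """Holds the application data."""
    def __init__(self):
        self.items = []

class View:
    """Renders the data for presentation."""
    def render(self, items):
        return ", ".join(items)

class Controller:
    """Mediates user actions between the model and the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def add_item(self, item):
        self.model.items.append(item)
        return self.view.render(self.model.items)

# Usage: a user action flows through the controller to the model,
# and the updated state is rendered by the view.
c = Controller(Model(), View())
rendered = c.add_item("first")   # -> "first"
```

Frameworks like the ones cited above largely generate or infer this wiring from models and conventions, which is why the paper views them as implicitly model-driven.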
MDE has also recently been used in the test phase. TDD (Test-Driven Development) is a relatively new direction of research which is providing important results. Both the definition of metamodels to represent test aspects and the use of transformations to derive test cases are interesting research areas [31-33]. Additionally, MDE is becoming ever more prevalent in university teaching, owing to its multiple applications and uses [34, 35].
It can therefore be seen that the use of MDE in different areas of software development has increased considerably in recent years, embracing programming, architectures, software product lines, testing, SOA (Service-Oriented Architecture) development, aspect-oriented programming, etc. This paradigm is being adopted in all these areas with relevant results, and has also been applied to Web engineering.
MDWE (Model-Driven Web Engineering) refers to the use of the model-driven paradigm in Web development methodologies [36]. It helps to derive models at a specific point of the development process by using the knowledge acquired in the previous stages together with the models previously developed.
Such is the allure of MDWE that a number of the “classic” Web development approaches shown in Figure 1 are now evolving to embrace this new paradigm, as explained in section 3. In order to analyze this evolutionary process, it is necessary to firstly clarify how MDWE can fill some of the aforementioned gaps in Web engineering listed at the beginning of this section.
Metamodels provide a solution for the multiplicity of vocabularies and approaches. A metamodel is an abstract representation of concepts. It does not focus on terminology or the way of expressing concepts. It only focuses on the concept itself. Thus, for instance, a storage requirement represents the necessity of the system to store information about content. In UWE, it is represented with a UML class and named Content. In NDT, it is called a Storage Information Requirement and it is described with a special pattern. Nevertheless, the concept is the same. Hence, a common metamodel can be defined and some transformations from the common metamodel to the specific approach can then be declared.
By the definition of common or standard metamodels, Web development
methodologies can become compatible, and the differences in vocabulary, together with the lack of connection among different approaches, can be resolved. A development team can use the most powerful ideas of each approach and, through transformations, benefit from the advantages of the others.
As indicated in Figure 1, there is no single approach that covers the complete development life cycle in depth and each approach has its own particular strengths. Thus, a development team could be interested in applying the requirements approach of NDT with the aim of capturing the business knowledge, the analysis and design phases of UWE and the code generation of WebML. This can be possible if a suitable set of metamodels and transformations is defined. An example of this idea is illustrated in Figure 2. Starting with the requirements phase of NDT, after some transformations, it could be moved into the common model (a concrete instance of the metamodel in this project). After that, transformations could be applied to get UWE analysis and design and the process could be repeated to use the code generation of WebML. This hypothetical scenario could enable a developer to benefit from the advantages separately provided by each approach, and through the synergy achieved by combining different parts of different methods the problem of lack of full lifecycle coverage can be addressed.
FIGURE 2. Use of common metamodels to make approaches compatible
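The chain just described (NDT requirements lifted into a common metamodel instance, handed to UWE analysis/design, then to WebML-style code generation) can be sketched as a composition of transformations over toy model objects. All structures and names below are invented for illustration; real MDWE toolchains express such transformations in languages like QVT over actual metamodels.

```python
# Toy sketch of the transformation chain of Figure 2 (all model structures
# and names are invented; real tools use QVT over the actual metamodels):
# NDT requirements -> common metamodel instance -> UWE design -> WebML-style code.

def ndt_to_common(ndt_requirements):
    # Lift approach-specific storage requirements into the common metamodel.
    return {"concepts": [{"name": r} for r in ndt_requirements]}

def common_to_uwe(common_model):
    # Derive a UWE-flavored design model: one content class per common concept.
    return {"content_classes": [c["name"] for c in common_model["concepts"]]}

def uwe_to_webml_code(uwe_model):
    # Pretend code generation: emit one WebML-like data-unit stub per class.
    return [f"<DATAUNIT entity='{name}'/>" for name in uwe_model["content_classes"]]

# Usage: requirements captured in NDT flow through the common model to code.
requirements = ["Customer", "Order"]
generated = uwe_to_webml_code(common_to_uwe(ndt_to_common(requirements)))
```

The point of the sketch is structural: each stage only consumes the previous stage's model, so any approach that can map to and from the common metamodel can be slotted into the chain.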
Obviously, the quality of both the metamodel and the transformations is fundamental in obtaining suitable results. Defining a common metamodel is a hard task, since a high degree of abstraction is necessary to define concepts and to find common ones. As mentioned in [37], there are some important studies that deal with the use of metamodels to fuse or make compatible different approaches. If the use of tools is necessary in Web engineering, it is essential in MDWE. If Figure 2 is analyzed again, it becomes clear that these ideas cannot be applied without tool support. Transformations must be carried out automatically; the development team should not have to apply them manually. Although tools to define metamodels and transformations are still in the early stages of development, some important advances are being made. Thus, SmartQVT [38] and Moment [39] are two good examples. In particular, it is notable that these tools are methodology-independent because they are based on standards such as UML profiles [40] and QVT languages [41]. Therefore, if a metamodel of any Web
development approach is defined using standards, any of these new tools is suitable.
The absence of practical applications is the only area that cannot be directly solved with MDWE. Nevertheless, in the very few practical applications that have been published, the results are promising [20].
However, as can be deduced from this introduction, this paper offers an overall review of the situation and analyzes how MDE can solve the classical problems detected in Web development in recent years.
2.3 Model-Driven Architecture (MDA). MDA [42] is the standard Model-Driven Architecture defined by the Object Management Group (OMG) in 2001. It is oriented towards outlining a common architecture in the MDE environment. In MDA, four levels are proposed:
- **CIM (Computation-Independent Model):** This level defines concepts that capture the logic of the system. For instance, the business and requirements models are included in this level.
- **PIM (Platform-Independent Model):** This level groups concepts that define the software system without any reference to the specific development platform. For instance, analysis artifacts are included in this level.
- **PSM (Platform-Specific Model):** In this level, computer-executable models that depend on the specific development platform are defined, such as models for Java or .NET.
- **Code:** This is the most concrete level and includes the implementation of the system.

In MDA, transformations can be defined between these levels; thus, CIM-to-PIM, PIM-to-PSM or PSM-to-code transformations can be defined. Furthermore, transformations within the same level, for instance PIM-to-PIM, can also be defined. In Section 4 of this paper, MDA is used as a basic reference framework to compare and study a number of MDWE approaches. Most of these approaches define their metamodels and transformations based on the MDA standard, although each focuses on different levels of it.
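To make the transformation chain concrete, the following is a minimal, hypothetical sketch of the PIM-to-PSM and PSM-to-code steps. All class and function names (`PimClass`, `psm_to_code`, etc.) are our own illustrations and do not come from the MDA standard or from any of the surveyed approaches:

```python
from dataclasses import dataclass, field

@dataclass
class PimClass:
    """Platform-independent class: attributes use abstract types."""
    name: str
    attributes: list = field(default_factory=list)  # (name, abstract type)

@dataclass
class PsmJavaClass:
    """Platform-specific class: attribute types mapped to Java."""
    name: str
    fields: list

# Abstract-to-Java type mapping used by the PIM-to-PSM step.
TYPE_MAP = {"string": "String", "integer": "int", "boolean": "boolean"}

def pim_to_psm(pim: PimClass) -> PsmJavaClass:
    """PIM-to-PSM: bind each abstract type to a Java type."""
    return PsmJavaClass(pim.name,
                        [(n, TYPE_MAP[t]) for n, t in pim.attributes])

def psm_to_code(psm: PsmJavaClass) -> str:
    """PSM-to-code: emit a Java class skeleton."""
    lines = [f"public class {psm.name} {{"]
    lines += [f"    private {t} {n};" for n, t in psm.fields]
    lines.append("}")
    return "\n".join(lines)

pim = PimClass("User", [("login", "string"), ("age", "integer")])
print(psm_to_code(pim_to_psm(pim)))
```

In a real MDA tool chain these mappings would be expressed in a transformation language such as QVT rather than in general-purpose code; the sketch only illustrates how each step consumes the model of the previous level.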
3. Web Development Methods based on the MDE Paradigm. This section presents a number of Web development methods that are based on the model-driven paradigm, some of which are evolutions of classic approaches such as OOHDM and HDM. The main features of each approach are outlined, as well as a summary of advantages and disadvantages and references to where metamodels and transformations can be obtained.
3.1 OOHDMDA. OOHDM has been one of the most important methodologies in Web engineering. Originally proposed in 1995 [15], it introduced important ideas such as separating the design of a Web system into three models: the conceptual model, the navigational model and the abstract interface model. This idea was taken up by several later approaches. In its first incarnation OOHDM only covered design and implementation; however, it was later enriched with a specific technique, UIDs (User Interaction Diagrams) [44], to deal with requirements.
OOHDMDA [45, 46] is an MDE approach based on OOHDM. Starting from a PIM designed with OOHDM, a servlet-based PSM is generated. The Web application is designed in a UML-based tool using the conceptual and navigational models of OOHDM, and the approach starts from the XMI file generated by that tool. Both models are enriched with behavioral semantics obtained from behavioral model classes incorporated in the approach. From this PIM XMI file, the approach defines servlet-based transformations in order to obtain a PSM XMI file for the specific servlet technology. In this way, the approach offers PIM-to-PSM transformations starting with OOHDM and ending with servlet technology.
Although the approach is based on MDE, there is no specific PIM metamodel for OOHDMDA. Of course, MDE only requires that a development approach uses models and transformations, without necessarily requiring the existence of a metamodel; transformations need not map metamodel to metamodel, they may also map model to model. As an extension of OOHDM, OOHDMDA naturally uses the OOHDM metamodel: OOHDM concepts are defined as stereotypes in a UML-based design tool, and its transformations are implemented in Java.
At the PSM level, OOHDMDA includes two specific metamodels [45]: a servlet-based PSM for dynamic navigation and another for advanced navigation. OOHDMDA thus mainly highlights the PSM level, and although it departs from standards in the definition of the PIM metamodel and transformations, the approach taken is reasonably practical, and illustrative examples can be found in [45, 46]. Moreover, the use of tools in the OOHDMDA development approach offers a suitable environment for practical usage.
OOHDMDA is interesting because it shows how MDE can help to fuse separate approaches. In fact, OOHDMDA is an extension of OOHDM which adds new concepts at the PSM level and uses the abstraction of metamodels for PSM generation. It adds new concerns to those already established by OOHDM and decouples the transformations from the implementation of the tool which supports the methodology.
3.2 WebML. As defined by its authors [23], WebML is a notation for specifying the conceptual design of complex Web sites. Its development process starts with the conceptual modeling of the system using a data model. In this phase, WebML does not define its own notation and instead proposes the use of standard modeling techniques such as Entity-Relationship diagrams or UML class diagrams.
The process continues with the definition of a hypertext model, which describes the hypertexts that can be published on the Web. Each “hypertext” defines a view of the Web site. Hypertexts are described by means of two models: the composition model, which defines the pages and “content units” of the system, and the navigation model, which describes the navigation through these pages. The next step develops the presentation model, which defines the physical appearance of the Web pages. Finally, the personalization model defines how the system has to be adapted to each user’s role.
One of the most interesting contributions of WebML is that it offers a CASE tool named WebRatio [25] which enables the proposed techniques to be applied systematically.
Although WebML generates code from PIM models, there is no formal model-driven definition of the approach in the form of a metamodel or a set of formal transformations. In fact, two alternative metamodels for WebML can be found in the literature. The first, henceforth referred to as WebML\(_1\) [47], is a MOF-based metamodel for the analysis and design models of WebML. This metamodel is divided into four packages: CommonElements, DataView, HypertextView and PresentationView, one for each of the models that WebML considers. In each package, specific metaclasses, meta-associations and constraints represent each artifact of the methodology. Moreno et al. [47] use OCL (Object Constraint Language) to express these constraints. Transformations are not proposed, as this work mainly focuses on defining a metamodel for WebML.
The other metamodel for WebML, here called WebML\(_2\), is the result of a study by Schauerhuber et al. [48] that attempts to ease the application of MDE techniques to Web modeling languages. They present a semi-automatic approach that allows the generation of MOF-based metamodels from DTDs (Document Type Definitions). These metamodels are also divided into packages that follow the initial definition of the analysis and design WebML metamodels: Hypertext Organization, Access Control, Hypertext, Content Management and Content. Some OCL constraints are also incorporated to represent restrictions on metaclasses and associations. In this approach, some transformations are defined in order to obtain WebML metaconcepts from DTD elements. These transformations offer suitable reusability of the solutions to cope with some of the disadvantages detected in the use of DTDs. Transformations are defined informally, with a corresponding matrix
included in a Metamodel Generator (MMG). These two metamodels are quite similar despite the different ways in which they group WebML concepts. Although it is not an aim of this paper, their comparative study could prove very interesting. Some formal work has already been carried out for the translation of WebML models into a formal MDA environment [31].
3.3 W2000. As mentioned in a previous section, the W2000 approach [49] evolved out of HDM [1]. W2000 and HDM differ in two basic regards. Firstly, HDM is essentially an extended E-R metamodel rather than a methodological proposal, whereas W2000 proposes a life cycle for developing Web systems. Secondly, W2000 is based on the object-oriented paradigm. Despite these differences, the fundamental concepts of HDM have been inherited by W2000 and adapted to the object-oriented paradigm.
The W2000 life cycle starts with a requirements analysis phase, mainly based on use cases. By using the knowledge acquired from this requirements phase, the process goes on to the hypermedia design phase where two models are developed: the conceptual and the navigational model. To this end, W2000 modifies and extends some UML models such as the class diagram and the state diagram. The last phase is the functional design phase where the sequence diagram is used to express the functionality of the system.
In more recent work [19], W2000 is presented as a MOF metamodel. This metamodel covers only concepts related to the analysis phase. It is structured into four packages, one for each of the models defined by W2000 in the analysis phase: Information, Navigation, Presentation and Dynamic Behavior.
The abstract specification of the metamodel and the organization of metaclasses seem very relevant; however, this approach only covers the definition of metamodels and some constraints among concepts expressed with OCL. There are no transformations defined in the approach. Thus W2000 is just at the first stage of embracing the MDE paradigm.
3.4 UWE. UWE (UML-based Web Engineering) is one of the most cited techniques in Web engineering and one of the first to evolve into the MDE paradigm. UWE is a Web approach that covers the complete life cycle, although it mainly focuses on the analysis and design phases. One of its most important advantages is that all its models are formal extensions of UML: UWE uses a graphical notation entirely based on UML, which enables the use of UML-based tools and reduces the learning time for Web developers already familiar with UML. Tool support for UWE is available in the form of MagicUWE, a plug-in for MagicDraw that offers every artifact of UWE [50].
UWE follows the idea of model separation introduced by OOHDM, although it proposes the inclusion of some new characteristics, such as Adaptations and Presentation. Its MDE approach is perhaps one of the most complete since it offers a metamodel for each model of UWE: Requirements, Content, Navigation,
Presentation and Process, together with the set of transformations to derive some models from others [51]. The content model is based on UML class diagrams, whereas the requirements model is based on WebRE [18]. Additionally, UWE has defined profiles in order to work with these metamodels. This profile definition is an efficient way to incorporate the UWE metamodels into any UML-based design tool that supports profiles.
In regard to transformations, UWE defines them using QVT as a standard language [52]. There have been some interesting experiences with implementing part of its transformations using ATL (for PIM-to-code transformations). One such implementation is UWE4JSF, a plug-in tool built with EMF that allows the generation of Web applications for the JSF (Java Server Faces) platform [53].
3.5 NDT. NDT (Navigational Development Techniques) [20] is an MDWE methodological approach mainly focused on requirements and analysis. NDT defines a set of CIM and PIM models and a set of transformations, expressed in QVT, to derive the PIM models from the CIM models.
As in other approaches, these metamodels are defined using class diagrams. The requirements metamodel of NDT is an extension of WebRE that adds new concepts to the WebRE approach. NDT also includes two metamodels for the PIM level, the content and the navigational: the former is the UML metamodel for class diagrams and the latter is the UWE metamodel.
One of the most important advantages of this methodology is its tool support. A set of tools called NDT-Suite supports the MDE development process of NDT (this toolset can be obtained at http://www.iwt2.org). Each metamodel of NDT has a specific profile implemented in Enterprise Architect [54]. The NDT methodology has adapted the interface of this tool with a set of toolboxes giving direct access to each artifact of the methodology. This environment is called NDT-Profile. In addition, NDT-Suite includes six other tools:
1. NDT-Driver: A tool to execute the transformations of NDT. NDT-Driver is a free Java-based tool that implements the QVT transformations of NDT and enables analysis models to be obtained automatically from the requirements models. Although the transformations of NDT are completely defined using QVT, they are implemented in NDT-Driver in Java, which is very suitable for researchers working with companies on industry-based projects.
2. NDT-Quality: A tool that checks the quality of a project developed with NDT-Profile. It produces an objective evaluation of a project and assesses whether the methodology and the MDE paradigm are used correctly. To this end, NDT-Quality includes a test rule file that checks the use of QVT transformations in an NDT project.
3. NDT-Report: A tool that prepares formal documents that are validated by final users and clients. For instance, it provides the automatic generation of a Requirements Document according to the format determined by clients.
4. NDT-Prototypes: A tool that generates valuable prototypes from the NDT requirements. Because of the high level of tool support in NDT, with transformations capable of being executed automatically and assistance provided for all stages of the development life cycle, the NDT approach has been used in practice on several real projects [55].
5. NDT-Glossary: A tool that implements an automated procedure to generate the first instance of the glossary of terms of a project developed by means of the NDT-Profile tool.
6. NDT-Checker: The only tool in NDT-Suite that is not based on the MDE paradigm. This tool includes a set of sheets, different for each product of NDT. These sheets provide checklists that should be reviewed manually with users during requirements reviews.
3.6 OOWS. OOWS [56] is a Web methodology which mainly focuses on the analysis phase. It is a Web extension for a previous methodology, OO-Method [57], which is based on the object-oriented paradigm and includes three models: a Structural Model, a Dynamic Model and a Functional Model. OOWS includes another two models specific to Web development: a Navigational Model and a Presentation Model.
OOWS is based on model-driven development, and a recent paper [58] presents an approach for the transformation of a Web model into a set of prototypes. Firstly, this approach uses task metaphors to define requirements, and these tasks are translated into an AGG graph. Analysis models are then obtained using graph transformations. Graph grammars and graph transformations are a very mature approach for the generation, manipulation, recognition and evaluation of graphs [59], and most visual languages can be interpreted as a type of graph (directed, labeled, etc.). Graph transformations are thus a natural and intuitive way of transforming models. In contrast with other model transformation approaches, graph transformations are defined visually and are supported by a set of mature tools to define, execute and test transformations.
The OOWS approach is supported by a tool called OOWS Suite, which is a formal extension of a commercial tool named OlivaNova that supports the complete life cycle of OO-Method. Valverde et al. [60] provide a detailed description of this tool (see http://www.care-t.com/products/).
The metamodel of OOWS is a MOF-based metamodel which is easily understood. However, its transformations are not based on OMG standards, so the approach is not fully compatible with other similar approaches.
4. A Critical Analysis of MDWE Methodologies. In this section, the MDWE approaches outlined in Section 3 are critically analyzed and, where it is possible and appropriate to do so, compared. Because the degree of definition of the metamodels and transformations differs across these MDWE approaches, in some cases there is not enough information available to compare them using the same criteria.
Before presenting the findings of our analysis, we should first explain where each approach is located within the MDA framework. Table 1 shows each level of MDA, and an ‘X’ indicates that the MDWE approach works at this level; that is, that the approach defines metamodels and transformations oriented to the development of models at this level of abstraction. As previously seen in Figure 1, most of the “classic” Web development approaches focused on analysis and design. Similarly, Table 1 shows that most of the MDWE approaches focus on the PIM level, which is equivalent to analysis and design within the MDA environment.
**Table 1. Web development approaches located within the MDA environment**
<table>
<thead>
<tr>
<th>MDA levels</th>
<th>CIM</th>
<th>PIM</th>
<th>PSM</th>
<th>Code</th>
</tr>
</thead>
<tbody>
<tr><td>OOHDMDA</td><td></td><td>X</td><td>X</td><td></td></tr>
<tr><td>WebML<sub>1</sub></td><td></td><td>X</td><td></td><td></td></tr>
<tr><td>WebML<sub>2</sub></td><td></td><td>X</td><td></td><td></td></tr>
<tr><td>W2000</td><td></td><td>X</td><td></td><td></td></tr>
<tr><td>UWE</td><td>X</td><td>X</td><td></td><td></td></tr>
<tr><td>NDT</td><td>X</td><td>X</td><td></td><td></td></tr>
<tr><td>OOWS</td><td>X</td><td>X</td><td></td><td></td></tr>
</tbody>
</table>
Notably, none of the approaches compared here covers the whole of MDA. In theory, as indicated in Figure 2, the use of common metamodels and transformations could allow developers to take the models from one phase of a particular MDWE approach and transform them into models of another MDWE approach in order to proceed to the next phase of the development life cycle. Obviously, for such integration to work in practice, the authors of MDWE approaches must work together to define transformations so that approaches can be adapted for fusion. At present, interoperability of MDWE approaches is for the most part difficult, if feasible at all, but an example of how different approaches can be combined is provided in the work of Moreno et al. [36], where a common metamodel is defined to work with OOH, UWE and WebML. It is therefore possible to overlap different MDWE approaches, enabling Web developers to mix and match them so as to obtain both the separate advantages of each approach and the combined benefit of integrating approaches which together support all levels of the MDA framework.
**4.1 Metamodel Complexity.** The MDWE methodologies selected for analysis in this paper differ considerably as regards the aspects covered by their metamodels. As such, it is not possible to directly compare the metamodels of each methodology because of the variations in scope. We explored the possibility of comparing corresponding subsets of the methodologies, but this is not feasible because of the differences in the ways the metamodels are described. The purpose of this section of our analysis is therefore to provide an indication of the cognitive complexity of the metamodels of the methodologies. The rationale for looking at cognitive complexity is that previous research has shown it to be a relevant factor affecting the adoption of Web development methodologies in practice [4, 13].
Cognitive complexity is a subjective notion, but it is related to structural complexity, which can be assessed using appropriate metrics [61]. To guide our analysis, a review of the literature on metamodel metrics was conducted following the general principles laid down by Kitchenham et al. [10, 11]. In the methodologies analyzed, metamodels are introduced as class diagrams; for this reason, standard class diagram metrics are appropriate. Because we are interested only in the static elements of the metamodel, only class diagram metrics relating to structure were selected, while those regarding behavior and functionality were omitted from our analysis. After analyzing several metric approaches [62-65], we chose a number of classic class diagram metrics as the basis of our analysis: number of classes; maximum number of attributes per class; maximum inheritance depth; average number of child classes inherited; and the number of new concepts presented. A detailed definition of these metrics and their general meaning in object-oriented models can be found in Pressman [65].
We must qualify our analysis by acknowledging that such metrics do not necessarily give a true and fair view of the degree of complexity of a methodology, because richer methodologies may be seen to be of greater size simply because they have broader scope and therefore have more extensive metamodels. It would be better to provide some indication of the “accidental” complexity of a metamodel (i.e. the amount of unnecessary complexity) but because this is a very difficult thing to measure we instead chose to use the aforementioned metrics of structural complexity as a proxy for overall cognitive complexity. Only those metamodels specific to each approach were considered:
- In OOHDMDA, only the servlet-based PSM metamodels for dynamic navigation and for advanced navigation are included. Although OOHDMDA uses the OOHDM metamodel, the latter is not proposed by the approach itself.
- In WebML1, all four packages were considered: CommonElements, DataView, HypertextView and PresentationView.
- In WebML2, five packages were considered: Hypertext Organization, Access Control, Hypertext, Content Management, Content.
- In W2000, its four packages were included in the survey: Information, Navigation, Presentation and Dynamic Behavior.
- For UWE, only four packages are considered: Requirements, Navigation, Presentation and Process. The Content package is not included because it is based on the UML metamodel for class diagrams.
- In NDT, only the requirements metamodel is included. This approach also uses the UML content metamodel and the UWE navigation metamodel.
- Finally, for OOWS, only Navigational and Presentation metamodels are considered since it inherits the rest of the metamodels from OO-Method.
The results of our analysis are presented in Table 2. We wish to emphasize that the purpose of Table 2 is not to directly compare methodologies, because it is not possible to do so on this basis. Nor should it be inferred that methodologies of greater dimension are of lesser usefulness, because the various methodologies differ in scope. Our intention here is to provide some indication of the overall size of the metamodels contained within each methodology, which we interpret as a proxy for overall cognitive complexity. Obviously, the fact that one approach deals with a higher number of metaclasses or concepts than another does not mean that it is worse. However, authors should be conscious of the importance of recommending metamodels that are easily understandable, and metamodel complexity is essential in this regard.
Table 2. Metamodel Metrics for Each MDWE Approach
<table>
<thead>
<tr>
<th></th>
<th>OOHDMDA</th>
<th>WebML<sub>1</sub></th>
<th>WebML<sub>2</sub></th>
<th>W2000</th>
<th>UWE</th>
<th>NDT</th>
<th>OOWS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Number of classes</td>
<td>14</td>
<td>51</td>
<td>53</td>
<td>21</td>
<td>38</td>
<td>10</td>
<td></td>
</tr>
<tr>
<td>Number of new concepts presented</td>
<td>13</td>
<td>53</td>
<td>53</td>
<td>21</td>
<td>38</td>
<td>12</td>
<td>5</td>
</tr>
<tr>
<td>Maximum number of attributes per class</td>
<td>6</td>
<td>3</td>
<td>3</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>5</td>
</tr>
<tr>
<td>Average number of methods</td>
<td>1.5</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Average number of child classes inherited</td>
<td>2</td>
<td>2.4</td>
<td>2.3</td>
<td>1.75</td>
<td>2.5</td>
<td>1</td>
<td>2</td>
</tr>
</tbody>
</table>
The number of classes counts the classes specific to a metamodel; it does not include classes imported from other packages. A high number of classes can reduce the readability of the metamodel. The number of new concepts measures the concepts introduced by the approach in its metamodels and is closely related to the number of classes: concepts are normally presented as classes in metamodels, although in some approaches associations also introduce new concepts. A metamodel with a high number of metaclasses, inheritance relations, or associations will be difficult to understand, so the authors of MDWE approaches should keep these two metrics in mind, since the readability of metamodels is affected by size [61]. As can be seen in Table 2, there are differences between WebML\(_1\) and WebML\(_2\): they offer different metamodels for the same approach, but the number of concepts is the same. In fact, the number of concepts and the number of classes must be quite similar, because each new concept must be defined in the metamodel either as a class or as a special association. If a concept is not included in the metamodel because it is represented as a UML class, it is considered a UML concept rather than a new one. The maximum number of attributes per class is another measure of metamodel complexity. In Table 2, only those attributes presented in the metamodel diagram are included. Approaches that define transformations in a formal way, such as in QVT or XML, have other attributes that, for the sake of simplicity, are not considered here.
The next two metrics concern class inheritance. Inheritance is one of the most relevant constructs in class diagrams, and this metric is applicable to the metamodels of the approaches included in Table 2. In classic class metrics, the maximum inheritance depth should not exceed three levels, since deeper hierarchies make class models too complicated to understand. Similarly, in metamodels a large inheritance depth causes complexity and makes concept definitions difficult to follow. The average number of child classes inherited is considered another important metric; classic metric approaches propose that the number of child classes remain small, since it is also a measure of complexity.
Each author, even within the same approach, as can be observed with WebML, expresses concepts and their relations according to experience. The fact that UWE has fewer concepts than WebML does not mean that its metamodel expresses less semantics. In fact, UWE, NDT and OOHDMDA extend and use a high number of concepts from UML, but these are not included in Table 2.
Some interesting conclusions about metamodels in general can be drawn from Table 2. Firstly, metamodels seldom include methods, since they normally express concepts and their relations and do not include information about functionality. Only OOHDMDA includes some methods, since this approach is close to model generation and these methods express the possibility of this generation. Secondly, inheritance is used in all of the MDWE approaches compared in this study. Inheritance is an important construct for expressing relations and extensions of concepts. The number of child classes and the maximum inheritance depth vary across approaches, although they never reach a high value. In fact, metamodels express concepts and the relations and constraints among them; consequently, authors should reduce the complexity of their approaches. The main aim of a metamodel is to present the approach as simply as possible, and as can be concluded from Table 2, this tendency is followed by the approaches under study.
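The structural metrics discussed above can be computed mechanically once a metamodel is available in machine-readable form. The following sketch uses a toy metamodel of our own invention (the metaclass names are illustrative, not taken from any surveyed approach) to show how the metrics of Table 2 could be derived from a class-diagram-like description:

```python
# Toy metamodel: each metaclass maps to its attribute list and its
# parent metaclass (None for root classes). Purely illustrative.
metamodel = {
    "Node":        {"attrs": ["name"],         "parent": None},
    "Page":        {"attrs": ["url", "title"], "parent": "Node"},
    "ContentUnit": {"attrs": [],               "parent": "Node"},
    "DataUnit":    {"attrs": ["entity"],       "parent": "ContentUnit"},
}

def inheritance_depth(cls: str) -> int:
    """Depth of a metaclass in the inheritance hierarchy (roots = 1)."""
    depth = 1
    while metamodel[cls]["parent"] is not None:
        cls = metamodel[cls]["parent"]
        depth += 1
    return depth

number_of_classes = len(metamodel)
max_attrs_per_class = max(len(c["attrs"]) for c in metamodel.values())
max_inheritance_depth = max(inheritance_depth(c) for c in metamodel)
parents = [c["parent"] for c in metamodel.values() if c["parent"] is not None]
avg_children = len(parents) / len(set(parents))  # children per parent class

print(number_of_classes, max_attrs_per_class,
      max_inheritance_depth, avg_children)
```

On this toy example the script reports 4 classes, at most 2 attributes per class, a maximum inheritance depth of 3, and 1.5 children per parent on average; applied to a real metamodel exported from a MOF-based tool, the same traversal would yield the figures reported in Table 2.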
Finally, although it was not included in the table as a metric, an important advantage for an MDWE approach is the definition of a profile. The standard definition of a UML-based profile for its metamodels is a powerful artifact for any MDWE methodology. In a profile, each concept in the metamodel is defined as a formal extension of a UML class, which brings two important advantages. The first is related to Figure 2: if a methodology such as UWE or WebML defines a concept as an extension of, for instance, a UML activity, it is easier to find a connection between two approaches and to identify similar concepts in their metamodels. The second, as for NDT with NDT-Suite or UWE with MagicUWE, is that the use of a profile facilitates the use of UML-based tools: with a simple extension of UML, any commercial UML-based tool can support the methodology. Only UWE, NDT and WebML offer specific profiles for their metamodels, although, as mentioned above, OOHDMDA uses a profile for OOHDM.
4.2 **Metamodel Concepts.** Concepts are the basic elements handled in MDWE. Each of the MDWE approaches analyzed in this study defines its own concepts. In some cases, these approaches coincide by using the same name for the same concept. However, in other cases, the same name is used for different concepts, or various names are used for the same concept.
The lack of a standard terminology in Web engineering is a well-known and lamented problem [4-7], and indeed it caused some difficulties when conducting this analytical study of MDWE approaches.
Nevertheless, there are a number of concepts which commonly appear in most Web engineering approaches. Based on a review of the literature, including previous comparative analyses of MDWE approaches [5], Table 3 presents an overview of the scope of the MDWE approaches analyzed in this study. An ‘X’ indicates that the approach defines concepts that are included in its metamodel, and a shaded cell indicates that the particular approach does not cover the MDA level. It should be noted that, because each approach uses its own terminology, the row labels in Table 3 may therefore not be the same as the actual name given by each approach to the corresponding concept in its metamodel; however, the essential meaning of the concept is the same.
In the upper section of the table, models treated in the requirements phase (CIM Level) are listed, based on a classification obtained from [5].
In the lower section of the table (the PIM level), a classification of models mainly based on UWE notation is presented. Neither NDT nor OOWS cover this level directly by themselves, although NDT uses some UWE metamodels and OOWS uses the OO-Method to deal with these aspects. For that reason, NDT and OOWS are shown in the table as not covering the PIM level. Similarly, although the WebML methodology deals with adaptation, the WebML₁ and WebML₂ metamodels do not consider this aspect, as indicated in Table 3.
**Table 3. Models covered by each MDWE approach**
<table>
<tbody>
<tr><td><strong>CIM Level</strong></td></tr>
<tr><td>Data requirements model</td></tr>
<tr><td>User interface requirements model</td></tr>
<tr><td>Navigational requirements model</td></tr>
<tr><td>Adaptive requirements model</td></tr>
<tr><td>Transactional requirements model</td></tr>
<tr><td>Non-Functional requirements model</td></tr>
</tbody>
</table>
| **PIM Level** |   |   |   |   |   |   |   |
|---|---|---|---|---|---|---|---|
| Content model | X | X | X | X | X | | |
| Navigational model | X | X | X | X | X | | |
| Presentation model | X | X | X | X | | | |
| Adaptive model | | | | | | | |
| Process model | | | | | | | X |
As was previously mentioned in Section 2, in the overview of “classical” Web engineering approaches, the conclusion can be drawn that the main characteristics studied by these MDWE approaches are again:
- Static aspects, represented by a content model or content requirements
- Navigational aspects, represented by navigational models or navigational requirements
- Presentation aspects, represented by abstract interface models, user adaptation in requirements, etc.
In this sense, MDWE follows the same line as “classical” Web engineering: the new paradigm only offers a new way of carrying out development. As this study concludes, the new paradigm offers solutions to various problems, such as compatibility between approaches and the use of UML-based tools, but it introduces no concepts different from those of classical Web engineering.
### 4.3 Transformations

If metamodels are the base of MDE, transformations are its most important advantage. Transformations ease model derivation and help maintain traceability among models: a transformation establishes connections between one model and another, allowing knowledge to be carried over from one phase to the next. Thus, for instance, if in the requirements phase the development team detects the need to store data about the system’s users, the CIM-to-PIM transformation will create a class in the analysis model to hold the users’ data, and the PIM-to-PSM transformation will define a persistent Java class to store this information.
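As a purely illustrative sketch of the last step in that chain, a PIM-to-PSM transformation targeting Java might emit a persistent-style class such as the following. The class and field names are hypothetical, not the output of any of the surveyed tools:

```java
// Hypothetical PSM-level artifact: a persistent-style Java class that a
// PIM-to-PSM transformation could generate from a "store user data"
// requirement detected at the CIM level. All names are illustrative.
public class User {
    private final Long id;      // persistent identity
    private final String name;  // attributes derived from the PIM content model
    private final String email;

    public User(Long id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
    public String getEmail() { return email; }
}
```

In a real tool chain, persistence annotations or mapping files would typically be generated alongside such a class; the point here is only that each PSM element is derived mechanically from a PIM element.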
Special metrics to measure the quality of a transformation were not found in the literature. The reason for this gap in the literature can be attributed to a number of factors: transformations are a new way of building software, the standards for definition (e.g. QVT) have only recently been defined, and each MDWE approach uses either a different way to express transformations or different transformations languages. Furthermore, each of the MDWE approaches that we analyzed has a different degree of development in its transformations. Notwithstanding these difficulties in forming meaningful comparisons, Table 4 presents an outline of the set of transformations dealt with by each of the MDWE approaches under consideration in this study. An ‘X’ indicates that the approach supports the specified transformation.
It is important to point out that some of these approaches have as yet only defined a metamodel, but do not incorporate transformations. Furthermore, the definition of transformations is still a relatively unexplored area in the model-driven paradigm. OMG has defined a standard, known as QVT, which is still in its early stage and, although there are some tools that support this language, such as SmartQVT or Moment, insufficient development means that research groups cannot provide the transformations yet.
An important approach in this area is OOWS, which uses a set of graph transformations to translate a CIM model into a PIM model. The use of graph transformations and AGG graphs addresses problems of incompatibility. Since OOWS is the only approach under study that works with this technology, it is difficult to compare its results with the others. However, AGG graphs and their transformations constitute a robust and well-studied environment, and suitable tools support these transformations; OOWS has therefore implemented its translations and offers a suitable tool environment for their use.
On the other hand, NDT has defined its transformations theoretically with QVT. Nevertheless, it has translated these transformations into Java and offers model derivation in its tool, NDT-Driver. This is a suitable solution for use in practice, but it is not the principal aim of the MDE paradigm. Ideally, the MDE community would have a general, standard tool that allows metamodels to be defined in a standard language (for instance, as a class metamodel) and that offers a suitable environment for defining transformations in a standard such as QVT. As explained in Section 4.5, this is currently one of the areas of the MDE environment most in need of research.
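To make the idea of hand-implementing a QVT-style rule in Java concrete, here is a minimal, purely hypothetical sketch (none of these names are NDT’s actual API): every storage requirement in the requirements (CIM) model yields an analysis class of the same name in the PIM model.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical hand-coded counterpart of a QVT relation, in the spirit of
// implementing transformations in Java as NDT-Driver does. All names are
// illustrative; this is not NDT's actual code or API.
public class Req2Analysis {
    public record StorageRequirement(String name) {}  // CIM-level element
    public record AnalysisClass(String name) {}       // PIM-level element

    // The "relation": for every storage requirement in the source model,
    // enforce the existence of an analysis class with the same name.
    public static List<AnalysisClass> transform(List<StorageRequirement> cim) {
        List<AnalysisClass> pim = new ArrayList<>();
        for (StorageRequirement r : cim) {
            pim.add(new AnalysisClass(r.name()));
        }
        return pim;
    }
}
```

Note the maintenance cost this style implies: a change in the conceptual QVT rule requires a manual change in the Java code that implements it.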
### 4.4 Standards and compatibility
One of the most important advantages of the MDWE paradigm is the possibility of making various approaches compatible. MDWE is focused on concepts and the way to deal with and represent these concepts is unimportant. However, if a metamodel or a concept is defined freely without reference to a common standard, the multiplicity of concepts can surface again as a problem, just as it originally did in the Web engineering approaches of the 1990’s. If a metamodel or some transformations were defined using a common language, the connection among approaches could be easily facilitated.
To this end, the use of UML profiles offers very interesting results. A UML profile is an extension mechanism offered by UML to extend the basic concepts of a MDWE approach. Thus, if an approach defines its own metamodel using a class diagram and later defines a UML profile, then it offers a standard definition of its concepts that can be understood by other researchers and groups. As examples of UML profiles, NDT provides the concept of Storage requirements which is an extension of the UML class, while UWE defines the Content concept, which is also an extension of the UML class. If both are analyzed in each approach, we can
**Table 4. MDA Transformations dealt with by MDWE approaches**

|  |  |
|---|---|
| Transformation of CIM to PIM | X |
| Transformation of PIM to PSM |  |
| Transformation of PSM to Code | X |
| Language used for transformations | Java, XSLT, QVT, ATL, QVT, Graph trans. |
conclude that they represent the same idea, although each methodology names it differently. Extensions based on the same UML concept give rise to opportunities for forward compatibility, thereby representing an important step towards a common metamodel for Web modeling [48].
### 4.5 Tools and industry experiences

Despite the fact that Web engineering is a very active research area and some very good results have been achieved, many of its ideas, models and techniques have yet to be applied within industry. As one example of the low rate of knowledge transfer into practice, a recent study conducted in Spain, which interviewed more than 50 project managers and 70 analysts from a sample of 30 software companies (local, national and international organisations), found that just 25% of medium- to large-sized companies (i.e. more than 50 employees) knew anything about “Web engineering”, while only 10% of small companies had heard of it. Overall, only 1% of companies had applied Web engineering in their projects [66]. Similar findings were reported in comparable studies conducted in Ireland [4, 13]. These results indicate that although Web engineering methods could be very useful, few of them are currently in use.
In MDWE the situation has not changed: so far there has been very little application of academic research experience to real projects. Of the MDWE approaches described in this paper, only a few published accounts of “real world” practical applications are known to exist. In [20], NDT shows how its initial definition evolved through feedback from practice, and these experiences show the good results that can be obtained. Two relevant advantages offered by the MDWE paradigm are the reduction of development time through transformations and the concordance among models in different phases. WebML, and principally its tool WebRatio, has also been applied with successful results within real enterprises.
The need for translational research to move findings from academic laboratories into practice is widely accepted within applied disciplines such as Web engineering. However, practical application is not possible without suitable tools that support the application of models and techniques, the execution of transformations, and the maintenance of model coherence. Here the approaches examined in this paper have an important advantage: through metamodels, standards, profiles and basic tools under the MDE paradigm, tool support has evolved further in MDWE than in “classic” Web engineering, whose lack of suitable tools has always been a disadvantage. All the approaches studied in this survey offer suitable tool environments for their application.
The use of profiles to facilitate tool support is potentially very interesting. With a profile definition, UML-based tools can provide a suitable solution for any MDWE approach, and they reduce the learning curve because such tools are familiar to development teams, which eases the application of these approaches in enterprise environments [67]. In fact, if the definition of MDE standard languages evolves in the next few years, the use of general UML-based tools for MDWE should become a reality. This idea is being followed by OOHDM, NDT and UWE, which are obtaining suitable and adaptable results.

However, in MDE in general there is an important gap: tools that allow a metamodel and its transformations to be defined and then executed on concrete models in real projects. The research community also needs tools that support the MDE process end to end. To implement an MDWE approach, it is first necessary to have a tool which can represent metamodels and transformations written, for example, in QVT. In this sense, the EMF and ATL environments offer promising results, although MDE also needs a defined concrete syntax to represent its metamodels. For instance, in the case of NDT, metamodels are not used directly by development teams in practice; they use a set of tools, defined in NDT-Suite, that represent each artifact of the approach as a UML artifact extension.

As yet, UML-based tools do not offer the possibility of writing transformations in a standard language. One solution that researchers propose is to write transformations in a standard language and later implement them with programming languages; for instance, WebRatio uses XML and NDT uses Java. However, this suggestion does not seem suitable enough because a change in the transformations implies a manual change in the code that executes them. For now, the limitations of existing tools mean that no other possibilities are available. Some new developments, like Moment or SmartQVT, or the inclusion of MDA transformation languages in UML-based tools, as in the case of Enterprise Architect, offer promising solutions.
### 5. Related Work

Although the analysis in the previous section focused on the main MDWE methodologies, there is other interesting work within this research area. A general conclusion of comparative studies on Web approaches is that similar concepts are used or represented with a different number of models, techniques or artifacts: for instance, navigational classes are presented with different elements in UWE, OOHDM, NDT and W2000. Escalona and Koch [18] show how a metamodel can represent a concept independently of its representation or notation; only concepts are important. They present WebRE, a metamodel for Web requirements that represents the requirements models of W2000, NDT, OOHDM and UWE. In [21], this work is continued by using QVT to obtain analysis models from the metamodel. These papers are interesting since they are completely based on UML and QVT, standards defined by the OMG, although the work can be considered excessively theoretical. This tendency to use metamodels and transformations to make different approaches compatible is applied in recent work under the name MDWEnet [37], an initiative carried out by a representative group of MDWE researchers in an effort to find a common approach which allows the various approaches to be represented and handled. Fernández and Monzón [68] present the possibilities of working with metamodels and tools, and show how a requirements metamodel can easily be defined in IRqA (Integral Requisite Analyzer), a commercial tool that helps in the definition of metamodels for requirements [69]. This paper thus reveals the power of tools supporting metamodels, as they are suitable for any approach defined using metamodels. The work is in fact very practical, although it is not an approach for the Web: the metamodel does not offer specific artifacts to deal with the Web environment, since it only offers an approach for classic requirements treatment [70].
In [71], Meliá and Gómez analyze an approach called WebSA (Web Software Architecture) which provides the designer with a set of architectural and transformation models used to specify a Web application. Although these models only work in the design phase, this approach is very relevant since MDA and QVT are applied in a very exhaustive way.
To conclude, the use of metamodels and MDE are areas of software engineering that are becoming widely accepted as a solution for classic problems in Web engineering.
### 6. Conclusions and Future Work

This paper presents an overview of how classic Web engineering methodologies have evolved to embrace the model-driven paradigm. A brief review of some of the most relevant Web approaches working on the model-driven paradigm was given, and the findings of an analysis of model-driven Web development methodologies were set out.
Although Web engineering is now an established branch of software engineering, this paper argues that there are a number of long-standing problems that could potentially be addressed by using MDWE. One of these is the multiplicity of methodologies which, given the lack of standards, means that Web developers cannot interoperably mix and match the products of different phases of different methodologies. As can be seen in Figure 1, there are many Web development methodologies, as is all the more evident from the compendium of over fifty methods and approaches compiled by Lang & Fitzgerald [13]. Previous studies conclude that many of these methodologies have their own particular strong points [4-7]. Furthermore, other, more recent approaches to Web application development have been proposed that derive from other engineering areas; one example, from Dynamic Interactive Systems (DIS), models and synthesizes fully functional Web-based interactive applications using the incremental, component-wise, correct-by-construction approach named Equivalent Transformation (ET) [72].
We are not arguing that there should be a standard universal Web development approach, but it is important that whatever method or methods chosen by a development team for any given project may be capable of integration. The model-driven paradigm can offer a suitable solution for this problem. As shown in Figure 2, the use of MDWE can help to fuse approaches, thereby benefiting from the respective advantages of each individual method. Work referred to in section 5,
such as common metamodels and WebRE, is offering interesting results along these lines.
Another problem in Web engineering is the lack of tools that offer suitable support for the development environment. As can be deduced from the comparative analysis of OOHDMDA, UWE, OOWS and NDT presented in this paper, MDWE offers a good solution to this problem by means of metamodels and profiles: it is not necessary to define a specific tool for each approach. If a metamodel and a suitable profile are defined, UML-based tools can be used with the approach. Thus, with only the profile definition for NDT, for instance, Enterprise Architect, IBM Rational Rose, ArgoUML and StarUML can all be used to apply the methodology. The availability of suitable tools is one of the most important issues in the enterprise application of this kind of solution. In fact, MDWE approaches offer more empirical experience than classic Web engineering, although much work remains to be done along this line.
However, although results in this area are encouraging, further work is necessary. Profile and metamodel definitions are well supported by these tools, but transformations are not. Some tools, such as Enterprise Architect, define their own MDA language; they should instead be based on standards in order to offer flexibility. Then, if a methodology defined a suitable profile and transformations using these standards, any tool could be used to support development with that methodology. Although there are some partial solutions, such as the use of ATL, the implementation of transformations in Java, or the use of graph transformations, UML-based tools must evolve along this line.
The availability of tools and the possibility of fusing several approaches are bringing MDWE closer to being used in industry-strength “real world” projects with suitable results. MDWE offers important advantages for companies. For instance, transformations and systematic model generation can reduce development time, especially if a tool is used. With MDWE, the knowledge reached in one phase of the life cycle is carried over to the next by means of transformations. Furthermore, with metamodel constraints, traceability can be checked systematically, thereby catching major errors, inconsistencies and mistakes in the first phases of the life cycle.
Despite the fact that MDWE offers suitable results, some important open areas emerged from this survey. The first is “feedback” in the life cycle: if the requirements phase is completed and the analysis is generated from the requirements, how can future changes in the requirements be incorporated into the analysis? The MDWE approaches that we analyzed are working in this direction; NDT, for instance, includes a specific generation method in NDT-Driver to address this problem. In general, however, this remains future work for the approaches included in this survey.
Another line of future work is research oriented towards practical application.
Although MDWE offers suitable aspects to be applied within industry, there have been very few practical applications to date. Research groups must work together towards the common aim of extending MDWE research from academic laboratories into practical settings, trying in the future to apply this approach to designing Web sites and to complement it with other quality website design techniques such as the one described in [73]. Importantly, this should include guidance on the practical limitations of applying MDWE methodologies, such as experience reports on which methodologies work best in different circumstances.
As a final conclusion, MDE is a relatively new paradigm suitable for Web engineering and offers a productive research line for the Web community. However, it is still in its development stages and needs further research to offer more attractive solutions for its application in practice.
Acknowledgements
This research has been supported by the Tempros project (TIN2010-20057-C03-02) and the National Network Quality Assurance in Practise. CaSA (TIN2010-12312-E) of the Ministry of Education and Science, Spain and by the project NDTQ-Framework (TIC-5789) of the Junta de Andalucia, Spain.
REFERENCES

[26] B. Pérez, M. Polo, M. Piatini, Software Product Line Testing - A Systematic Review, 4th International
[58] P. Valderas, V. Pelechano, O. Pastor, A transformational approach to produce Web application prototypes from a Web Requirements Model, International Journal of Web Engineering and Technology, 3(1) (2007) 4-42.
[59] G. Rozenberg, Handbook of Graph Grammars and Computing by Graph Transformations, Volume 1,
basado en MDA, X Workshop Iberoamericano de Ingeniería de Requisitos y Ambientes Software
[61] J. Erickson, K. Siau, Theoretical and Practical Complexity of Unified Modeling Language: Delphi Study
[66] IWT2, Unpublished report, Web Engineering and Early Testing research Group (IWT2), Department of
Jiménez, Interoperability between visual UML design applications and authoring tools for learning
[68] J.L. Fernández, A. Monzón, A Metamodel and a Tool for Software Requirements Management (poster),
systems using the equivalent transformation framework, International Journal of Innovative Computing, Information and Control, 7(7A) (2011) 4067-4081.
[73] H. Kuo, C. Chen, Application of quality function deployment to improve the quality of Internet shopping website interface design, International Journal of Innovative Computing, Information and
Outline of This Part of Chapter 8
• Hardware Reliability Models
• A Safety Model
• A Security Model
• A Real-Time System Model
• Software Reliability Growth Models
Hardware Reliability Models
- Two component Markov reliability model with repair
- Two component Markov model with imperfect fault coverage
- WFS reliability model
Markov Reliability Model With Repair
• Consider the 2-component parallel system (no delay + perfect coverage) but disallow repair from system down state.
• Note that state 0 is now an absorbing state. The state diagram is given in the following figure.
• This reliability model with repair cannot be modeled using a reliability block diagram or a fault tree. We need to resort to Markov chains. (This is a form of dependency since in order to repair a component you need to know the status of the other component).
• Markov chain has an absorbing state. In the steady state, the system will be in state 0 with probability 1. Hence steady-state analysis will yield a trivial answer; transient analysis is of interest. States 1 and 2 are transient states.
Some authors erroneously claim that reliability models do not admit repair.
In the model on previous slide, we have component repair from state 1; system has not failed in this state.
In a reliability model we do not allow repair from system failure states (such as state 0).
Thus, there must be one or more absorbing states in a reliability model.
Markov Reliability Model With Repair (Contd.)
- Assume that the initial state of the Markov chain is 2, that is, \( \pi_2(0) = 1 \), \( \pi_k(0) = 0 \) for \( k = 0, 1 \).
- Then the system of differential equations is written based on:
\[
\text{Rate of buildup} = \text{Rate of flow in} - \text{Rate of flow out}
\]
for each state.
Markov Reliability Model With Repair (Contd.)
\[
\frac{d\pi_2(t)}{dt} = -2\lambda \pi_2(t) + \mu \pi_1(t)
\]
\[
\frac{d\pi_1(t)}{dt} = 2\lambda \pi_2(t) - (\lambda + \mu)\pi_1(t)
\]
\[
\frac{d\pi_0(t)}{dt} = \lambda \pi_1(t)
\]
Using the technique of Laplace transforms, we can reduce the above system to:
\[
\begin{align*}
s\bar{\pi}_2(s) - 1 &= -2\lambda \bar{\pi}_2(s) + \mu \bar{\pi}_1(s) \\
s\bar{\pi}_1(s) &= 2\lambda \bar{\pi}_2(s) - (\lambda + \mu)\bar{\pi}_1(s) \\
s\bar{\pi}_0(s) &= \lambda \bar{\pi}_1(s), \quad \text{where } \bar{\pi}_i(s) = \int_0^\infty e^{-st} \pi_i(t) \, dt
\end{align*}
\]
Solving for $\bar{\pi}_0(s)$, we get:
$$\bar{\pi}_0(s) = \frac{2\lambda^2}{s[s^2 + (3\lambda + \mu)s + 2\lambda^2]}$$
- After an inversion, we can obtain $\pi_0(t)$, the probability that no components are operating at time $t \geq 0$. For this purpose, we carry out a partial fraction expansion.
Markov Reliability Model With Repair (Contd.)
Inverting the transform, we get
\[ R(t) = 1 - \pi_0(t) = \frac{2\lambda^2}{\alpha_1 - \alpha_2} \left( \frac{e^{-\alpha_2 t}}{\alpha_2} - \frac{e^{-\alpha_1 t}}{\alpha_1} \right) \]
where $\alpha_1$ and $\alpha_2$ are the roots of $s^2 + (3\lambda + \mu)s + 2\lambda^2 = 0$:
\[ \alpha_{1,2} = \frac{(3\lambda + \mu) \pm \sqrt{\lambda^2 + 6\lambda\mu + \mu^2}}{2} \]
Recalling that \( MTTF = \int_{0}^{\infty} R(t) \, dt \), we get:
\[
MTTF = \frac{2\lambda^2}{\alpha_1 - \alpha_2} \left[ \frac{1}{\alpha_2^2} - \frac{1}{\alpha_1^2} \right] = \frac{2\lambda^2 (\alpha_1 + \alpha_2)}{\alpha_1^2 \alpha_2^2}
\]
\[
= \frac{2\lambda^2 (3\lambda + \mu)}{(2\lambda^2)^2} = \frac{3}{2\lambda} + \frac{\mu}{2\lambda^2}
\]
• Note that the MTTF of the two component parallel redundant system, in the absence of a repair facility (i.e., $\mu = 0$), would have been equal to the first term, $\frac{3}{(2\lambda)}$, in the above expression.
• Therefore, the effect of a repair facility is to increase the mean life by $\frac{\mu}{2\lambda^2}$, i.e., by a factor
$$\frac{\mu/(2\lambda^2)}{3/(2\lambda)} = \frac{\mu}{3\lambda}$$
relative to the non-repairable case.
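The closed-form $R(t)$ and the two MTTF expressions above can be checked against each other numerically. A minimal sketch in Python with NumPy, using $\lambda = 0.0002$ and $\mu = 1/5$ (the values bound in the SHARPE input of this part, used here purely for illustration):

```python
import numpy as np

# Illustrative rates (same values as bound in the SHARPE input: lambda 0.0002, mu 1/5)
lam, mu = 0.0002, 0.2

# alpha_1, alpha_2 are the roots of s^2 + (3*lam + mu)*s + 2*lam^2 = 0
disc = np.sqrt(lam**2 + 6 * lam * mu + mu**2)
a1 = ((3 * lam + mu) + disc) / 2
a2 = ((3 * lam + mu) - disc) / 2

def R(t):
    # Closed-form reliability R(t) = 1 - pi_0(t)
    return (2 * lam**2 / (a1 - a2)) * (np.exp(-a2 * t) / a2 - np.exp(-a1 * t) / a1)

# MTTF from the root form and from the simplified expression
mttf_roots = 2 * lam**2 * (a1 + a2) / (a1**2 * a2**2)
mttf_simple = 3 / (2 * lam) + mu / (2 * lam**2)
```

Since $\alpha_1\alpha_2 = 2\lambda^2$ and $\alpha_1 + \alpha_2 = 3\lambda + \mu$, both MTTF expressions agree, and $R(0) = 1$ falls out of the root identities.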
Model made in SHARPE GUI
Parameters entered for the Model
![Image of a software interface showing model parameters and outputs]
Sharpe Input file generated by GUI
```
format 8
factor on
markov Rel_Rep(lambda, mu)
2 1 2*lambda
1 0 lambda
1 2 mu
end
* Initial Probabilities defined:
2 init_Rel_Rep_2
1 init_Rel_Rep_1
0 init_Rel_Rep_0
end
* Initial Probabilities assigned:
bind
init_Rel_Rep_2 1
init_Rel_Rep_1 0
init_Rel_Rep_0 0
end
echo ********** Outputs asked for the model: Rel_Rep **********
* Initial Probability: ini1
bind
init_Rel_Rep_2 1
init_Rel_Rep_1 0
init_Rel_Rep_0 0
end
bind lambda 0.0002
bind mu 1/5
func Reliability(t) 1-tvalue(t;Rel_Rep; lambda, mu)
loop t,1,1000,10
expr Reliability(t)
end
bind lambda 0.0002
bind mu 1/5
var MTTAb mean(Rel_Rep, 0; lambda, mu)
expr MTTAb
end
```
Copyright © 2006 by K.S. Trivedi
Output generated by SHARPE GUI
Graph between Reliability and time
Markov Reliability Model With Imperfect Coverage
Markov Model With Imperfect Coverage
• Next consider a modification of the above example, proposed by Arnold as a model of the duplex processors of an electronic switching system.
• Assume that not all faults are recoverable: $c$ is the coverage factor, i.e., the conditional probability that the system recovers given that a fault has occurred.
• The state diagram is now given by the following picture:
Markov Model With Imperfect Coverage (Contd.)
[State diagram transition rates: $2\lambda c$ from state 2 to state 1 (covered failure), $2\lambda(1-c)$ from state 2 to state 0 (uncovered failure), $\mu$ from state 1 to state 2 (repair), and $\lambda$ from state 1 to state 0.]
Markov Model With Imperfect Coverage (Contd.)
- Assume that the initial state is 2 so that:
\[ \pi_2(0) = 1, \quad \pi_0(0) = \pi_1(0) = 0 \]
- Then the system of differential equations is:
\[
\frac{d\pi_2(t)}{dt} = -2\lambda c\,\pi_2(t) - 2\lambda(1-c)\pi_2(t) + \mu\pi_1(t)
\]
\[
\frac{d\pi_1(t)}{dt} = 2\lambda c\,\pi_2(t) - (\lambda + \mu)\pi_1(t)
\]
\[
\frac{d\pi_0(t)}{dt} = 2\lambda(1-c)\pi_2(t) + \lambda\pi_1(t)
\]
Markov Model With Imperfect Coverage (Contd.)
Using Laplace transforms as before, the above system reduces to:
\[ s\bar{\pi}_2(s) - 1 = -2\lambda \bar{\pi}_2(s) + \mu \bar{\pi}_1(s) \]
\[ s\bar{\pi}_1(s) = 2\lambda c\, \bar{\pi}_2(s) - (\lambda + \mu) \bar{\pi}_1(s) \]
\[ s\bar{\pi}_0(s) = \lambda \bar{\pi}_1(s) + 2\lambda (1-c) \bar{\pi}_2(s) \]
• After solving the differential equations we obtain:
\[ R(t) = \pi_2(t) + \pi_1(t) \]
• From \( R(t) \), we can compute the system \( MTTF \):
\[
MTTF = \frac{\lambda (1 + 2c) + \mu}{2\lambda [\lambda + \mu(1-c)]}
\]
• It should be clear that the system \( MTTF \) and system reliability are critically dependent on the coverage factor.
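The coverage sensitivity is easy to see numerically. The sketch below (hypothetical rates) evaluates the closed-form MTTF and cross-checks it against a direct $\tau$-method solve of $\tau Q_B = -\pi(0)$ over the transient states $\{2, 1\}$:

```python
import numpy as np

lam, mu = 1e-4, 1.0  # hypothetical failure and repair rates

def mttf_closed(c):
    # MTTF = (lam*(1 + 2c) + mu) / (2*lam*(lam + mu*(1 - c)))
    return (lam * (1 + 2 * c) + mu) / (2 * lam * (lam + mu * (1 - c)))

def mttf_tau(c):
    # tau-method: solve tau @ Q_B = -pi(0), with pi_2(0) = 1
    QB = np.array([[-2 * lam, 2 * lam * c],
                   [mu, -(lam + mu)]])
    tau = np.linalg.solve(QB.T, -np.array([1.0, 0.0]))
    return tau.sum()
```

With these rates, raising coverage from 0.9 to 1.0 increases the MTTF by roughly three orders of magnitude, which is the point of the slide.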
Model made in SHARPE GUI
Graph between $R(t)$ and time
Markov Reliability Model with Repair (WFS Example)
Markov Reliability Model With Repair (WFS Example)
- WFS: Workstation File System
- Assume that the computer system does not recover if both workstations fail, or if the file-server fails.
Markov Reliability Model With Repair
- States (0,1), (1,0) and (2,0) become absorbing states while (2,1) and (1,1) are transient states.
- **Note:** we have made a simplification that, once the CTMC reaches a system failure state, we do not allow any more transitions.
Markov Reliability Model With Repair (Contd.)
• If we solve for $\pi_{2,1}(t)$ and $\pi_{1,1}(t)$ then
\[
R(t) = \pi_{2,1}(t) + \pi_{1,1}(t)
\]
• For a Markov chain with absorbing states:
$A$: the set of absorbing states
$B = \Omega - A$: the set of remaining states
$\tau_{i,j}$: Mean time spent in state $i,j$ until absorption
\[
\tau_{i,j} = \int_{0}^{\infty} \pi_{i,j}(x) \, dx \quad , \quad (i, j) \in B
\]
$\tau Q_B = -\pi_B(0)$
Markov Reliability Model With Repair (Contd.)
• $Q_B$ derived from $Q$ by restricting it to only states in $B$
• Mean time to absorption $MTTA$ is given as:
$$MTTA = \sum_{(i,j) \in B} \tau_{i,j}$$
Markov Reliability Model With Repair (Contd.)
\[ Q_B = \begin{bmatrix} - ( \lambda_f + 2 \lambda_w ) & 2 \lambda_w \\ \mu_w & - ( \mu_w + \lambda_f + \lambda_w ) \end{bmatrix} \]
First solve
\[
\frac{d \pi_{2,1}(t)}{dt} = - (2 \lambda_w + \lambda_f) \pi_{2,1}(t) + \mu_w \pi_{1,1}(t)
\]
\[
\frac{d \pi_{1,1}(t)}{dt} = - (\mu_w + \lambda_f + \lambda_w) \pi_{1,1}(t) + 2 \lambda_w \pi_{2,1}(t)
\]
Markov Reliability Model With Repair (Contd.)
Then: \[ R(t) = \pi_{2,1}(t) + \pi_{1,1}(t) \]
next solve
\[ -(\lambda_f + 2\lambda_w)\,\tau_{2,1} + \mu_w \tau_{1,1} = -1 \]
\[ 2\lambda_w \tau_{2,1} - (\mu_w + \lambda_f + \lambda_w)\,\tau_{1,1} = 0 \]
Then: \[ MTTF = \tau_{2,1} + \tau_{1,1} \]
- Mean time to failure is 19992 hours (input values refer to Part 2 of Chapter 8).
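The $\tau$-method above is just a small linear solve. A sketch using the rates bound in this part's SHARPE input (an illustrative choice; the 19992-hour figure on the slide corresponds to the Part 2 input values). With these rates the solve gives about 9982 hours:

```python
import numpy as np

# Rates as bound in the SHARPE input of this part (illustrative values)
lamW, lamF, muW = 0.0003, 0.0001, 1.0

# Transient states ordered (2,1), (1,1); generator restricted to B
QB = np.array([[-(lamF + 2 * lamW), 2 * lamW],
               [muW, -(muW + lamF + lamW)]])

# Solve tau @ Q_B = -pi_B(0), with pi_{2,1}(0) = 1
tau = np.linalg.solve(QB.T, -np.array([1.0, 0.0]))
mtta = tau.sum()   # MTTA = MTTF for this reliability model
```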
Model made in SHARPE GUI
Parameters assigned and output asked
SHARPE (textual) input file
```
format 8
factor on
markov repair(lamW, lamF, muW)
2_1 1_1 2*lamW
2_1 2_0 lamF
1_1 0_1 lamW
1_1 1_0 lamF
1_1 2_1 muW
end
* Initial Probabilities defined:
2_1 init_repair_2_1
1_1 init_repair_1_1
0_1 init_repair_0_1
2_0 init_repair_2_0
1_0 init_repair_1_0
end
* Initial Probabilities assigned:
bind
init_repair_2_1 1
init_repair_1_1 0
init_repair_0_1 0
init_repair_2_0 0
init_repair_1_0 0
end
```
Output asked for the model: repair
```
echo ********** Outputs asked for the model: repair **********
* Initial Probability: config1
bind
init_repair_1_0 0
init_repair_0_1 0
init_repair_2_1 1
init_repair_2_0 0
init_repair_1_1 0
end
bind lamW 0.0003
bind lamF 0.0001
bind muW 1
var MTTAb mean(repair; lamW, lamF, muW)
echo Mean time to absorption for repair
expr MTTAb
bind lamW 0.0003
bind lamF 0.0001
bind muW 1
func Reliability(t) 1-tvalue(t;repair; lamW, lamF, muW)
loop t,1,1000,100
expr Reliability(t)
end
```
Output generated by SHARPE GUI
Graph between $R(t)$ and time
Markov Reliability Model
Without Repair
Markov Reliability Model without Repair: Case 1
(Contd.)
States (0,1), (1,0) and (2,0) become absorbing states
Model made in SHARPE GUI
Parameters assigned and Output asked
Output generated by SHARPE GUI
Overlapped graph $R(t)$ for with and without repair
Markov Reliability Model without Repair: Case 1
(Contd.)
\[ Q_B = \begin{bmatrix} - (\lambda_f + 2\lambda_w) & 2\lambda_w \\ 0 & - (\lambda_f + \lambda_w) \end{bmatrix} \]
\[ R(t) = \pi_{2,1}(t) + \pi_{1,1}(t) \]
\[ MTTF = \tau_{2,1} + \tau_{1,1} \]
- Mean time to failure is 9333 hours (see Part2 of Chapter 8).
3 Active Units and One Spare
3 Active Units and One Spare
- Consider a system with three active units and one spare. The active configuration is operated in TMR (Triple Modular Redundancy) mode. An active unit has a failure rate $\lambda$, while a standby spare unit has a failure rate $\mu$.

3 Active Units and One Spare (Contd.)
• Differential equations for this CTMC are written as follows:
\[
\frac{d\pi_{3,1}}{dt} = -(3\lambda + \mu)\pi_{3,1}(t),
\]
\[
\frac{d\pi_{3,0}}{dt} = -3\lambda \pi_{3,0}(t) + (3\lambda c + \mu)\pi_{3,1}(t),
\]
\[
\frac{d\pi_{2,0}}{dt} = -2\lambda \pi_{2,0}(t) + 3\lambda \pi_{3,0}(t),
\]
\[
\frac{d\pi_F}{dt} = 3\lambda (1 - c)\pi_{3,1}(t) + 2\lambda \pi_{2,0}(t),
\]
Solving this system of equations, we get
\[
\bar{\pi}_{3,1}(s) = \frac{1}{s + 3\lambda + \mu},
\]
\[
\bar{\pi}_{3,0}(s) = \frac{3\lambda c + \mu}{(s + 3\lambda + \mu)(s + 3\lambda)},
\]
\[
\bar{\pi}_{2,0}(s) = \frac{3\lambda (3\lambda c + \mu)}{(s + 3\lambda + \mu)(s + 3\lambda)(s + 2\lambda)},
\]
3 Active Units and One Spare (Contd.)
and
$$s\bar{\pi}_F(s) = \frac{3\lambda(1 - c)}{s + 3\lambda + \mu} + \frac{6\lambda^2(3\lambda c + \mu)}{(s + 2\lambda)(s + 3\lambda)(s + 3\lambda + \mu)}.$$
- So the lifetime distribution becomes
$$\bar{f}_X(s) = \frac{3\lambda + \mu}{s + 3\lambda + \mu} \left[ \frac{3\lambda(1 - c)}{3\lambda + \mu} + \frac{3\lambda c + \mu}{3\lambda + \mu} \left\{ \frac{2\lambda}{s + 2\lambda} \cdot \frac{3\lambda}{s + 3\lambda} \right\} \right]$$
- The expression outside the square brackets is the Laplace–Stieltjes transform of EXP($3\lambda + \mu$), while the expression within the braces is the LST of HYPO($2\lambda, 3\lambda$).
3 Active Units and One Spare (contd.)
- Therefore, the system lifetime $X$ has the stage-type distribution given as in this figure.
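The stage-type reading can be validated on the mean: the $\tau$-method MTTF must equal the mean of the initial EXP($3\lambda + \mu$) stage plus, with probability $p = (3\lambda c + \mu)/(3\lambda + \mu)$, the mean of the HYPO($2\lambda, 3\lambda$) stage. A sketch with hypothetical parameter values:

```python
import numpy as np

lam, mu, c = 1e-3, 5e-4, 0.95   # hypothetical active rate, spare rate, coverage

# tau-method over the transient states (3,1), (3,0), (2,0)
QB = np.array([[-(3 * lam + mu), 3 * lam * c + mu, 0.0],
               [0.0, -3 * lam, 3 * lam],
               [0.0, 0.0, -2 * lam]])
tau = np.linalg.solve(QB.T, -np.array([1.0, 0.0, 0.0]))
mttf_tau = tau.sum()

# Stage-type reading: always an EXP(3*lam + mu) stage; with probability p the
# system survives the first event and then adds a HYPO(3*lam, 2*lam) stage
p = (3 * lam * c + mu) / (3 * lam + mu)
mttf_stage = 1 / (3 * lam + mu) + p * (1 / (3 * lam) + 1 / (2 * lam))
```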
Model made in SHARPE GUI
Parameter assigned and output asked
Output generated by SHARPE GUI
Graph between $R(t)$ and time
Operational Security
Operational Security
• Assume that at each newly visited node of the privilege graph, the attacker chooses one of the elementary attacks that can be issued from that node only (memoryless property). Assigning to each arc the rate at which the attacker succeeds with the corresponding elementary attack then transforms the privilege graph into a CTMC.
Operational Security (Contd.)
- The matrix $\hat{Q}$, obtained from the generator matrix $Q$ by restricting it to the transient states, is
$$
\hat{Q} = \begin{bmatrix}
-(\lambda_1 + \lambda_3) & \lambda_1 & \lambda_3 \\
0 & -\lambda_2 & 0 \\
0 & 0 & -\lambda_4
\end{bmatrix}
$$
- From this it follows that METF (Mean Effort To Failure) becomes
$$
\text{METF} = \sum_{i \in \{A,B,C\}} \tau_i = \frac{1}{\lambda_1 + \lambda_3} \left(1 + \frac{\lambda_1}{\lambda_2} + \frac{\lambda_3}{\lambda_4}\right).
$$
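The closed-form METF can be cross-checked with a $\tau$-method solve on $\hat{Q}$. A sketch with hypothetical elementary-attack success rates:

```python
import numpy as np

# Hypothetical elementary-attack success rates
l1, l2, l3, l4 = 0.5, 0.2, 0.1, 0.4

Qhat = np.array([[-(l1 + l3), l1, l3],
                 [0.0, -l2, 0.0],
                 [0.0, 0.0, -l4]])

# Mean effort spent in each transient state until absorption
tau = np.linalg.solve(Qhat.T, -np.array([1.0, 0.0, 0.0]))
metf_tau = tau.sum()

# Closed-form METF from the slide
metf_closed = (1 + l1 / l2 + l3 / l4) / (l1 + l3)
```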
Recovery Block Architecture
Recovery Block Architecture
- Consider a recovery block (RB) architecture implemented on a dual processor system that is able to tolerate one hardware fault and one software fault.
- The hardware faults can be tolerated due to the hot standby hardware component with a duplication of the RB software and a concurrent comparator for acceptance tests.
Recovery Block Architecture (Contd.)
• The transition rates and their meanings are given in the table
<table>
<thead>
<tr>
<th>Transition rate</th>
<th>Value</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>$\lambda_{21}$</td>
<td>$2c\lambda_H$</td>
<td>covered hardware component failure</td>
</tr>
<tr>
<td>$\lambda_{23}$</td>
<td>$2\overline{c}\lambda_H + \lambda_{SD}$</td>
<td>Not covered hardware component failure or detected RB failure</td>
</tr>
<tr>
<td>$\lambda_{24}$</td>
<td>$\lambda_{SU}$</td>
<td>undetected RB failure</td>
</tr>
<tr>
<td>$\lambda_{13}$</td>
<td>$c\lambda_H + \lambda_{SD}$</td>
<td>detected RB failure or covered hardware component failure</td>
</tr>
<tr>
<td>$\lambda_{14}$</td>
<td>$\overline{c}\lambda_H + \lambda_{SU}$</td>
<td>Not covered hardware component failure or undetected RB failure</td>
</tr>
</tbody>
</table>
Recovery Block Architecture (Contd.)
- The system of differential equation is given by
\[
\begin{align*}
\frac{d\pi_2(t)}{dt} &= -(\lambda_{21} + \lambda_{23} + \lambda_{24})\pi_2(t), \\
\frac{d\pi_1(t)}{dt} &= -(\lambda_{13} + \lambda_{14})\pi_1(t) + \lambda_{21}\pi_2(t), \\
\frac{d\pi_{SF}(t)}{dt} &= \lambda_{23}\pi_2(t) + \lambda_{13}\pi_1(t), \\
\frac{d\pi_{UF}(t)}{dt} &= \lambda_{24}\pi_2(t) + \lambda_{14}\pi_1(t),
\end{align*}
\]
- Thus reliability of system becomes
\[
R(t) = \pi_2(t) + \pi_1(t) \\
= 2ce^{-(\lambda_H + \lambda_S)t} - (2c - 1)e^{-(2\lambda_H + \lambda_S)t}
\]
where \(\lambda_S = \lambda_{SD} + \lambda_{SU}\)
Recovery Block Architecture (Contd.)
- Similarly, the absorption probability to the safe failure state is:
\[
P_{SF} = \pi_{SF}(\infty)
= \frac{2\overline{c}\lambda_H + \lambda_{SD}}{2\lambda_H + \lambda_S} + \frac{2c\lambda_H(c\lambda_H + \lambda_{SD})}{(2\lambda_H + \lambda_S)(\lambda_H + \lambda_S)}
\]
- And the absorption probability to the unsafe failure state is:
\[
P_{UF} = \pi_{UF}(\infty)
= \frac{\lambda_{SU}}{2\lambda_H + \lambda_S} + \frac{2c\lambda_H(\overline{c}\lambda_H + \lambda_{SU})}{(2\lambda_H + \lambda_S)(\lambda_H + \lambda_S)}
\]
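A quick numerical check of the two absorption probabilities (hypothetical rates): they must sum to 1, and they must agree with the generic absorbing-chain formula $B = (-Q_{TT})^{-1} Q_{TA}$:

```python
import numpy as np

# Hypothetical rates and coverage
lamH, lamSD, lamSU, c = 1e-4, 2e-5, 1e-5, 0.95
cb = 1 - c
lamS = lamSD + lamSU

# Closed-form absorption probabilities from the slide
P_SF = (2 * cb * lamH + lamSD) / (2 * lamH + lamS) \
    + 2 * c * lamH * (c * lamH + lamSD) / ((2 * lamH + lamS) * (lamH + lamS))
P_UF = lamSU / (2 * lamH + lamS) \
    + 2 * c * lamH * (cb * lamH + lamSU) / ((2 * lamH + lamS) * (lamH + lamS))

# Cross-check: B = (-Q_TT)^(-1) Q_TA, transient states ordered (2, 1),
# absorbing columns ordered (SF, UF)
QTT = np.array([[-(2 * lamH + lamS), 2 * c * lamH],
                [0.0, -(lamH + lamS)]])
QTA = np.array([[2 * cb * lamH + lamSD, lamSU],
                [c * lamH + lamSD, cb * lamH + lamSU]])
B = np.linalg.solve(-QTT, QTA)   # row 0: starting in state 2
```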
Model made in SHARPE GUI
Parameter assigned and Output asked
SHARPE Input file
```
format 8
factor on
markov Recovery_b_Archi(lam21, lam13, lam14, lam24, lam23)
2 1 lam21
2 UF lam24
2 SF lam23
1 SF lam13
1 UF lam14
end
* Initial Probabilities defined:
2 init_Recovery_b_Archi_2
1 init_Recovery_b_Archi_1
SF init_Recovery_b_Archi_SF
UF init_Recovery_b_Archi_UF
end
* Initial Probabilities assigned:
bind
init_Recovery_b_Archi_2 0
init_Recovery_b_Archi_1 0
init_Recovery_b_Archi_SF 0
init_Recovery_b_Archi_UF 0
end
* Initial Probability: ini
bind
init_Recovery_b_Archi_UF 0
init_Recovery_b_Archi_2 1
init_Recovery_b_Archi_1 0
init_Recovery_b_Archi_SF 0
end
bind lam21 0.00007
bind lam13 0.00015
bind lam14 0.00012
bind lam24 0.00007
bind lam23 0.0001
func Reliability(t) 1-tvalue(t;Recovery_b_Archi; lam21, lam13, lam14, lam24, lam23)
loop t,1,1000,100
expr Reliability(t)
end
bind lam21 0.00007
bind lam13 0.00015
bind lam14 0.00012
bind lam24 0.00007
bind lam23 0.0001
var MTTAb mean(Recovery_b_Archi, UF; lam21, lam13, lam14, lam24, lam23)
expr MTTAb
end
```
Output generated by SHARPE GUI
Plot between $R(t)$ and time
Conditional MTTF of a Fault-Tolerant System
Conditional MTTF of a Fault-Tolerant System
- Consider the homogeneous CTMC models of three commonly used fault-tolerant system architectures.
- The simplex system $S$ consists of a single processor.
- The Duplex system (D) consists of two identical processors executing the same task in parallel.
- The Duplex system reconfigurable to the simplex system (DS) also consists of two processors executing the same task in parallel.
Conditional MTTF of a Fault-Tolerant System (Contd.)
• We compare the three architectures with respect to the probability of unsafe failure, the mean time to failure (MTTF) of the system and the conditional MTTF to unsafe failure.
• For calculating the conditional MTTF, the $Q$ matrix is partitioned as
$$Q = \begin{bmatrix}
Q_{TT} & Q_{TA} & Q_{TB} \\
0_{1 \times |T|} & 0 & 0_{1 \times |B|} \\
0_{|B| \times |T|} & 0_{|B| \times 1} & 0_{|B| \times |B|}
\end{bmatrix}$$
• Here $Q_{TT}$ is the partition of the generator matrix consisting of the states in $T$, $Q_{TA}$ has the transition rates from states in $T$ to states in $A$ and similarly $Q_{TB}$ has the transition rates from states in $T$ to states in $B$.
Conditional MTTF of a Fault-Tolerant System (Contd.)
- Solving for the three architectures for different parameters we have
<table>
<thead>
<tr>
<th>Measures</th>
<th>Architecture S</th>
<th>Architecture D</th>
<th>Architecture DS</th>
</tr>
</thead>
<tbody>
<tr>
<td>MTTF</td>
<td>$\frac{1}{\lambda}$</td>
<td>$\frac{1}{2\lambda}$</td>
<td>$\frac{1}{2\lambda} + \frac{c_{ds}}{\lambda}$</td>
</tr>
<tr>
<td>$\pi_{UF}(\infty)$</td>
<td>$1 - c_s$</td>
<td>$1 - c_d$</td>
<td>$1 - c_s c_{ds}$</td>
</tr>
<tr>
<td>MTTF$_{UF}$</td>
<td>$\frac{1}{\lambda}$</td>
<td>$\frac{1}{2\lambda}$</td>
<td>$\frac{1 + 2c_{ds} - 3c_s c_{ds}}{2\lambda(1 - c_s c_{ds})}$</td>
</tr>
</tbody>
</table>
*Dependability measures for the three architectures*
Real Time System: Multiprocessor Revisited
Multiprocessor Revisited
• We return to the multiprocessor model discussed earlier, but we now consider the system failure state ‘0’ as absorbing.
• Since task arrivals occur at the rate $\lambda$ and task service time is $\text{EXP}(\mu)$, when the reliability model is in state 2, the performance can be modeled by an $M/M/2/b$ queue.
Multiprocessor Revisited (Contd.)
- We make the following reward rate assignment to the states (soft deadline case):
\[ r_2 = \lambda [1 - q_b(2)] [P(R_b(2) \leq d)] , \]
\[ r_1 = \lambda [1 - q_b(1)] [P(R_b(1) \leq d)] \]
\[ r_0 = 0. \]
- With this reward assignment, computing the expected accumulated reward until absorption, we can obtain the approximate number of tasks successfully completed until system failure:
\[ E[Y(\infty)] = r_2 \tau_2 + r_1 \tau_1 \]
where \( \tau_2 \) and \( \tau_1 \) are given by equation (8.116) given in the textbook.
Multiprocessor Revisited (Contd.)
• Now we consider a hard deadline instead of a soft deadline: if an accepted job fails to complete within the deadline, the system is considered to have failed.
• Note that we have considered the infinite-buffer case for simplicity.
• Using the $\tau$ method, we can compute the values of $\tau_2$ and $\tau_1$ for the CTMC and the system MTTF that includes the effect of dynamic failures.
NHCTMC Model of the Duplex System
NHCTMC Model of the Duplex System
• Consider a *duplex system* with two processors, each of which has a time-dependent failure rate \( \lambda(t) = \lambda_0 \alpha t^{\alpha-1} \).

• The system shown is a non-homogeneous CTMC, because, as its name suggests, it contains one or more (globally) time-dependent transition rates.
NHCTMC Model of the Duplex System (Contd.)
- The transient behavior of a NHCTMC satisfies the linear system of first order differential equations:
\[
\frac{d\pi(t)}{dt} = \pi(t)Q(t), \quad \text{with } \pi_2(0) = 1.
\]
- The Q matrix becomes
\[
Q(t) = \begin{bmatrix}
-\lambda(t) & 0 & \lambda(t)c_1 & \lambda(t)(1 - c_1) \\
2\lambda(t)c_2 & -2\lambda(t) & 0 & 2\lambda(t)(1 - c_2) \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
= \lambda(t)W,
\]
where
\[
W = \begin{bmatrix}
-1 & 0 & c_1 & 1 - c_1 \\
2c_2 & -2 & 0 & 2(1 - c_2) \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}.
\]
NHCTMC Model of the Duplex System (Contd.)
• When the NHCTMC generator matrix can be factored in this way, the equations can be solved easily.
• Hence we can define an average failure rate:
\[ \bar{\lambda} = \frac{1}{t} \int_{0}^{t} \lambda(\tau) d\tau, \]
and get the solution to the NHCTMC by solving a homogeneous CTMC with the generator matrix:
\[ \bar{Q} = W \bar{\lambda}. \]
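The factorization trick can be exercised numerically: since $\bar{\lambda} t = \int_0^t \lambda(u)\,du = \Lambda(t)$, the solution is $\pi(t) = \pi(0)\,e^{W\Lambda(t)}$. The sketch below (hypothetical coverage values; NumPy only, with a plain Taylor-series matrix exponential) compares this against a direct RK4 integration of the time-varying system:

```python
import numpy as np

lam0, alpha = 1e-3, 1.5      # Weibull-type rate: lambda(t) = lam0*alpha*t**(alpha-1)
c1, c2 = 0.9, 0.95           # hypothetical coverage factors

# Constant factor W of Q(t) = lambda(t) * W, in the slide's state order
W = np.array([[-1.0, 0.0, c1, 1 - c1],
              [2 * c2, -2.0, 0.0, 2 * (1 - c2)],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])

def expm_taylor(M, terms=80):
    # Plain Taylor-series matrix exponential (adequate for small norm(M))
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Because Q(t) factors as lambda(t)*W, pi(t) = pi(0) @ expm(W * Lambda(t)),
# where Lambda(t) = integral_0^t lambda(u) du = lam0 * t**alpha
t = 100.0
Lam = lam0 * t**alpha
pi0 = np.array([0.0, 1.0, 0.0, 0.0])   # start in state 2 (both processors up)
pi_exact = pi0 @ expm_taylor(W * Lam)

# Cross-check: RK4 integration of the time-varying system dpi/dt = lambda(t)*pi@W
rate = lambda u: lam0 * alpha * u ** (alpha - 1)
pi, h = pi0.copy(), 0.01
for i in range(int(t / h)):
    s = i * h
    k1 = rate(s) * (pi @ W)
    k2 = rate(s + h / 2) * ((pi + h / 2 * k1) @ W)
    k3 = rate(s + h / 2) * ((pi + h / 2 * k2) @ W)
    k4 = rate(s + h) * ((pi + h * k3) @ W)
    pi = pi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```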
Software Reliability Growth Models
Software Reliability Growth Models
• Failure data is collected during testing
• Calibrate a reliability growth model using failure data; this model is then used for prediction
• Many SRGMs exist
– NHPP
– Jelinski Moranda
• We revisit the above models which we studied in Chapter 5, studying them now as examples of CTMCs.
Poisson Process
• The Poisson process, \( \{N(t) \mid t \geq 0\} \), is a homogeneous CTMC (pure birth type) with state diagram shown below.
• Since its failure intensity is time-independent, it cannot capture reliability growth. Hence we resort to an NHPP.
Example – Software Reliability Growth Model (NHPP)
• Consider a nonhomogeneous Poisson process (NHPP), proposed by Goel and Okumoto as a model of software reliability growth during the testing phase. Note that the Markov property is satisfied and it is an example of a non-homogeneous CTMC.
• Assume that the number of failures \( N(t) \) occurring in time interval \((0, t]\) has a time-dependent failure intensity \( \lambda(t) \).
• Expected number of software failures experienced (and equated to the number of faults found and fixed) by time \( t \):
\[
m(t) = E[N(t)] = \int_0^t \lambda(x)dx
\]
Software Reliability Growth Model
Finite failure NHPP models
- Finite expected number of faults detected, $a$, in an infinite interval
- Expected number of faults detected by time $t$, or mean value function, denoted by $m(t) = ap = a F(t)$, where $p = F(t)$ is the probability that a given fault is detected by time $t$
- Failure intensity of the software, denoted by $\lambda(t)$: $\lambda(t) = \frac{dm(t)}{dt}$
- Failure intensity function can also be written as
\[ \lambda(t) = af(t) = [a - m(t)] h(t) \]
- $h(t)$ $\rightarrow$ failure occurrence rate per fault (hazard function)
- $[a - m(t)]$ $\rightarrow$ expected number of faults remaining, non-increasing function of time
- Nature of failure intensity depends on the nature of failure occurrence rate per fault
Example – Software Reliability Growth Model (NHPP) (Contd.)
• Using the previous equation, the instantaneous failure intensity can be rewritten as
\[
\lambda(t) = af(t) = [a - m(t)]h(t)
\]
• This implies that the failure intensity is proportional to the expected number of undetected faults at time $t$
• Many commonly used NHPP software reliability growth models are obtained by choosing different failure intensities \( \lambda(t) \), e.g. Goel-Okumoto, Musa-Okumoto model etc.
Software Reliability Growth Model
*Finite failure NHPP models*
- Nature of the failure occurrence rate per fault and the corresponding NHPP model
- Constant:
- Goel-Okumoto model
\[ h(t) = b \]
- Increasing:
- S-shaped model
\[ h(t) = \frac{g^2t}{1+gt} \]
- Generalized Goel-Okumoto model
\[ h(t) = bct^{c-1}, \quad c > 1 \]
- Decreasing:
- Generalized Goel-Okumoto model
\[ h(t) = bct^{c-1}, \quad c < 1 \]
- Increasing/Decreasing:
- Log-logistic model
\[ h(t) = \frac{\lambda \kappa (\lambda t)^{\kappa-1}}{1+(\lambda t)^\kappa}, \quad \kappa > 1 \]
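For the constant-rate case these relations are easy to verify numerically. A sketch for the Goel-Okumoto model with hypothetical $a$ and $b$, using $m(t) = a(1 - e^{-bt})$:

```python
import math

a, b = 120.0, 0.05   # hypothetical: expected total faults a, per-fault rate b

def m(t):
    # Goel-Okumoto mean value function: m(t) = a*(1 - exp(-b*t))
    return a * (1 - math.exp(-b * t))

def intensity(t):
    # Failure intensity lambda(t) = dm/dt = a*b*exp(-b*t)
    return a * b * math.exp(-b * t)
```

The identity $\lambda(t) = [a - m(t)]\,h(t)$ holds here with the constant hazard $h(t) = b$, and $m(t) \to a$ as $t \to \infty$ (finite-failure model).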
Example- Jelinski Moranda Model
• This model is based on the following assumptions:
– The number of faults introduced initially into the software is fixed, say, $n$.
– At each failure occurrence, the underlying fault is removed immediately and no new faults are introduced.
– Failure rate is state-dependent and is proportional to the number of remaining faults, that is, $\mu_i = i\mu$, $i = 1, 2, \ldots n$.
• Model can be described by pure death process
• The constant of proportionality $\mu$ denotes the failure intensity contributed by each fault, which means that all the remaining faults contribute the same amount to the failure intensity.
Example- Jelinski Moranda Model (Contd.)
- The mean-value function is given by
\[ m(t) = \sum_{k=0}^{n} k\pi_{n-k}(t) = n(1 - e^{-\mu t}) \]
- This can be seen as the expected reward rate at time \( t \) after assigning reward rate \( r_i = n-i \) to state \( i \).
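The closed form can also be checked by simulation: since the death rate in state $i$ is $i\mu$, the faults behave as independent EXP($\mu$) removals, so the count detected by $t$ is Binomial($n$, $1 - e^{-\mu t}$). A sketch with hypothetical $n$ and $\mu$ and a fixed seed:

```python
import numpy as np

n, mu = 50, 0.05   # hypothetical: initial fault count and per-fault rate
t = 20.0

# Each fault's detection time is i.i.d. EXP(mu); count those detected by t
rng = np.random.default_rng(0)
reps = 4000
detect_times = rng.exponential(1 / mu, size=(reps, n))
sim_mean = (detect_times <= t).sum(axis=1).mean()

m_t = n * (1 - np.exp(-mu * t))   # closed-form mean value function
```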
|
olmocr_science_pdfs
|
2024-11-28
|
2024-11-28
|
852e38cbb05f3b7120ab688cda27a830b1aa3921
|
Choosing Well Your Opponents: How to Guide the Synthesis of Programmatic Strategies
Rubens O. Moraes\textsuperscript{1,2}, David S. Aleixo\textsuperscript{1}, Lucas N. Ferreira\textsuperscript{2} and Levi H. S. Lelis\textsuperscript{2}
\textsuperscript{1} Departamento de Informática, Universidade Federal de Viçosa, Brazil
\textsuperscript{2} Department of Computing Science, University of Alberta, Canada
Alberta Machine Intelligence Institute (Amii)
rubens.moraes@ufv.br, david.aleixo@ufv.br, inferrei@ualberta.ca, levi.lelis@ualberta.ca
Abstract
This paper introduces Local Learner (2L), an algorithm for providing a set of reference strategies to guide the search for programmatic strategies in two-player zero-sum games. Previous learning algorithms, such as Iterated Best Response (IBR), Fictitious Play (FP), and Double-Oracle (DO), can be computationally expensive or miss important information for guiding search algorithms. 2L actively selects a set of reference strategies to improve the search signal. We empirically demonstrate the advantages of our approach while guiding a local search algorithm for synthesizing strategies in three games, including MicroRTS, a challenging real-time strategy game. Results show that 2L learns reference strategies that provide a stronger search signal than IBR, FP, and DO. We also simulate a tournament of MicroRTS, where a synthesizer using 2L outperformed the winners of the two latest MicroRTS competitions, which were programmatic strategies written by human programmers.
1 Introduction
Programmatic strategies encode game strategies in human-understandable programs. Such programmatic encoding allows domain experts to interpret and modify computer-generated strategies, which can be valuable depending on the application domain (e.g., the games industry). Previous works have used Iterated Best Response (IBR) [Lanctot et al., 2017] as the learning algorithm for synthesizing programmatic strategies [Mariño et al., 2021]. Given a game, IBR starts with an arbitrary strategy for playing the game and it approximates a best response to it; in the next iteration, it approximates a best response to the best response. This process is repeated a number of iterations and the programmatic strategy synthesized in the last iteration is returned.
The computation of the best responses in the IBR loop is performed by searching in the programmatic space defined by a domain-specific language. Given a target strategy, the algorithm searches for a program encoding a best response to it. Previous work used local search algorithms for searching in the programmatic space [Mariño et al., 2021; Medeiros et al., 2022; Aleixo and Lelis, 2023]. The target strategy IBR provides serves as a guiding function. In the context of local search, when considering the neighbors of a candidate solution, local search algorithms prefer to accept a program that achieves a higher utility value against the target strategy. Since IBR considers a single strategy as target, the search signal is often weak. This is because the neighbors of a candidate solution that performs poorly against the target strategy are also likely to perform poorly against it—small changes to a losing program will also generate a losing program. Moreover, IBR can loop around the strategy space in games with dynamics similar to Rock, Paper, and Scissors, without making progress toward strong solutions.
In this paper, we adapt Fictitious Play (FP) [Brown, 1951] and Double Oracle (DO) [McMahan et al., 2003] to the context of programmatic strategies. FP and DO have been used in the context of neural strategies to overcome some of the weaknesses of IBR [Lanctot et al., 2017]. Despite providing a better search signal than IBR, we show that FP and DO can still fail to provide relevant information for the search. We then introduce a novel learning algorithm, Local Learner (2L), that is designed specifically for guiding local search algorithms in the synthesis of programmatic strategies. 2L uses information gathered while computing best responses to decide the set of target strategies to be used in future iterations of the algorithm as a means of optimizing the search signal.
We evaluate 2L on three two-player zero-sum games: MicroRTS [Ontañón et al., 2018], Poachers & Rangers, and Climbing Monkeys. Results show that 2L synthesized strategies that are never worse and often far superior to strategies synthesized with IBR, FP, and DO in all three domains. We also performed a simulated competition of MicroRTS with strategies synthesized with 2L, IBR, FP, DO, as well as the programmatic strategies that won the last two MicroRTS competitions, which were written by programmers. 2L obtained the highest average winning rate in our tournament.
2 Problem Definition
We consider the synthesis of programmatic strategies assuming zero-sum two-player games \( G = (P, S, s_{\text{init}}, A, T, U) \). Let \( P = \{i, -i\} \) be the pair of players; \( S \) be the set of states, with \( s_{\text{init}} \) in \( S \) being the initial state. Each player \( i \) can perform an action from a legal set of actions \( A_i(s) \) in \( A \) for a given state \( s \). The action of each player is given by a strategy, which is a function \( \sigma_i \) that receives a state \( s \) in \( S \) and returns
an action in $A_i$ for $s$. A transition function $T$ receives a state and an action for each player and deterministically returns the next state of the game, which could be a terminal state, where the utility of each player is determined. The utility function $U$ returns the value of the game at a given state (terminal or not). For $s$, the value of the game is denoted by $U(s, \sigma_i, \sigma_{-i})$ when player $i$ follows the strategy $\sigma_i$ and player $-i$, $\sigma_{-i}$. Considering that the game $G$ is zero-sum, the utility function for $-i$ is $-U(s, \sigma_i, \sigma_{-i})$. In this paper, we encode strategies for $G$ as programs written in a domain-specific language (DSL).
A DSL can be defined as a context-free grammar $(M, \Omega, R, S)$, where $M$, $\Omega$, $R$, and $S$ are the sets of non-terminals, terminals, relations defining the production rules of the grammar, and the grammar’s initial symbol, respectively. Figure 1 (right) shows an example of a DSL, where $M = \{S, C, B\}$, $\Omega = \{c_1, c_2, b_1, b_2, \text{if}, \text{then}\}$, $R$ are the production rules (e.g., $C \rightarrow c_1$), and $S$ is the initial symbol.

The DSL in Figure 1 allows programs with a single command (e.g., $c_1$ or $c_2$) and programs with branching. We represent programs as abstract syntax trees (AST), where the root of the tree is $S$, the internal nodes are non-terminals, and the leaf nodes are terminals. Figure 1 (left) shows an example of an AST. We use a DSL $D$ to define the space of programs $\Sigma[D]$, where each program $p \in \Sigma[D]$ is a game strategy.
One solves the problem of synthesizing programmatic strategies by solving the following equation
$$\max_{\sigma_i \in \Sigma[D]} \min_{\sigma_{-i} \in \Sigma[D]} U(s_{\text{init}}, \sigma_i, \sigma_{-i}). \tag{1}$$
The strategies $\sigma_i$ and $\sigma_{-i}$ in $\Sigma[D]$ able to solve Equation 1 define a Nash equilibrium profile in the programmatic space. We consider a programmatic variant of PSRO [Lanctot et al., 2017] to approximate a solution to Equation 1.
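For intuition, when the strategy sets are small and finite, Equation 1 can be solved by brute force. The sketch below is purely illustrative (the paper instead searches the programmatic space $\Sigma[D]$ with local search); all names are ours.

```python
def maximin(strategies_i, strategies_mi, utility):
    """Brute-force maximin over finite strategy sets: return the
    strategy for player i that maximizes its worst-case utility,
    together with that guaranteed value."""
    best, best_value = None, float("-inf")
    for s_i in strategies_i:
        # worst case for s_i over all opponent strategies
        worst = min(utility(s_i, s_mi) for s_mi in strategies_mi)
        if worst > best_value:
            best, best_value = s_i, worst
    return best, best_value
```

On a two-gate P&R instance with a subset-check utility, this recovers the dominant strategy defend[1, 2].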
3 Programmatic PSRO (PPSRO)
Let $\lambda$ be a normal-form game defined by $(\Sigma, P, U_\Sigma)$, where $\Sigma = \{\Sigma_i, \Sigma_{-i}\}$ represents a set of strategies for each player in $P = \{i, -i\}$, and $U_\Sigma$ is the utility payoff table between each pair of strategies in $\Sigma$. A mixed strategy $\sigma$ is a probability distribution over strategies $\Sigma_i$ and $\Sigma_{-i}$ for players $i$ and $-i$, respectively. An empirical game of a normal-form game contains only a subset of the strategies of the original game.
Policy-Space Response Oracles (PSRO) is a framework for learning strategies that “grow” an empirical game [Lanctot et al., 2017]. In PSRO, the empirical game starts with a single strategy in $\Sigma_i$ and $\Sigma_{-i}$ and it grows these sets by including a new strategy for each player in each iteration of the algorithm. Let a mixed strategy over the sets $\Sigma_i$ and $\Sigma_{-i}$ of the empirical game be called a meta-strategy. PSRO grows $\Sigma_i$ and $\Sigma_{-i}$ by adding best responses to meta-strategies. Once a best response is added to a set, a new meta-strategy is computed and the process is repeated. That is, given a meta-strategy $\sigma_{-i}$ (resp. $\sigma_i$), for player $-i$ (resp. $i$), the best response to $\sigma_{-i}$ (resp. $\sigma_i$) is added to $\Sigma_i$ (resp. $\Sigma_{-i}$).
PSRO generalizes algorithms such as IBR, FP, and DO depending on how the meta-strategies are computed. Let $\sigma_k = (p_1, p_2, \cdots, p_n)$ be a meta-strategy for player $k$ ($k$ can be either $i$ or $-i$). Here, $p_j$ in $\sigma_k$ represents the probability with which $\sigma_k$ plays the $j$-th strategy added to the empirical game for player $k$. PSRO generalizes IBR if the meta-strategies are of the form $(0.0, 0.0, \cdots, 1.0)$, i.e., the only strategy in the support of the meta-strategy is the last strategy added to the empirical game. If the meta-strategy $\sigma_{-i}$ with $n$ strategies is of the form $(1/n, 1/n, \cdots, 1/n)$, i.e., all the previous strategies added to the game are played with equal probability, then PSRO generalizes FP. PSRO also generalizes DO [McMahan et al., 2003] when the meta-strategy is computed by solving the empirical game. We use a variant of PSRO, which we call Programmatic PSRO (PPSRO), to approximate a solution to Equation 1. PPSRO is shown in Algorithm 1.
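The meta-strategy shapes just described are plain probability vectors over the empirical game. A minimal sketch (function names are ours, not the paper's):

```python
def ibr_meta(n):
    """IBR-style meta-strategy: all probability mass on the n-th
    (most recently added) strategy of the empirical game."""
    return [0.0] * (n - 1) + [1.0]

def fp_meta(n):
    """FP-style meta-strategy: uniform over all n strategies in the
    empirical game."""
    return [1.0 / n] * n
```

A DO-style meta-strategy would instead be obtained by solving the empirical game (e.g., with a linear program), which we omit here.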
**Algorithm 1 Programmatic PSRO**
**Require:** Game $G$, DSL $D$, learning algorithm $\Psi$.
**Ensure:** Strategy $\sigma_i$ for player $i$.
1. Initialize $\Sigma_i$ and $\Sigma_{-i}$ with one strategy each.
2. Compute utilities $U_\Sigma$ for $(\sigma_i, \sigma_{-i})$
3. while have not exhausted budget do
4. for player $i$ in $P$ do
5. Compute a meta-strategy $\sigma_{-i}$ with $\Psi(\Sigma, U_\Sigma)$
6. $\sigma_i' \leftarrow \text{search}(\sigma_i[-1], \sigma_{-i})$
7. $\Sigma_i \leftarrow \Sigma_i \cup \{\sigma_i'\}$
8. Compute entries in $U_\Sigma$ from $\Sigma$
9. return Last meta-strategy $\sigma_i$
PPSRO starts by initializing the sets of strategies, $\Sigma_i$ and $\Sigma_{-i}$, with one arbitrary strategy each (line 1). PPSRO runs a number of iterations according to a given computational budget (e.g., the number of games played). In each iteration, PPSRO invokes a learning algorithm $\Psi$ (e.g., IBR) that receives the current empirical game and returns a meta-strategy $\sigma_{-i}$ (line 5). Then, it searches in the programmatic space of strategies for a best response $\sigma_i'$ to $\sigma_{-i}$. We consider local search algorithms for computing $\sigma_i'$. The search algorithm, described in Section 4, initializes its computation with the last strategy added to the empirical game for $i$, which is denoted as $\sigma_i[-1]$ (line 6). The best response $\sigma_i'$ is then added to $\Sigma_i$. At the end, PPSRO returns the last meta-strategy $\sigma_i$ as an approximate solution for player $i$ to Equation 1 (line 9).
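Algorithm 1 can be sketched in a few lines of Python, with the meta-strategy computation and the best-response search passed in as callbacks; `meta_fn`, `search_fn`, and `utility_fn` are illustrative names, not the paper's API:

```python
def ppsro(init_i, init_mi, meta_fn, search_fn, utility_fn, iters):
    """Sketch of PPSRO (Algorithm 1). meta_fn maps the opponent's list
    of empirical strategies to a probability vector; search_fn(start,
    guide) approximates a best response, where guide scores a candidate
    strategy against the opponent's meta-strategy."""
    strategies = {0: [init_i], 1: [init_mi]}  # empirical game per player
    for _ in range(iters):
        for k in (0, 1):
            opp = 1 - k
            meta = meta_fn(strategies[opp])

            def guide(cand, meta=meta, opp_strats=tuple(strategies[opp])):
                # expected utility of cand against the meta-strategy
                return sum(p * utility_fn(cand, s)
                           for p, s in zip(meta, opp_strats))

            strategies[k].append(search_fn(strategies[k][-1], guide))
    return strategies[0][-1]  # IBR-style: last strategy for player i
```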
The choice of meta-strategies across iterations of PPSRO determines how quickly it is able to approximate a Nash equilibrium profile for the game. Previous work investigated different approaches for defining meta-strategies in the context of PSRO and neural policies [Lanctot et al., 2017; Anthony et al., 2020; Muller et al., 2020]. However, searching in programmatic space is different than searching in neural space, since the former does not have a gradient signal to guide the search. As we show in our experiments, meta-strategies used with PSRO might not work well with PPSRO.
4 Hill Climbing for Synthesis of Strategies
Hill Climbing (HC) is a local search algorithm that starts with an arbitrary candidate solution to a combinatorial search problem and attempts to improve it with greedy changes. We use HC to approximate best responses to meta-strategies $\sigma_{-i}$ in the PPSRO main loop (line 6 of Algorithm 1). HC receives the last strategy added to the empirical game for player $i$, denoted $\sigma_i[-1]$, and the meta-strategy $\sigma_{-i}$; it returns an approximate best response to $\sigma_{-i}$ by searching in the programmatic space defined by the DSL. The starting candidate solution of the search is $\sigma_0 = \sigma_i[-1]$. HC attempts to approximate a best response to $\sigma_{-i}$ by evaluating neighbor strategies of $\sigma_0$. We update the current candidate solution $\sigma_0$ to a neighbor $\sigma'_0$ if $U(s_{\text{init}}, \sigma'_0, \sigma_{-i})$ is greater than $U(s_{\text{init}}, \sigma_0, \sigma_{-i})$; otherwise, HC generates and evaluates another neighbor $\sigma'_0$ of $\sigma_0$. This process is repeated until the search budget is exhausted. HC returns the strategy encountered in search with the highest $U$-value as its approximate best response to $\sigma_{-i}$.
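The HC loop described above can be sketched as follows, with mutation and evaluation abstracted into callbacks (`neighbor_fn` returns a mutated copy of a strategy; `utility_fn` plays it against the opponent meta-strategy; the names are ours):

```python
def hill_climbing(start, neighbor_fn, utility_fn, budget):
    """Greedy local search: accept a mutated neighbor only if it scores
    strictly higher, and return the best strategy seen within the
    evaluation budget."""
    current = start
    best, best_u = start, utility_fn(start)
    for _ in range(budget):
        candidate = neighbor_fn(current)
        cand_u = utility_fn(candidate)
        if cand_u > utility_fn(current):
            current = candidate
        if cand_u > best_u:
            best, best_u = candidate, cand_u
    return best
```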
Neighborhood solutions are produced by applying a “mutation” in $\sigma_0$’s AST. A mutation is performed by uniformly sampling a non-terminal symbol $S$ in the AST, and replacing the subtree rooted at $S$ with a new subtree. The new subtree is generated by replacing $S$ with the right-hand side of a production rule for $S$ that is selected uniformly at random. The mutation process repeatedly replaces a non-terminal leaf node in the generated program with the right-hand side of a random production rule of the DSL until the program’s AST contains only terminal symbols as leaves.
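The mutation operator can be sketched on a toy grammar as follows; the grammar mirrors Figure 1, but the representation (nested lists whose head is the expanded non-terminal) and all names are ours, not the paper's implementation:

```python
import copy
import random

# Toy grammar: non-terminal -> list of production rules.
GRAMMAR = {
    "S": [["C"], ["if", "B", "then", "C"]],
    "C": [["c1"], ["c2"]],
    "B": [["b1"], ["b2"]],
}

def grow(symbol):
    """Expand a symbol into a complete subtree, choosing production
    rules uniformly at random until all leaves are terminals."""
    if symbol not in GRAMMAR:
        return symbol  # terminal leaf
    rule = random.choice(GRAMMAR[symbol])
    return [symbol] + [grow(s) for s in rule]

def nonterminal_paths(tree, path=()):
    """Yield index paths to every non-terminal node of the AST."""
    if isinstance(tree, list):
        yield path
        for k, child in enumerate(tree[1:], start=1):
            yield from nonterminal_paths(child, path + (k,))

def mutate(tree):
    """Uniformly sample a non-terminal node and regrow the subtree
    rooted at it with fresh random productions."""
    tree = copy.deepcopy(tree)
    path = random.choice(list(nonterminal_paths(tree)))
    if not path:  # the root was sampled: resample the whole tree
        return grow(tree[0])
    node = tree
    for k in path[:-1]:
        node = node[k]
    node[path[-1]] = grow(node[path[-1]][0])
    return tree
```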
HC is initialized with a random program only in the first iteration of PPSRO; HC is initialized with the programmatic best response computed in PPSRO’s previous iteration otherwise ($\sigma_i[-1]$ in line 6 of Algorithm 1).
5 Shortcomings of Existing Approaches
The effectiveness of the search algorithm (e.g., HC) for computing a best response depends on the computational cost of evaluating $\sigma_{-i}$ and on the information $\sigma_{-i}$ encodes, as we explain next. The meta-strategy $\sigma_{-i}$ determines how fast we can approximate a Nash equilibrium profile for the game. This is because the utility function $U(s_{\text{init}}, \sigma_{i}, \sigma_{-i})$ provides the search signal for the synthesis of a best response $\sigma_i$ to $\sigma_{-i}$ in the space $\Sigma[D]$. For example, if the meta-strategy $\sigma_{-i}$ with $n$ strategies is of the form $(1/n, 1/n, \ldots, 1/n)$, i.e., all the strategies synthesized in previous iterations are in $\sigma_{-i}$’s support, then $\sigma_{-i}$ provides a richer guiding signal than IBR’s meta-strategy, which accounts only for a single strategy. Note that PSRO (and PPSRO) with meta-strategies that account for all strategies with equal probability is equivalent to FP [Lanctot et al., 2017]. Although FP provides a richer search signal, it incurs a higher computational cost, as the guiding function $U(s_{\text{init}}, \sigma_{i}, \sigma_{-i})$ requires one to evaluate all strategies in the support of the meta-strategy. Example 1 illustrates IBR’s lack of information for guiding search in the game of Poachers and Rangers (P&R).
P&R is a simultaneous-move two-player zero-sum game without ties where rangers need to protect the gates of a national park to avoid poachers getting inside. In the game, poachers need to attack at least one unprotected gate to enter the park, and rangers succeed if they protect all gates attacked by poachers. Rangers receive the utility of 1 if they protect all attacked gates and -1 otherwise. The game has a trivial Ranger’s dominant strategy, where they protect all the gates. Despite having a trivial solution, the game is particularly hard as a program synthesis task. This difficulty is inherent to the size of the programmatic solution required to solve this game. If the number of gates is arbitrarily large, current synthesizers might struggle to synthesize such long programs. For example, for a game with $n$ gates, the optimal programmatic strategy is any permutation of the instructions in the following program: $\text{defend}[1], \text{defend}[2], \ldots, \text{defend}[n]$, which we also denote as $\text{defend}[1, 2, \ldots, n]$ for conciseness.
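Under this rule set, the Rangers' utility reduces to a subset check. A one-line sketch of our reading of the rules (the function name is ours):

```python
def rangers_utility(defended, attacked):
    """Rangers win (+1) iff every attacked gate is also defended;
    otherwise Poachers enter through an unprotected gate and Rangers
    lose (-1)."""
    return 1 if set(attacked) <= set(defended) else -1
```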
Example 1. Let us consider a P&R instance with 2 gates. In the first iteration, IBR generates an arbitrary strategy for Rangers: $\text{defend}[2]$. In the next iteration, it computes a best response to $\text{defend}[2]$: $\text{attack}[1]$. Next, IBR computes a best response to the Poachers strategy, $\text{attack}[1]$, so it produces the strategy $\text{defend}[1]$. Then, IBR computes a best response to $\text{defend}[1]$, thus generating $\text{attack}[2]$ for Poachers. In the next iteration, IBR computes $\text{defend}[2]$ as a best response to $\text{attack}[2]$. Note that $\text{defend}[2]$ is the strategy in which IBR started the learning procedure—IBR just looped back to the beginning of the process. Since IBR uses only the last synthesized strategy, it can loop over suboptimal strategies which could delay the convergence to the optimal strategy $\text{defend}[1, 2]$.
By contrast, in FP one considers all previous strategies synthesized in the learning process. Once the empirical game has the strategies $\text{attack}[1]$ and $\text{attack}[2]$, the search algorithm is guided to synthesize the optimal $\text{defend}[1, 2]$.
DO may strike a balance between computational cost and search guidance, i.e., it includes fewer strategies than FP, but more than IBR in the support of the meta-strategy. With DO, only the strategies in the empirical game that are deemed important, i.e., that are in the support of a Nash equilibrium strategy, will be considered in search. However, DO might still miss important information for guiding local search algorithms in the context of PPSRO, as we show in Example 2.
Example 2. Let us consider a P&R instance with 5 gates. In the first iteration, DO generates two arbitrary strategies: $\text{defend}[2]$ and $\text{attack}[1]$ for Rangers and Poachers, respectively. Let us assume that PPSRO instantiated as DO generates the empirical game shown in Table 1 after a few iterations. In the following iteration, PPSRO adds a strategy for Rangers to the empirical game. This is achieved by solving the empirical game shown in Table 1 to generate a meta-strategy $\sigma_{-i}$ for Poachers and then approximating a best response $\sigma_i$ to $\sigma_{-i}$. The last row of Table 1 shows the strategy for $-i$ in the Nash equilibrium profile for the empirical game, which is used as the meta-strategy $\sigma_{-i}$. Any strategy $\sigma_i$ for Rangers that defends at least gates 1, 2, and 5 is a best response to $\sigma_{-i}$ since the support of $\sigma_{-i}$ only accounts for
$\text{attack}[1, 2, 5]$. The best response $\sigma_i$ does not need to defend gate 3, despite it being part of the empirical game for Poachers (in strategy $\text{attack}[1, 2, 3]$). If both $\text{attack}[1, 2, 3]$ and $\text{attack}[1, 2, 5]$ were in the support of $\sigma_{-i}$, PPSRO would be forced to synthesize a strategy that defends gates 1, 2, 3, and 5. However, DO does not include $\text{attack}[1, 2, 3]$ in the support of $\sigma_{-i}$, so PPSRO is only forced to synthesize a strategy that defends gates 1, 2, and 5, which could delay the convergence of the algorithm for missing gate 3.
To address these limitations described for IBR, FP, and DO, we propose a new algorithm able to better guide the synthesis of programmatic strategies in the context of PPSRO.
6 Local Learner (2L)
We propose a new instance of PPSRO called Local Learner (2L), which can overcome the limitations of IBR, FP, and DO presented in the previous section. 2L defines meta-strategies that are “in between” those IBR and FP define in terms of the number of strategies in the meta-strategy’s support. 2L can use more strategies than IBR to provide a better signal to the search algorithm, but it also attempts to use fewer strategies than FP to reduce the computational cost of the evaluation. The following P&R example illustrates how 2L works.
**Example 3.** Let us consider a P&R instance with $n \geq 2$ gates. We initialize with an arbitrary strategy ($\text{attack}[2]$) for Poachers and compute a best response to it: $\text{defend}[2]$. In the next iteration, we compute a best response to $\text{defend}[2]$: $\text{attack}[1]$. Next, 2L returns a meta-strategy $\sigma_{-i}$ for Poachers so we can compute a best response to it and add a new strategy for Rangers to the empirical game. Similarly to what FP would do, in this case, 2L returns a meta-strategy for Poachers that considers all strategies currently in the empirical game ($\text{attack}[2]$ and $\text{attack}[1]$): $\sigma_{-i} = (0.5, 0.5)$. Let us suppose that the search returns the best response $\text{defend}[1, 2]$ to $\sigma_{-i}$, which is added to the empirical game. 2L then returns a meta-strategy $\sigma_i = (0.5, 0.5)$ for Rangers that also considers all strategies currently in the empirical game ($\text{defend}[2]$ and $\text{defend}[1, 2]$). While computing a best response to $\sigma_i$, 2L learns that the strategy $\text{defend}[2]$ is redundant and can be dropped from the support of $\sigma_i$ in future iterations. Before finding a best response to $\sigma_i$ (e.g., $\text{attack}[3]$), let us assume that the search evaluates the strategies $\text{attack}[1]$ and $\text{attack}[2]$. Note that $\text{defend}[2]$ is a best response to only $\text{attack}[2]$, while $\text{defend}[1, 2]$ is a best response to both of them. Given the strategies evaluated in search and that $\text{defend}[1, 2]$ is in the support of the meta-strategy, $\text{defend}[2]$ adds no new information to the search and can thus be dropped.
2L initially assumes that all strategies inserted in the empirical game are helpful for guiding the search, so it adds all of them to the support of its meta-strategy $\sigma_{-i}$. While computing a best response to $\sigma_{-i}$, it collects data about each strategy in the support of $\sigma_{-i}$ and removes all “redundant strategies” from it.
6.1 Formal Description
Let \(\Sigma_k = \{\sigma_{1,k}, \ldots, \sigma_{n,k}\}\) be the set of strategies for player \(k\) in the empirical game in an execution of PPSRO, where \(k\) is either \(i\) or \(-i\) and \(\sigma_{j,k}\) is the \(j\)-th strategy added for \(k\) to the empirical game. Let \(\sigma_k = (p_1, \ldots, p_n)\) be a meta-strategy over \(\Sigma_k\), where \(p_j\) indicates the probability with which \(\sigma_k\) plays the \(j\)-th strategy in \(\Sigma_k\). We denote \(p_j\) in \(\sigma_k\) as \(\sigma_k[j]\).
Let \(\Sigma_{\sigma_k}\) be the subset of strategies in the support of \(\sigma_k\), i.e., the strategies whose \(p_j\)-value is greater than zero in \(\sigma_k\).
While computing a best response to a meta-strategy \(\sigma_k\), 2L employs a search algorithm that evaluates a number of strategies as potential best responses to \(\sigma_k\). Let \(S\) be the set of strategies evaluated in search that are best responded to by at least one strategy in \(\Sigma_{\sigma_k}\). We call helpful strategies, denoted \(\Sigma_{h,\sigma_k}\), the smallest subset of \(\Sigma_{\sigma_k}\) that contains at least one best response to every strategy in \(S\). We call redundant strategies the set \(\Sigma_{r,\sigma_k} = \Sigma_{\sigma_k} \setminus \Sigma_{h,\sigma_k}\).
**Example 4.** In Example 3, when computing a best response to \(\sigma_i = (0.5, 0.5)\) with \(\Sigma_{\sigma_i} = \{\text{defend}[2], \text{defend}[1, 2]\}\), we have that \(S = \{\text{attack}[1], \text{attack}[2]\}\) and \(\Sigma_{h,\sigma_i} = \{\text{defend}[1, 2]\}\). 2L is then able to remove the redundant set \(\{\text{defend}[2]\}\) from \(\Sigma_{\sigma_i}\) for future iterations of the algorithm.
In practice, we are unable to compute the smallest set \(\Sigma_{h,\sigma_k}\) possible for two reasons. First, the search might not encounter the strategies needed to prove a strategy helpful. In Example 3, if the synthesis algorithm encounters \(\text{attack}[1]\) but does not encounter \(\text{attack}[2]\) during the search, then the strategies \(\text{defend}[2]\) and \(\text{defend}[1, 2]\) would be “equally helpful” and either one could be selected depending on the tie-breaking procedure implemented. Second, finding the smallest set \(\Sigma_{h,\sigma_k}\) given \(S\) is equivalent to solving a set cover problem, which is NP-hard [Garey and Johnson, 1979]. 2L uses a polynomial-time greedy algorithm to approximate a solution to the set cover problem. Namely, we define an initial empty set \(S'\). Then, in every iteration, we select the strategy \(\sigma\) in \(\Sigma_{\sigma_k}\) that is a best response to the largest number of strategies in \(S \setminus S'\) and add to \(S'\) all strategies to which \(\sigma\) is a best response. We stop when \(S = S'\). The strategies \(\sigma\) selected from \(\Sigma_{\sigma_k}\) in this procedure approximate \(\Sigma_{h,\sigma_k}\), which gives us an approximation of the set of redundant strategies.
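The greedy approximation of the helpful set can be sketched as follows; the best-response check is abstracted as a predicate `best_responds(sigma, s)`, and all names are ours:

```python
def helpful_strategies(support, evaluated, best_responds):
    """Greedy set cover: repeatedly pick the support strategy that is a
    best response to the most not-yet-covered evaluated strategies,
    until every evaluated strategy is covered (or none can be)."""
    helpful, remaining = [], set(evaluated)
    while remaining:
        best = max(support,
                   key=lambda sig: sum(best_responds(sig, s)
                                       for s in remaining))
        covered = {s for s in remaining if best_responds(best, s)}
        if not covered:
            break  # nothing in the support covers what remains
        helpful.append(best)
        remaining -= covered
    return helpful
```

On the Example 4 instance (support {defend[2], defend[1, 2]}, evaluated attacks {attack[1], attack[2]}), this returns only defend[1, 2], flagging defend[2] as redundant.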
2L works by executing the following steps:
1. Initialize \(\Sigma_{-i}\) and \(\Sigma_{\sigma_{-i}}\) with \(\{\sigma_{1,-i}\}\) for some arbitrary strategy \(\sigma_{1,-i}\); compute a best response \(\sigma_{1,i}\) to \(\sigma_{1,-i}\) and initialize \(\Sigma_i\) and \(\Sigma_{\sigma_i}\) with \(\{\sigma_{1,i}\}\). Define the meta-strategies \(\sigma_i\) and \(\sigma_{-i}\) as \((1.0)\).
2. While there is time for learning, alternating \(k\) between \(-i\) in one iteration and \(i\) in the next, execute:
(a) Compute a best response \( \sigma \) to \( \sigma_{-k} \) and add it to \( \Sigma_k \) and to \( \Sigma_{\sigma_k} \); set \( \sigma_k[j] = 1.0/|\Sigma_{\sigma_k}| \) for all \( \sigma_j \) in \( \Sigma_{\sigma_k} \).
(b) For all \( \sigma_j \) in \( \Sigma_{\sigma_{-k}} \) that were estimated as redundant, set \( \sigma_{-k}[j] = 0 \) and remove \( \sigma_j \) from \( \Sigma_{\sigma_{-k}} \).
(c) Set \( \sigma_{-k}[j] = 1.0/|\Sigma_{\sigma_{-k}}| \) for all \( \sigma_{j} \) in \( \Sigma_{\sigma_{-k}} \).
2L starts by initializing the set of strategies of the empirical game and the set of strategies in the support of the meta-strategy with an arbitrary strategy for one of the players (\(-i\) in the pseudocode above). Then, it computes a best response to this arbitrary strategy and uses the best response to initialize \(\Sigma_i\) and \(\Sigma_{\sigma_i}\). The meta-strategies are of the form \((1.0)\) because the empirical game has a single strategy for each player (see Step 1 above). Step 2 refers to PPSRO’s loop, where it computes best responses while alternating the players. Once a best response \(\sigma\) is computed to strategy \(\sigma_{-k}\), it is added to the support of \(\sigma_k\) with uniform probability (see Step 2a).
2L estimates which strategies in the support of \( \sigma_{-k} \) are redundant while computing the best response \( \sigma \) to \( \sigma_{-k} \). In Step 2b, 2L removes the redundant strategies from the support of \( \sigma_{-k} \) and, in Step 2c, redistributes the probabilities such that each strategy in the support has equal probability.
Example 5. In Example 2, we showed that DO fails to include both \( \text{attack}[1, 2, 3] \) and \( \text{attack}[1, 2, 5] \) in the support of the meta-strategy \( \sigma_{-i} \), thus missing the guidance information \( \text{attack}[1, 2, 3] \) provides. Once the strategy \( \text{attack}[1, 2, 5] \) is added to the empirical game, the meta-strategy will automatically have both \( \text{attack}[1, 2, 3] \) and \( \text{attack}[1, 2, 5] \) in its support. In contrast with DO, 2L retains both strategies in the support of \( \sigma_{-i} \) for the next iteration as long as strategies such as \( \text{defend}[1, 2, 3] \) and \( \text{defend}[1, 2, 5] \) are evaluated in search as both \( \text{attack}[1, 2, 3] \) and \( \text{attack}[1, 2, 5] \) will be flagged as helpful.
A weakness of 2L as presented above is that it can flag as redundant a strategy that is helpful if it does not sample enough strategies in search. For example, if the meta-strategy for Rangers has both \( \text{defend}[1] \) and \( \text{defend}[2] \) in its support, but it never evaluates a strategy that attacks gate 1 in search, then \( \text{defend}[1] \) will mistakenly be removed from the meta-strategy’s support. We implement the following enhancement to fix this weakness. Whenever the search returns a best response \( \sigma \) to a meta-strategy \( \sigma_{-i} \) (resp. \( \sigma_{i} \)), we evaluate \( \sigma \) against all strategies in the empirical game, including those not in the support of \( \sigma_{-i} \) (resp. \( \sigma_{i} \)). If there is a strategy \( \sigma' \) in the empirical game that is a best response to \( \sigma \), then it must be that 2L mistakenly removed \( \sigma' \) from the support of the meta-strategy. In this case, we repeat the search for a best response with \( \sigma' \) added to the support of the meta-strategy.
This enhancement can increase the number of times the search algorithm is invoked in each iteration of the PPSRO loop. While we perform a single search per iteration with IBR, FP, and DO, in the worst case, 2L can perform a number of searches that is equal to the number of strategies in the game. This is because, in the worst case, we add all strategies of the empirical game to the support of the meta-strategy. Despite the possible extra searches, preliminary experiments showed that this enhancement improves the sampling efficiency of 2L. All results of this paper use this enhancement.
In practice, we do not have the guarantee that the search algorithm used in PPSRO’s main loop is able to return a best response to a meta-strategy, so we use whichever approximation the search returns as if it were a best response to the meta-strategy. Moreover, depending on the game, we might not be able to immediately recognize a best response to a strategy once we see one, as one would have to prove the strategy to be a best response. This could be problematic, for example, when implementing the enhancement in which 2L re-runs the search if there is a strategy in the empirical game that is a best response to the strategy the search returns. We run our experiments in games with utilities of \(-1\), \(0\), and \(+1\). If a best response cannot be easily verified (e.g., in MicroRTS), then we consider that \( \sigma \) is a best response to \( \sigma' \) if \( U(s_{\text{init}}, \sigma, \sigma') = +1 \).
Once 2L reaches a computational budget, it can return different strategies as its approximated solution to Equation 1. Similarly to IBR, it can return the last strategy added to the empirical game for each player. 2L can also return a mixed strategy that is given by the distribution of strategies added to the empirical game, as FP does. We can also solve the resulting empirical game with linear programming, like DO does, and return the resulting strategy. In this paper, we assume the games have a pure dominant strategy for which IBR’s approach of returning the last strategy added to the empirical game is suitable; this is what we use in our experiments.
7 Empirical Evaluation
7.1 Problem Domains
In addition to P&R, we introduce Climbing Monkey (CM), another two-player zero-sum game with a trivial optimal strategy that is also challenging in the context of programmatic strategies. In CM, monkeys need to climb to a branch of a tree that is higher than the branch the opponent’s monkey is able to reach. The branches need to be climbed one at a time, without skipping any branch. The monkey that climbs to a higher branch wins the game. The game ends in a draw if both monkeys climb to a branch of the same height. For a tree with \( n \) branches, a dominant programmatic strategy is \( \text{climb}[1], \text{climb}[2], \ldots, \text{climb}[n] \). Similarly to P&R, CM is challenging because, depending on the number of branches, it requires one to synthesize long programs.
In P&R, learning algorithms perform better if using a larger number of strategies in the support of meta-strategies as having many strategies helps Rangers converge to a strategy that protects all gates. CM is a game where all one needs to use is the last strategy added to the empirical game, i.e., the strategy that allows the monkey to climb to the highest branch. We hypothesize that 2L is able to detect which strategies are needed in the support of the meta-strategies for these two games.
We also evaluate 2L in MicroRTS, a real-time strategy game designed for research. There is an active research community using MicroRTS as a benchmark for evaluating intelligent systems.\(^1\) MicroRTS is a game played with real-time constraints and very large action and state spaces [LeLis, 2021]. Each player can control two types of stationary units
\(^1\)https://github.com/Farama-Foundation/MicroRTS/wiki
(Bases and Barracks) and four types of mobile units (Workers, Ranged, Light, and Heavy). Bases are used to store resources and train Worker units. Barracks can train Ranged, Light, and Heavy units. Workers can build stationary units, harvest resources, and attack opponent units. Ranged, Light, and Heavy units differ in terms of hit points and attack damage; Ranged units are distinguished by dealing damage from a distance. In MicroRTS, a match is played on a grid, which represents the map. Different maps might require different strategies for playing the game well.
7.2 Empirical Methodology
The games of P&R and CM allow for a comparison of IBR, FP, DO, and 2L that is easy to understand and analyze as they have trivial optimal strategies. The experiments with MicroRTS allow us to compare not only existing learning algorithms with 2L, but also other methods for playing MicroRTS. Namely, we compared the programmatic strategies of IBR, FP, DO, and 2L with programmatic strategies human programmers wrote to win the last two competitions: COAC and Mayari.
We also include two programmatic strategies that have been used in the MicroRTS competition since 2017: WorkerRush (WR) and LightRush (LR). LR was the winner of the 2017 competition. We use seven maps of different sizes: 8 by 8 BasesWorkers, 16 by 16 BasesWorkers, 24 by 24A BasesWorkers, 24 by 24 DoubleGame, BWResources 32 by 32, Chambers 32 by 32, and 32 by 32 BasesWorkers. We consider two starting locations (the location of the player’s base) on each map. When evaluating two strategies, to ensure fairness, each strategy plays an equal number of matches in both locations against the other strategy.
We are interested in evaluating the sample efficiency of the different approaches, i.e., the strength of the strategies they synthesize as a function of the number of games they need to play to synthesize the strategies. We present plots such as the one in Figure 2, where the x-axis shows the number of games played and the y-axis a metric of performance. We measure performance in P&R in terms of number of gates Rangers protect; for CM we measure how high a monkey climbs.
In the MicroRTS plots (Figure 3) we measure performance in terms of the winning rate of the strategy synthesized by each method in a tournament. The tournament is played among strategies synthesized by all systems after playing the same number of games. In the tournament, each strategy plays against other strategies 10 times, 5 in each starting location of the map. MicroRTS matches can finish in draws. Following previous work, we assign a score of 1.0 for each win and 0.5 for each draw. The winning rate is given by adding the number of wins with half the number of draws, divided by the total number of matches [Ontañón, 2017].
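The scoring convention can be captured in a small helper (a sketch; the function name is ours):

```python
def winning_rate(wins, draws, losses):
    """Winning rate with a score of 1.0 per win and 0.5 per draw,
    following the MicroRTS competition convention."""
    total = wins + draws + losses
    return (wins + 0.5 * draws) / total
```

For example, 6 wins, 2 draws, and 2 losses give a winning rate of 0.7.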
Since the mutation operation we use in the hill climbing algorithm is stochastic, we perform multiple independent runs of each experiment and report the average results and standard deviation. The number of runs performed in each experiment is specified below. We use Medeiros et al. [2022]'s
7.3 Empirical Results
7.4 Simulated Competition Results (MicroRTS)
Table 2 shows the average results for a set of simulated competitions using the seven maps mentioned in the empirical methodology section. Each entry of the table shows the average winning rate and standard deviation of the row method against the column method; the last column shows the average and standard deviation across a given row of the table. The numbers in Table 2 are the average winning rate computed by simulating 5 tournaments. The strategy we use in each tournament for IBR, FP, DO, and 2L is generated as follows. We run each method 8 times, thus producing 8 different strategies each. Then, we run a round-robin evaluation among the 8 strategies of a given method and the winning strategy in
\(^2\)https://github.com/Coac/coac-ai-microrts
\(^3\)https://github.com/barvazkrav/mayariBot
\(^4\)Our code is at https://github.com/rubensolv/LocalLearnerIJCAI
this evaluation is used as the method’s strategy in our tournament. For a given tournament, the winning rate is computed by having the strategy of each method play the other strategies 10 times in each map, 5 for each starting location.
2L is the only method to obtain an average winning rate above 0.50 against all other opponents; 2L also obtained the highest average winning rate when considering all opponents: 0.72 (column “Total”). In particular, it obtains average winning rates of 0.76 and 0.66 against COAC and Mayari, respectively, the winners of the two latest competitions. A Welch’s t-test shows that the difference between 2L and the competition winners COAC and Mayari, in terms of total average winning rate, is statistically significant with p < 10⁻⁵.
These results on P&R, CM, and MicroRTS show that 2L’s approach for defining its meta-strategy can be quite effective in guiding a synthesizer that uses HC search.
8 More Related Works
In addition to PSRO [Lanctot et al., 2017], this work is related to programmatic policies [Verma et al., 2018], where the goal is to synthesize human-readable programs encoding policies for solving reinforcement learning problems [Bastani et al., 2018; Verma et al., 2019]. Generalized planning (GP) is also related as it deals with the synthesis of programs for solving classical planning problems [Bonet et al., 2010; Srivastava et al., 2011; Hu and De Giacomo, 2013; Aguas et al., 2018]. 2L differs from these works because it learns how to solve two-player games, while the latter focus on single-agent problems.
Mariño et al. [2021; 2022] also use local search algorithms to synthesize programmatic strategies, and they also evaluate their system in MicroRTS. In terms of learning algorithms, they only use IBR, so the IBR version in our experiments can be seen as representative of their work. Medeiros et al. [2022] present a system for learning sketches with imitation learning as a means of speeding up the computation of programmatic best responses. They focus on the computation of best responses, so their solution can in theory be combined with any of the learning algorithms we evaluated in this paper.
9 Conclusions
In this paper, we introduced Local Learner, a learning algorithm based on the PSRO framework to guide local search algorithms on the task of synthesizing programmatic strategies. 2L uses information collected from the computation of best responses to approximate a set of helpful strategies to have in the support of 2L’s meta-strategy, which serves as a guiding function for the search. We show empirically in three games the advantages of 2L over adaptations of the learning algorithms IBR, FP, and DO to programmatic strategies. The empirical results show that 2L’s approach of using information collected during search to determine its own guiding function can be quite effective in practice. 2L is never worse than the other learning algorithms and is often far superior. In particular, in the game of MicroRTS, we simulated a competition with the last two winners of the annual MicroRTS competition, and the strategies 2L synthesized obtained the highest winning rate across all evaluated systems.
Acknowledgments
This research was supported by Canada’s NSERC and the CIFAR AI Chairs program and Brazil’s CAPES. The research was carried out using computational resources from Compute Canada. We thank the anonymous reviewers for their feedback.
References
Required Readings
- **Required Reading Assignment:**
- Chapter 5 of Shen and Lipasti (SnL).
- **Recommended References:**
Also Recommended …
- More advanced pipelining
- Interrupt and exception handling
- Out-of-order and superscalar execution concepts
Question: What should the fetch PC be in the next cycle?
If the instruction that is fetched is a control-flow instruction:
- How do we determine the next Fetch PC?
In fact, how do we even know whether or not the fetched instruction is a control-flow instruction?
How to Handle Control Dependences
- Critical to keep the pipeline full with correct sequence of dynamic instructions.
- Potential solutions if the instruction is a control-flow instruction:
- Stall the pipeline until we know the next fetch address
- Guess the next fetch address (branch prediction)
- Employ delayed branching (branch delay slot)
- Do something else (fine-grained multithreading)
- Eliminate control-flow instructions (predicated execution)
- Fetch from both possible paths (if you know the addresses of both possible paths) (multipath execution)
The Branch Problem
- Control flow instructions (branches) are frequent
- 15-25% of all instructions
- Problem: Next fetch address after a control-flow instruction is not determined after $N$ cycles in a pipelined processor
- $N$ cycles: (minimum) branch resolution latency
- If we are fetching $W$ instructions per cycle (i.e., if the pipeline is $W$ wide)
- A branch misprediction leads to $N \times W$ wasted instruction slots
Importance of The Branch Problem
- Assume \( N = 20 \) (20 pipe stages), \( W = 5 \) (5 wide fetch)
- Assume: 1 out of 5 instructions is a branch
- Assume: Each 5 instruction-block ends with a branch
How long does it take to fetch 500 instructions?
- 100% accuracy
- 100 cycles (all instructions fetched on the correct path)
- No wasted work
- 99% accuracy
- 100 (correct path) + 20 (wrong path) = 120 cycles
- 20% extra instructions fetched
- 98% accuracy
- 100 (correct path) + 20 * 2 (wrong path) = 140 cycles
- 40% extra instructions fetched
- 95% accuracy
- 100 (correct path) + 20 * 5 (wrong path) = 200 cycles
- 100% extra instructions fetched
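The arithmetic of this exercise can be reproduced with a short helper (a sketch; the names and the simple cost model are ours):

```python
def fetch_cycles(n_blocks, branch_resolution_latency, accuracy):
    """Cycles to fetch n_blocks fetch-wide blocks when every block ends in a
    branch and each misprediction wastes branch_resolution_latency cycles."""
    mispredictions = n_blocks * (1 - accuracy)
    return n_blocks + branch_resolution_latency * mispredictions
```

With N = 20 and 100 blocks (500 instructions at fetch width 5), accuracies of 1.00, 0.99, and 0.95 give 100, 120, and 200 cycles, matching the numbers above.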
Simplest: Always Guess $\text{NextPC} = \text{PC} + 4$
- Always predict the next sequential instruction is the next instruction to be executed
- This is a form of next fetch address prediction (and branch prediction)
- How can you make this more effective?
- Idea: Maximize the chances that the next sequential instruction is the next instruction to be executed
- Software: Lay out the control flow graph such that the “likely next instruction” is on the not-taken path of a branch
- Profile guided code positioning $\rightarrow$ Pettis & Hansen, PLDI 1990.
- Hardware: ??? (how can you do this in hardware...)
- Cache traces of executed instructions $\rightarrow$ Trace cache
Guessing $\text{NextPC} = \text{PC} + 4$
- How else can you make this more effective?
- Idea: Get rid of control flow instructions (or minimize their occurrence)
- How?
1. Get rid of unnecessary control flow instructions $\rightarrow$ combine predicates (predicate combining)
2. Convert control dependences into data dependences $\rightarrow$ predicated execution
Branch Prediction (A Bit More Enhanced)
- Idea: Predict the next fetch address (to be used in the next cycle)
- Requires three things to be predicted at fetch stage:
- Whether the fetched instruction is a branch
- (Conditional) branch direction
- Branch target address (if taken)
- Observation: Target address remains the same for a conditional direct branch across dynamic instances
- Idea: Store the target address from previous instance and access it with the PC
- Called Branch Target Buffer (BTB) or Branch Target Address Cache
Fetch Stage with BTB and Direction Prediction
(Diagram: the program counter indexes the direction predictor (taken?) and the Cache of Target Addresses (BTB: Branch Target Buffer); the next fetch address is the BTB target address on a hit with a taken prediction, and PC + inst size otherwise.)
More Sophisticated Branch Direction Prediction
1. **Global branch history:** tracks which direction earlier branches went.
2. **Program Counter:** stores the address of the current branch.
3. **XOR:** combines the global branch history with the branch PC to index the direction predictor.
4. **Direction predictor (taken?):** predicts if the branch is taken.
5. **Cache of Target Addresses (BTB: Branch Target Buffer):** stores predicted target addresses.
6. **Next Fetch Address:** determines the next address to fetch based on the prediction.
The diagram illustrates the process of predicting branch directions using a combination of historical data and current program state.
Three Things to Be Predicted
Requires three things to be predicted at fetch stage:
1. Whether the fetched instruction is a branch
2. (Conditional) branch direction
3. Branch target address (if taken)
Third (3.) can be accomplished using a BTB
- Remember target address computed last time branch was executed
First (1.) can be accomplished using a BTB
- If BTB provides a target address for the program counter, then it must be a branch
- Or, we can store “branch metadata” bits in instruction cache/memory → partially decoded instruction stored in I-cache
Second (2.): How do we predict the direction?
Simple Branch Direction Prediction Schemes
- Compile time (static)
- Always not taken
- Always taken
- BTFN (Backward taken, forward not taken)
- Profile based (likely direction)
- Run time (dynamic)
- Last time prediction (single-bit)
More Sophisticated Direction Prediction
- Compile time (static)
- Always not taken
- Always taken
- BTFN (Backward taken, forward not taken)
- Profile based (likely direction)
- Program analysis based (likely direction)
- Run time (dynamic)
- Last time prediction (single-bit)
- Two-bit counter based prediction
- Two-level prediction (global vs. local)
- Hybrid
- Advanced algorithms (e.g., using perceptrons)
Review: State Machine for Last-Time Prediction
- Two states: “predict taken” and “predict not taken”
- Transitions: an actually-taken branch moves the machine to “predict taken”; an actually-not-taken branch moves it to “predict not taken”
- The prediction is simply the branch’s last outcome
Review: Improving the Last Time Predictor
Problem: A last-time predictor changes its prediction from T→NT or NT→T too quickly
- even though the branch may be mostly taken or mostly not taken
Solution Idea: Add hysteresis to the predictor so that prediction does not change on a single different outcome
- Use two bits to track the history of predictions for a branch instead of a single bit
- Can have 2 states for T or NT instead of 1 state for each
Review: Two-Bit Counter Based Prediction
- Each branch associated with a two-bit counter
- One more bit provides hysteresis
- A strong prediction does not change with one single different outcome
- Accuracy for a loop with N iterations = (N-1)/N
Example outcome pattern: TNTNTNTN… (alternating taken / not taken)
Review: State Machine for 2-bit Counter
- Counter using *saturating arithmetic*
- Arithmetic with maximum and minimum values
Review: Hysteresis Using a 2-bit Counter
Change prediction after 2 consecutive mistakes
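A sketch of the 2-bit saturating counter in Python (the 0–3 state encoding is the usual convention; the class itself is ours):

```python
class TwoBitCounter:
    """States 0-1 predict not taken, 2-3 predict taken.
    The extra bit adds hysteresis: a strong state (0 or 3) needs two
    consecutive opposite outcomes before the prediction flips."""

    def __init__(self, state=2):          # start weakly taken
        self.state = state

    def predict(self):
        return self.state >= 2            # True = predict taken

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)   # saturating increment
        else:
            self.state = max(0, self.state - 1)   # saturating decrement
```

For a loop branch taken N−1 times and then not taken once, the counter mispredicts only the loop exit, giving the (N−1)/N accuracy quoted above.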
Is This Good Enough?
- ~85-90% accuracy for many programs with 2-bit counter based prediction (also called bimodal prediction)
- Is this good enough?
- How big is the branch problem?
Let’s Do the Exercise Again
- Assume $N = 20$ (20 pipe stages), $W = 5$ (5 wide fetch)
- Assume: 1 out of 5 instructions is a branch
- Assume: Each 5 instruction-block ends with a branch
How long does it take to fetch 500 instructions?
- 100% accuracy
- 100 cycles (all instructions fetched on the correct path)
- No wasted work
- 95% accuracy
- 100 (correct path) + 20 * 5 (wrong path) = 200 cycles
- 100% extra instructions fetched
- 90% accuracy
- 100 (correct path) + 20 * 10 (wrong path) = 300 cycles
- 200% extra instructions fetched
- 85% accuracy
- 100 (correct path) + 20 * 15 (wrong path) = 400 cycles
- 300% extra instructions fetched
Can We Do Better: Two-Level Prediction
- Last-time and 2BC predictors exploit “last-time” predictability
**Realization 1:** A branch’s outcome can be correlated with other branches’ outcomes
- Global branch correlation
**Realization 2:** A branch’s outcome can be correlated with past outcomes of the same branch (other than the outcome of the branch “last-time” it was executed)
- Local branch correlation
Global Branch Correlation (I)
- Recently executed branch outcomes in the execution path are correlated with the outcome of the next branch
```
if (cond1)
...
if (cond1 AND cond2)
```
- If first branch not taken, second also not taken
```
branch Y: if (cond1) a = 2;
...
branch X: if (a == 0)
```
- If first branch taken, second definitely not taken
Global Branch Correlation (II)
- branch Y: if (cond1)
- ...
- branch Z: if (cond2)
- ...
- branch X: if (cond1 AND cond2)
- If Y and Z both taken, then X also taken
- If Y or Z not taken, then X also not taken
Global Branch Correlation (III)
- Eqntott, SPEC’92: Generates truth table from Boolean expr.
```c
if (aa==2)      /* B1 */
    aa = 0;
if (bb==2)      /* B2 */
    bb = 0;
if (aa != bb) { /* B3 */
    ....
}
```
If B1 is not taken (i.e., aa==0@B3) and B2 is not taken (i.e. bb=0@B3) then B3 is certainly taken.
Capturing Global Branch Correlation
- Idea: Associate branch outcomes with “global T/NT history” of all branches
- Make a prediction based on the outcome of the branch the last time the same global branch history was encountered
Implementation:
- Keep track of the “global T/NT history” of all branches in a register → Global History Register (GHR)
- Use GHR to index into a table that recorded the outcome that was seen for each GHR value in the recent past → Pattern History Table (table of 2-bit counters)
- Global history/branch predictor
- Uses two levels of history (GHR + history at that GHR)
Two Level Global Branch Prediction
- First level: Global branch history register (N bits)
- The direction of last N branches
- Second level: Table of saturating counters for each history entry
- The direction the branch took the last time the same history was seen
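Both levels can be sketched in a few lines of Python (the table size and the initial weakly-taken counters are illustrative choices, not from any specific design):

```python
class TwoLevelGlobal:
    """Two-level global predictor: an N-bit global history register (GHR)
    indexes a pattern history table (PHT) of 2-bit saturating counters."""

    def __init__(self, history_bits=4):
        self.mask = (1 << history_bits) - 1
        self.ghr = 0                            # global T/NT history
        self.pht = [2] * (1 << history_bits)    # weakly-taken counters

    def predict(self):
        return self.pht[self.ghr] >= 2

    def update(self, taken):
        ctr = self.pht[self.ghr]
        self.pht[self.ghr] = min(3, ctr + 1) if taken else max(0, ctr - 1)
        self.ghr = ((self.ghr << 1) | int(taken)) & self.mask  # shift in outcome

    def train_and_score(self, outcomes):
        correct = 0
        for taken in outcomes:
            correct += (self.predict() == taken)
            self.update(taken)
        return correct / len(outcomes)
```

On an alternating T/NT stream this predictor mispredicts only while warming up, whereas a single 2-bit counter stays near 50% accuracy on the same stream.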
How Does the Global Predictor Work?
Intel Pentium Pro Branch Predictor
- Two level global branch predictor
- 4-bit global history register
- Multiple pattern history tables (of 2 bit counters)
- Which pattern history table to use is determined by lower order bits of the branch address
Global Branch Correlation Analysis
- branch Y: if (cond1)
...
- branch Z: if (cond2)
...
- branch X: if (cond1 AND cond2)
- If Y and Z both taken, then X also taken
- If Y or Z not taken, then X also not taken
- Only 3 past branches’ directions *really* matter
Improving Global Predictor Accuracy
- Idea: Add more context information to the global predictor to take into account which branch is being predicted
- **Gshare predictor**: GHR hashed with the Branch PC
- More context information
- Better utilization of PHT
-- Increases access latency
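The gshare index computation is just an XOR before the table lookup (a sketch; dropping the two word-offset bits of the PC is our assumption about the encoding):

```python
def gshare_index(pc, ghr, table_bits):
    """Index the PHT by XORing the branch PC with the global history."""
    return ((pc >> 2) ^ ghr) & ((1 << table_bits) - 1)
```

Two branches with the same global history now map to different PHT entries whenever their PC bits differ, which is how gshare adds context and reduces interference.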
Review: One-Level Branch Predictor
**Diagram Description:**
- **Program Counter:** address of the current instruction; indexes both the direction predictor and the BTB.
- **Direction predictor (2-bit counters):** predicts whether the branch is taken.
- **Cache of Target Addresses (BTB: Branch Target Buffer):** stores the target address seen the last time the branch executed.
- **Next Fetch Address:** selected between PC + inst size and the BTB target address; the target is used only on a BTB hit with a taken prediction.
Two-Level Global History Branch Predictor
(Diagram: as in the one-level predictor, but the direction predictor is indexed by the global branch history — which directions earlier branches went — rather than by the PC alone; the BTB still supplies the target address, and the next fetch address is chosen between PC + inst size and the target.)
Two-Level Gshare Branch Predictor
(Diagram: the global branch history is XORed with the program counter to index the direction predictor; the PC indexes the BTB; the next fetch address is chosen between PC + inst size and the target address based on the taken prediction and the BTB hit.)
An Issue: Interference in the PHTs
- Sharing the PHTs between histories/branches leads to interference
- Different branches map to the same PHT entry and modify it
- Interference can be positive, negative, or neutral
- Interference can be eliminated by dedicating a PHT per branch
- Too much hardware cost
- How else can you eliminate or reduce interference?
Reducing Interference in PHTs (I)
- Increase size of PHT
- Branch filtering
- Predict highly-biased branches separately so that they do not consume PHT entries
- E.g., static prediction or BTB based prediction
- Hashing/index-randomization
- Gshare
- Gskew
- Agree prediction
Biased Branches and Branch Filtering
- Observation: Many branches are biased in one direction (e.g., 99% taken)
- Problem: These branches *pollute* the branch prediction structures → make the prediction of other branches difficult by causing “interference” in branch prediction tables and history registers
- Solution: Detect such biased branches, and predict them with a simpler predictor (e.g., last time, static, ...)
Reducing Interference: Gshare
- Idea 1: Randomize the indexing function into the PHT such that probability of two branches mapping to the same entry reduces
- Gshare predictor: GHR hashed with the Branch PC
+ Better utilization of PHT
+ More context information
- Increases access latency
Reducing Interference: Agree Predictor
- Idea 2: Agree prediction
- Each branch has a “bias” bit associated with it in BTB
- Ideally, most likely outcome for the branch
- High bit of the PHT counter indicates whether or not the prediction agrees with the bias bit (not whether or not prediction is taken)
+ Reduces negative interference (Why???)
-- Requires determining bias bits (compiler vs. hardware)
Why Does Agree Prediction Make Sense?
- Assume two branches have taken rates of 85% and 15%.
- Assume they conflict in the PHT
Let’s compute the **probability they have opposite outcomes**
- **Baseline predictor:**
- \[ P(b1 \text{ T}, b2 \text{ NT}) + P(b1 \text{ NT}, b2 \text{ T}) \]
- \[ = (85\% \times 85\%) + (15\% \times 15\%) = 74.5\% \]
- **Agree predictor:**
- Assume bias bits are set to T (b1) and NT (b2)
- \[ P(b1 \text{ agree, b2 disagree}) + P(b1 \text{ disagree, b2 agree}) \]
- \[ = (85\% \times 15\%) + (15\% \times 85\%) = 25.5\% \]
- Works because most branches are biased (not 50% taken)
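The two probabilities above can be checked directly (a sketch; `t1` and `t2` are the taken rates of the two conflicting branches):

```python
def p_opposite_baseline(t1, t2):
    """Probability two conflicting branches want opposite PHT updates."""
    return t1 * (1 - t2) + (1 - t1) * t2

def p_opposite_agree(t1, t2, bias1=True, bias2=False):
    """Same probability when the PHT stores agreement with per-branch bias bits."""
    a1 = t1 if bias1 else 1 - t1          # P(branch 1 agrees with its bias)
    a2 = t2 if bias2 else 1 - t2
    return a1 * (1 - a2) + (1 - a1) * a2
```

With taken rates 0.85 and 0.15 these evaluate to 0.745 and 0.255, matching the slide; note the baseline expands to 85% × 85% + 15% × 15% because the second branch is *not taken* 85% of the time.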
Reducing Interference: Gskew
- **Idea 3: Gskew predictor**
- Multiple PHTs
- Each indexed with a different type of hash function
- Final prediction is a majority vote
+ Distributes interference patterns in a more randomized way (interfering patterns less likely in different PHTs at the same time)
-- More complexity (due to multiple PHTs, hash functions)
More Techniques to Reduce PHT Interference
- The bi-mode predictor
- Separate PHTs for mostly-taken and mostly-not-taken branches
- Reduces negative aliasing between them
- The YAGS predictor
- Use a small tagged “cache” to predict branches that have experienced interference
- Aims to not to mispredict them again
- Alpha EV8 (21464) branch predictor
Can We Do Better: Two-Level Prediction
- Last-time and 2BC predictors exploit only “last-time” predictability for a given branch
- Realization 1: A branch’s outcome can be correlated with other branches’ outcomes
- Global branch correlation
- Realization 2: A branch’s outcome can be correlated with past outcomes of the same branch (in addition to the outcome of the branch “last-time” it was executed)
- Local branch correlation
Local Branch Correlation
for (i=1; i<=4; i++) { }
If the loop test is done at the end of the body, the corresponding branch will execute the pattern \((1110)^n\), where 1 and 0 represent taken and not taken respectively, and \(n\) is the number of times the loop is executed. Clearly, if we knew the direction this branch had gone on the previous three executions, then we would always be able to predict the next branch direction.
More Motivation for Local History
- To predict a loop branch “perfectly”, we want to identify the last iteration of the loop
- By having a separate PHT entry for each local history, we can distinguish different iterations of a loop
- Works for “short” loops
Capturing Local Branch Correlation
- **Idea:** Have a per-branch history register
- Associate the predicted outcome of a branch with “T/NT history” of the same branch
- Make a prediction based on the outcome of the branch the last time the same local branch history was encountered
- Called the local history/branch predictor
- Uses two levels of history (Per-branch history register + history at that history register value)
Two Level Local Branch Prediction
- **First level:** A set of local history registers (N bits each)
- Select the history register based on the PC of the branch
- **Second level:** Table of saturating counters for each history entry
- The direction the branch took the last time the same history was seen
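The two-level structure above can be sketched as follows. This is a toy model with made-up table sizes; run on the \((1110)^n\) loop pattern from the earlier slide, it predicts every outcome correctly once the tables warm up.

```python
N_HIST = 16        # number of local history registers (first level)
H_BITS = 3         # history length: enough to capture the (1110)^n pattern
pht = [1] * (1 << H_BITS)   # 2-bit counters per history value (second level)
lhr = [0] * N_HIST          # per-branch local history registers

def predict(pc):
    # First level selects the history register by PC; second level
    # reads the counter for that history value.
    return pht[lhr[pc % N_HIST]] >= 2

def update(pc, taken):
    h = lhr[pc % N_HIST]
    pht[h] = min(3, pht[h] + 1) if taken else max(0, pht[h] - 1)
    # Shift the outcome into the branch's local history.
    lhr[pc % N_HIST] = ((h << 1) | taken) & ((1 << H_BITS) - 1)

# A loop branch executing the pattern 1110 repeatedly: after warmup,
# three bits of local history identify the last iteration perfectly.
pattern = [1, 1, 1, 0] * 8
miss = 0
for k, t in enumerate(pattern):
    if k >= 8:                      # skip the warmup iterations
        miss += predict(0x40) != bool(t)
    update(0x40, t)
```

After warmup, histories 110, 101, and 011 map to counters predicting taken, while 111 predicts not taken, so `miss` stays at zero.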
Two-Level Local History Branch Predictor
[Figure: the two-level local predictor in the fetch engine. The program counter (address of the current instruction) selects a per-branch history register recording which directions earlier instances of this branch went; that history indexes the direction predictor (2-bit counters) to produce "taken?". In parallel, the PC probes the BTB (Branch Target Buffer, a cache of target addresses) for "hit?" and the target address. The next fetch address is the predicted target on a predicted-taken BTB hit, otherwise PC + instruction size.]
Two-Level Predictor Taxonomy
- BHR can be global (G), per set of branches (S), or per branch (P)
- PHT counters can be adaptive (A) or static (S)
- PHT can be global (g), per set of branches (s), or per branch (p)
Can We Do Even Better?
- Predictability of branches varies
- Some branches are more predictable using local history
- Some using global
- For others, a simple two-bit counter is enough
- Yet for others, a bit is enough
Observation: There is heterogeneity in predictability behavior of branches
- No one-size fits all branch prediction algorithm for all branches
Idea: Exploit that heterogeneity by designing heterogeneous branch predictors
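For reference, the "simple two-bit counter" (2BC) mentioned above is just a saturating up/down counter per branch. A minimal sketch, with the initial state chosen arbitrarily as weakly not-taken:

```python
class TwoBitCounter:
    """States 0..3; state >= 2 predicts taken.
    It takes two mispredictions in a row to flip the prediction."""
    def __init__(self, state=1):   # weakly not-taken (an assumption)
        self.state = state

    def predict(self):
        return self.state >= 2     # True = predict taken

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A mostly-taken branch: the single not-taken outcome does not flip
# the prediction, which is the hysteresis the 2BC provides.
ctr = TwoBitCounter()
hits = 0
for taken in [True, True, False, True, True]:
    hits += (ctr.predict() == taken)
    ctr.update(taken)
```

On this short sequence the counter mispredicts only the first outcome (cold start) and the lone anomaly, matching the "works because most branches are biased" observation.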
Hybrid Branch Predictors
- **Idea:** Use more than one type of predictor (i.e., multiple algorithms) and select the “best” prediction
- E.g., hybrid of 2-bit counters and global predictor
- **Advantages:**
- Better accuracy: different predictors are better for different branches
- Reduced *warmup* time (faster-warmup predictor used until the slower-warmup predictor warms up)
- **Disadvantages:**
- Need “meta-predictor” or “selector”
- Longer access latency
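A minimal sketch of the selection mechanism. This is my toy model: the two components here are stand-in static predictors; a real tournament design pairs, e.g., a local and a global history predictor, as in the Alpha 21264.

```python
class Static:
    """Stand-in component predictor that always predicts one direction."""
    def __init__(self, val):
        self.val = val
    def predict(self, pc):
        return self.val
    def update(self, pc, taken):
        pass

class Hybrid:
    def __init__(self, pred_a, pred_b, size=1024):
        self.a, self.b = pred_a, pred_b
        self.meta = [1] * size          # 2-bit selector counters: >= 2 favors b

    def predict(self, pc):
        sel = self.meta[pc % len(self.meta)]
        return self.b.predict(pc) if sel >= 2 else self.a.predict(pc)

    def update(self, pc, taken):
        pa, pb = self.a.predict(pc), self.b.predict(pc)
        i = pc % len(self.meta)
        if pa != pb:                    # train the selector only on disagreement
            if pb == taken:
                self.meta[i] = min(3, self.meta[i] + 1)
            else:
                self.meta[i] = max(0, self.meta[i] - 1)
        self.a.update(pc, taken)
        self.b.update(pc, taken)

tourney = Hybrid(Static(True), Static(False))
for _ in range(4):
    tourney.update(0x10, True)    # taken branch: selector learns to favor a
for _ in range(4):
    tourney.update(0x20, False)   # not-taken branch: selector favors b
```

The per-branch meta counters are what make the "different predictors are better for different branches" advantage concrete: each branch independently converges to its better component.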
Alpha 21264 Tournament Predictor
- Minimum branch penalty: 7 cycles
- Typical branch penalty: 11+ cycles
- 48K bits of target addresses stored in I-cache
- Predictor tables are reset on a context switch
Are We Done w/ Branch Prediction?
- Hybrid branch predictors work well
- E.g., 90-97% prediction accuracy on average
- Some “difficult” workloads still suffer, though!
- E.g., gcc
- Max IPC with tournament prediction: 9
- Max IPC with perfect prediction: 35
Some Other Branch Predictor Types
- Loop branch detector and predictor
- Loop iteration count detector/predictor
- Works well for loops with small number of iterations, where iteration count is predictable
- Used in Intel Pentium M
- Perceptron branch predictor
- Learns the *direction correlations* between individual branches
- Assigns weights to correlations
- Hybrid history length based predictor
- Uses different tables with different history lengths
Intel Pentium M Predictors
The advanced branch prediction in the Pentium M processor is based on the branch predictor of the Intel Pentium® 4 processor [6]. On top of that, two additional predictors were added to capture special program flows: a Loop Detector and an Indirect Branch Predictor.
Figure 2: The Loop Detector logic
Figure 3: The Indirect Branch Predictor logic
Perceptron Branch Predictor (I)
- **Idea:** Use a perceptron to learn the correlations between branch history register bits and branch outcome.
- **A perceptron learns a target Boolean function of N inputs**
- Each branch associated with a perceptron
- A perceptron contains a set of weights $w_i$
- Each weight corresponds to a bit in the GHR
- How much the bit is correlated with the direction of the branch
- Positive correlation: large + weight
- Negative correlation: large - weight
- Prediction:
- Express GHR bits as 1 (T) and -1 (NT)
- Take dot product of GHR and weights
- If output > 0, predict taken
Perceptron Branch Predictor (II)
Prediction function: the dot product of the GHR (bits expressed as +1 for taken and -1 for not taken) and the perceptron weights, with the output compared to 0. The bias weight \(w_0\) captures the bias of the branch independent of the history.

\[ y = w_0 + \sum_{i=1}^{n} x_i w_i \]

Training function:

\[
\text{if } \operatorname{sign}(y_{out}) \neq t \text{ or } |y_{out}| \leq \theta \text{ then for } i := 0 \text{ to } n \text{ do } w_i := w_i + t x_i
\]
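These two functions translate almost line-for-line into code. In this sketch the history length is illustrative, and the threshold \(\theta = \lfloor 1.93h + 14 \rfloor\) follows the value suggested by Jiménez and Lin; both are tunable choices, not fixed parts of the scheme.

```python
N = 8                        # history length h (illustrative)
THETA = int(1.93 * N + 14)   # training threshold
w = [0] * (N + 1)            # w[0] is the bias weight

def predict(ghr):
    """ghr: list of N past outcomes encoded as +1 (taken) / -1 (not taken).
    Returns y; predict taken when y > 0."""
    return w[0] + sum(x * wi for x, wi in zip(ghr, w[1:]))

def train(ghr, taken):
    t = 1 if taken else -1
    y = predict(ghr)
    # Train on a wrong sign, or whenever the magnitude is below theta.
    if (y >= 0) != taken or abs(y) <= THETA:
        w[0] += t                      # x_0 is implicitly 1
        for i, x in enumerate(ghr):
            w[i + 1] += t * x

# A toy trace where the outcome follows the history bits: the perceptron
# learns large-magnitude weights for the correlated positions.
p_t = [1, -1, 1, -1, 1, -1, 1, -1]
p_n = [-1, 1, -1, 1, -1, 1, -1, 1]
for _ in range(10):
    train(p_t, True)
    train(p_n, False)
```

Note the linear-separability limitation from the next slide shows up directly here: no weight vector can make `predict` compute an XOR of two history bits.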
Perceptron Branch Predictor (III)
- **Advantages**
+ More sophisticated learning mechanism → better accuracy
- **Disadvantages**
-- Hard to implement (adder tree to compute perceptron output)
-- Can learn only linearly-separable functions
e.g., cannot learn XOR type of correlation between 2 history bits and branch outcome
Prediction Using Multiple History Lengths
- **Observation**: Different branches require different history lengths for better prediction accuracy
- **Idea**: Have multiple PHTs indexed with GHRs with different history lengths and intelligently allocate PHT entries to different branches
---
State of the Art in Branch Prediction
- See the Branch Prediction Championship
Figure 1. The TAGE-SC-L predictor: a TAGE predictor backed with a Statistical Corrector predictor and a loop predictor
Another Direction: Helper Threading
- **Idea:** Pre-compute the outcome of the branch with a separate, customized thread (i.e., a helper thread)
Branch Confidence Estimation
- **Idea:** Estimate if the prediction is likely to be correct
- i.e., estimate how “confident” you are in the prediction
- **Why?**
- Could be very useful in deciding how to speculate:
- What predictor/PHT to choose/use
- Whether to keep fetching on this path
- Whether to switch to some other way of handling the branch, e.g. dual-path execution (eager execution) or dynamic predication
- ...
How to Estimate Confidence
- An example estimator:
- Keep a record of correct/incorrect outcomes for the past $N$ instances of the “branch”
- Based on the correct/incorrect patterns, guess if the current prediction will likely be correct/incorrect
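One possible realization of such an estimator (my sketch; the window size `N` and the threshold are arbitrary choices):

```python
N = 4                 # remember the last N prediction results per branch
history = {}          # pc -> list of booleans (True = prediction was correct)

def record(pc, was_correct):
    history.setdefault(pc, []).append(was_correct)
    del history[pc][:-N]          # keep only the most recent N results

def confident(pc, threshold=3):
    """High confidence when at least `threshold` of the last N
    predictions for this branch were correct."""
    recent = history.get(pc, [])
    return len(recent) == N and sum(recent) >= threshold

for _ in range(4):
    record(0x40, True)            # four correct predictions in a row
```

Pipeline gating, mentioned below, would then throttle fetch when too many in-flight branches report low confidence.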
What to Do With Confidence Estimation?
- An example application: Pipeline Gating
Issues in Fast & Wide Fetch Engines
I-Cache Line and Way Prediction
- Problem: Complex branch prediction can take too long (many cycles)
- Goal
- Quickly generate (a reasonably accurate) next fetch address
- Enable the fetch engine to run at high frequencies
- Override the quick prediction with more sophisticated prediction
- Idea: Get the predicted next cache line and way at the time you fetch the current cache line
- Example Mechanism (e.g., Alpha 21264)
- Each cache line tells which line/way to fetch next (prediction)
- On a fill, line/way predictor points to next sequential line
- On branch resolution, line/way predictor is updated
- If line/way prediction is incorrect, one cycle is wasted
Figure 3. Alpha 21264 instruction fetch. The line and way prediction (wrap-around path on the right side) provides a fast instruction fetch path that avoids common fetch stalls when the predictions are correct.
Issues in Wide Fetch Engines
- Wide Fetch: Fetch multiple instructions per cycle
- Superscalar
- VLIW
- SIMT (GPUs’ single-instruction multiple thread model)
Wide fetch engines suffer from the branch problem:
- How do you feed the wide pipeline with useful instructions in a single cycle?
- What if there is a taken branch in the “fetch packet”?
- What if there are “multiple (taken) branches” in the “fetch packet”?
Fetching Multiple Instructions Per Cycle
- Two problems
1. **Alignment** of instructions in I-cache
- What if there are not enough \((N)\) instructions in the cache line to supply the fetch width?
2. **Fetch break**: Branches present in the fetch block
- Fetching sequential instructions in a single cycle is easy
- What if there is a control flow instruction in the \(N\) instructions?
- Problem: *The direction of the branch is not known but we need to fetch more instructions*
- These can cause effective fetch width \(<\) peak fetch width
Wide Fetch Solutions: Alignment
- **Large cache blocks**: Hope N instructions contained in the block
- **Split-line fetch**: If address falls into second half of the cache block, fetch the first half of next cache block as well
- Enabled by banking of the cache
- Allows sequential fetch across cache blocks in one cycle
- Intel Pentium and AMD K5
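The alignment condition is simple address arithmetic. A sketch with illustrative sizes (32-byte cache blocks, 16-byte / 4-instruction fetch width):

```python
BLOCK = 32   # cache block size in bytes (illustrative)
WIDTH = 16   # fetch width in bytes (illustrative)

def blocks_touched(fetch_addr):
    """How many cache blocks a fetch starting at fetch_addr spans.
    A result of 2 means a split-line fetch is needed."""
    first = fetch_addr // BLOCK
    last = (fetch_addr + WIDTH - 1) // BLOCK
    return last - first + 1
```

Any fetch starting in the last `WIDTH - 1` bytes of a block spans two blocks; banking lets both halves be read in the same cycle, after which alignment logic reassembles them in program order.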
Split Line Fetch
[Figure: split-line fetch via cache banking. The memory map interleaves consecutive cache blocks across two banks, so the two halves of a split fetch can be read in the same cycle; alignment logic is needed to rotate the fetched instructions into program order.]
Short Distance Predicted-Taken Branches
[Figure: a banked-cache example. Blocks 0100 and 0101 (bank 0 and bank 1) hold instruction slots A B C D and E F. First iteration: branch B is taken to E, so the fetch packet E F A B C D is assembled starting at E. Second iteration: branch B falls through to C.]
Techniques to Reduce Fetch Breaks
- **Compiler**
- Code reordering (basic block reordering)
- Superblock
- **Hardware**
- Trace cache
- **Hardware/software cooperative**
- Block structured ISA
Basic Block Reordering
- Not-taken control flow instructions are not a problem (they cause no fetch break), so make the likely path the not-taken path
- Idea: Convert taken branches to not-taken ones
- i.e., reorder basic blocks (after profiling)
- Basic block: code with a single entry and single exit point
- Code Layout 1 leads to the fewest fetch breaks
Basic Block Reordering
- Advantages:
+ Reduced fetch breaks (assuming profile behavior matches runtime behavior of branches)
+ Increased I-cache hit rate
+ Reduced page faults
- Disadvantages:
-- Dependent on compile-time profiling
-- Does not help if branches are not biased
-- Requires recompilation
Superblock
- Idea: Combine frequently executed basic blocks such that they form a single-entry multiple exit larger block, which is likely executed as straight-line code
+ Helps wide fetch
+ Enables aggressive compiler optimizations and code reordering within the superblock
-- Increased code size
-- Profile dependent
-- Requires recompilation
Superblock Formation (I)
Is this a superblock?
Tail duplication: duplicating the basic blocks after a side entrance to eliminate side entrances → transforms a trace into a superblock.
Superblock Code Optimization Example
Original Code (opB reaches opC through a side entrance)
- **opA**: mul r1<-r2,3
- **opB**: add r2<-r2,1
- **opC**: mul r3<-r2,3
Code After Superblock Formation (tail duplication of opC creates opC’ on the side path)
- **opA**: mul r1<-r2,3
- **opC**: mul r3<-r2,3
- side path: **opB**: add r2<-r2,1, then **opC’**: mul r3<-r2,3
Code After Common Subexpression Elimination (safe inside the superblock, where r2 is unchanged between opA and opC)
- **opA**: mul r1<-r2,3
- **opC**: mov r3<-r1
- side path: **opB**: add r2<-r2,1, then **opC’**: mul r3<-r2,3
Techniques to Reduce Fetch Breaks
- Compiler
- Code reordering (basic block reordering)
- Superblock
- Hardware
- Trace cache
- Hardware/software cooperative
- Block structured ISA
Trace Cache: Basic Idea
- A trace is a sequence of executed instructions.
- It is specified by a start address and the branch outcomes of control transfer instructions.
- Traces repeat: programs have frequently executed paths.
- Trace cache idea: Store the dynamic instruction sequence in the same physical location.
(a) Instruction cache.
(b) Trace cache.
Reducing Fetch Breaks: Trace Cache
- Dynamically determine the basic blocks that are executed consecutively
- Trace: Consecutively executed basic blocks
- Idea: Store consecutively-executed basic blocks in physically-contiguous internal storage (called trace cache)

- Basic trace cache operation:
- Fetch from consecutively-stored basic blocks (predict next trace or branches)
- Verify the executed branch directions with the stored ones
- If mismatch, flush the remaining portion of the trace
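The basic operation can be sketched as a dictionary keyed by (start address, predicted branch directions), plus the flush-on-mismatch step. This is my simplified model; real trace caches use tags, partial matching, and a hardware fill unit.

```python
trace_cache = {}   # (start_pc, branch outcomes) -> list of basic blocks

def fill(start_pc, outcomes, blocks):
    """Fill unit stores a completed trace."""
    trace_cache[(start_pc, tuple(outcomes))] = blocks

def fetch(start_pc, predicted):
    """Return the stored trace on a hit, None on a miss."""
    return trace_cache.get((start_pc, tuple(predicted)))

def usable_blocks(blocks, predicted, actual):
    """Keep the first block plus one block per correctly predicted branch;
    flush everything after the first wrong direction."""
    n = 0
    for p, a in zip(predicted, actual):
        if p != a:
            break
        n += 1
    return blocks[:n + 1]

# Trace A -> B -> C recorded with outcomes (taken, not-taken).
fill(0x40, [1, 0], ["A", "B", "C"])
```

The XYZ-vs-XYT problem from the next slide shows up here as two dictionary entries sharing `start_pc` but differing in the outcome tuple: basic blocks X and Y are duplicated across both traces.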
Trace Cache: Example
[Figure: fetch address A accesses the instruction cache and the trace cache in parallel; a line-fill buffer constructs trace cache lines; the instruction latch feeds the instruction buffers; a "hit?" signal selects between the two sources.]
Take output from trace cache if trace cache hit; otherwise, take output from instruction cache.
An Example Trace Cache Based Processor
Multiple Branch Predictor
What Does A Trace Cache Line Store?
- 16 slots for instructions. Instructions are stored in decoded form and occupy approximately five bytes for a typical ISA. Up to three branches can be stored per line. Each instruction is marked with a two-bit tag indicating to which block it belongs.
- Four target addresses. With three basic blocks per segment and the ability to fetch partial segments, there are four possible targets to a segment. The four addresses are explicitly stored allowing immediate generation of the next fetch address, even for cases where only a partial segment matches.
- Path information. This field encodes the number and directions of branches in the segment and includes bits to identify whether a segment ends in a branch and whether that branch is a return from subroutine instruction. In the case of a return instruction, the return address stack provides the next fetch address.
Trace Cache: Advantages/Disadvantages
+ Reduces fetch breaks (assuming branches are biased)
+ No need for decoding (instructions can be stored in decoded form)
+ Can enable dynamic optimizations within a trace
-- Requires hardware to form traces (more complexity) → called fill unit
-- Results in duplication of the same basic blocks in the cache
-- Can require the prediction of multiple branches per cycle
-- If multiple cached traces have the same start address
-- What if XYZ and XYT are both likely traces?
A 12K-uop trace cache replaces the L1 I-cache
- Trace cache stores decoded and cracked instructions
- Micro-operations (uops): returns 6 uops every other cycle
- x86 decoder can be simpler and slower
Required Readings for Next Lecture
- **Required Reading Assignment:**
- Chapter 5 and Chapter 9 of Shen and Lipasti (SnL).
- **Recommended References:**
A Software Surety Analysis Process
Sharon Trauth, Pat Tempel
Prepared by
Sandia National Laboratories
Albuquerque, New Mexico 87185 and Livermore, California 94550
for the United States Department of Energy
under Contract DE-AC04-94AL85000
Approved for public release; distribution is unlimited.
A Software Surety Analysis Process
Sharon Trauth
Pat Tempel
Prepared as part of the
High Consequence System Surety
Process Development Project
Sandia National Laboratories
Albuquerque, NM 87185
Abstract
As part of the High Consequence System Surety project, this work was undertaken to explore one approach to conducting a surety theme analysis for a software-driven system. Originally, plans were to develop a theoretical approach to the analysis, and then to validate and refine this process by applying it to the software being developed for the Weight and Leak Check System (WALS), an automated nuclear weapon component handling system. As with the development of the higher level High Consequence System Surety Process, this work was not completed due to changes in funding levels. This document describes the software analysis process, discusses its application in a software environment, and outlines next steps that could be taken to further develop and apply the approach to real projects.
Contents
Definitions
Introduction
Capturing the Hardware Fault Tree Analysis Process
Software Fault Tree Analysis Process
Future Steps
Summary and Conclusions
Acknowledgments
Bibliography
Figures
Figure 1: Hardware Fault Tree Analysis Process
Figure 2: Software and Hardware Fault Tree Analysis Processes Overlayed with a General Systems Design Approach
Figure 3: A Portion of a Hardware Fault Tree
Figure 4: Example System Structure
Figure 5: Example Software Documentation for Design and Implementation
Figure 6: Example Software Documentation for Design Implementation
Definitions
**High Consequence**
Varies with the operation and customer, but is a consequence judged to be severe, for example resulting in significant loss of investment or loss of life.
**Fault Tree**
An analysis documented in a diagram that indicates the paths through which a fault could occur. Where multiple paths exist, two options for failure are possible. One option is that the failure can be caused by any one of the identified paths singly; this situation is represented by an OR gate connecting the failure paths. The second option is that the failure paths must all occur at the same time for the higher-level failure to occur; this situation is represented by an AND gate connecting the failure paths.
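To make the AND/OR gate semantics concrete, here is a small evaluator. It is an illustration added for this discussion, not part of the report's process; the event names are invented.

```python
def occurs(node, events):
    """Evaluate a fault tree: a node is ("event", name) or a gate
    ("or"/"and", child, child, ...). `events` is the set of basic
    events that have occurred."""
    kind = node[0]
    if kind == "event":
        return node[1] in events
    children = node[1:]
    if kind == "or":    # any single path alone causes the failure
        return any(occurs(c, events) for c in children)
    if kind == "and":   # all paths must occur at the same time
        return all(occurs(c, events) for c in children)
    raise ValueError("unknown node kind: " + kind)

# Hypothetical example: the top event occurs if the sensor fails,
# OR both power paths fail together.
tree = ("or",
        ("event", "sensor_fault"),
        ("and",
         ("event", "primary_power_loss"),
         ("event", "backup_power_loss")))
```

Here a primary power loss alone does not reach the top event (the AND gate blocks it), whereas a sensor fault alone does (the OR gate passes it).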
**Surety**
As defined by the High Consequence System Surety Project Team, surety includes safety, security, control, reliability, and quality.
**System**
For the discussion in this document, system refers to the combined hardware-software end product. Although a systems approach must also include facilities and procedures, these are not explicitly covered in this discussion. However, to the extent possible, the reader may extend applicable principles into the facilities and procedures portions of a system.
**Probabilistic Risk Assessment (PRA)**
A process by which all the potential outcomes of a planned activity are identified, along with the probability of their occurrence and their associated consequence(s).
**Introduction**
Considerable debate exists today regarding whether software can be used in safety critical applications, such as in many weapon components. The prevailing assumption is that, since software is extraordinarily more complex than mechanical hardware, it cannot be analyzed sufficiently well to verify the absence of safety-critical faults. Consequently, the approach taken today is that software not be exclusively used in such applications, but rather coupled with analyzable, characterizable mechanical devices whose behavior can be well predicted under the environments of concern and whose presence will guarantee the device failure in a safe state under a given environment. In the development of such mechanical devices for weapon systems, e.g., stronglink safing devices, a fairly well-understood, iterative, but minimally documented process is used to assure that component level design approaches are sound and justified and that components and piece parts will continue to meet established safety requirements during the production phase of the product life cycle. This component-level process integrates with a comprehensive approach applied at the system level to assure overall safety is achieved and maintained.
The work described in this paper was undertaken to explore the extension and applicability of the structured methodology used in mechanical safing devices to software. The approach considered here is not intended to imply that software can be used in safety critical applications, but rather that when the process is successfully applied the results may offer the design engineer a better set of information from which to make design decisions regarding whether additional safety features are really needed, and why. Further, the approach is intended to ultimately offer a comprehensive documentation scheme that will identify, for current and future responsible engineers, exactly which aspects of the software are critical, why they are critical, the testing done to verify the design approach implemented in the software, and the potential ramifications if changes were to be incorporated. Without such an approach, even though extensive work may have been done to assure the absence of critical defects, once a change is introduced, there may be no way of knowing what analyses or tests need to be repeated to ensure continued absence of critical defects.
The approach described was developed by first capturing and documenting the existing approach used in the mechanical design arena, followed by detailed exploration of its extension to software, with an emphasis on walking through the extension applied to specific software examples. The discussion presented in this paper flows from that exploratory work, so that the methodology is presented and the implications explored by walking through specific, simple software examples. Since the actual project under which this work began was redirected, completion of the documentation approach did not occur. The discussion in this paper therefore focuses on the work completed and provides a discussion of what next steps should occur to fully develop the methodology and demonstrate its applicability to a real software project.
**Capturing the Hardware Fault Tree Analysis Process**
The process presently used during the component level design of mechanical safing devices was reviewed extensively with design engineers, both at the component level itself and at the system level to assure adequate integration. The resulting process documentation is shown in Figure 1.
The process begins at the complete system level, in the conceptual phase of the project, wherein the critical safety requirements are explored, established, and extended to the next level component subsystems (such as the firing or aft subsystems). The preliminary
Figure 1: Hardware Fault Tree Analysis Process
[Figure 1 is a flow chart involving component designers, next-assembly designers, and production agency personnel (participating as required). After a consensus check, the steps are: identify subsystem/component failure mode scenarios; identify failure-enabling design details; revise events based on credibility; develop the component-level fault tree (normal/abnormal); identify base events; peer review. Continuing: identify /S/ features on preliminary documents; assign probabilities by normal/abnormal (completed with PRA and reliability personnel); develop validation test plans; generate the safety document; develop acceptability maintenance requirements (iterating with the production agency); finalize safety-critical features /S/ documentation; monitor acceptability as required.]
conceptual approaches to assure safety against the critical scenarios are developed, reviewed, solidified, and then extended once again to the next level deeper in detail for the "significant" components.
Throughout this iterative process, systems, subsystems, and component level engineers discuss requirements, potential failure modes of the system, design alternatives, and associated implications. Ultimately, safety requirements are partitioned, extrapolated, and extended to the individual component levels for detailed design to proceed. From there, the iterative process continues, but at a more detailed level, until component design is completed, failure mode analysis is finished, and validation of the design is successfully implemented.
The process then focuses on the production and long term concerns for the device and its piece parts, homing in on the identification of specific features (contours, devices, material properties, etc.) which, if not produced correctly, could allow a fault state to occur with ultimate potential for an unsafe system condition.
**Software Fault Tree Analysis Process**
In exploring the extension of the process depicted in Figure 1 into the software arena, it was found that virtually all approaches, on a conceptual basis, could be applicable to software. As the software fault tree analysis process emerged, it became obvious that the hardware and software processes worked in parallel during the general product development process. Figure 2 illustrates a consolidated version of the hardware process depicted in Figure 1 (top path) coupled with the counterpart process for software (bottom path). Also shown in Figure 2 is a generalized product development process (middle path), and how the hardware and software fault tree analysis processes overlay and integrate with development. The following discussion explores each facet of the software fault tree analysis process, discusses its intent and implications, and illustrates its applicability to software through a specific, though simplified, software example. In the discussion that follows, the text refers to the title of a box within the flow of Figure 2 by using boldface type.
The software fault tree analysis process is also coupled with the High Consequence System Surety (HCS²) Process (ref: SAND 94-3223) in the following ways. The boxes on the first page of Figure 2, up to the joint requirements review, provide more detail on the actions of the HCS² process through its decision step on whether the surety theme is acceptable. The steps of Figure 2 are principally applicable to the lower level subsystems developed to meet a higher system (or end product) need. To understand these interactions, consider an aircraft being developed for delivery to the commercial airline industry. The airplane represents the higher level system depicted in the HCS² process, while the guidance system would represent the system (or subsystem) level for the process in Figure 2. The remaining steps in Figure 2 represent the tasks undertaken in the Conduct Surety Theme Analysis step of the HCS² process, as coupled with a generalized product development process and as applied to a subportion, or component, of the higher level "system" represented in the HCS² process diagram. To complete the development of the overall higher level system, the general approach indicated in Figure 2 would be applied to all subsystems and integrated together for the overall systems approach. This aspect is indicated by the Integrate Elements step and subsequent steps identified in the HCS² process.
Figure 2: Software and Hardware Fault Tree Analysis Processes Overlayed with a General Systems Design Approach
[Figure 2, a multi-page flowchart, overlays the parallel fault tree analysis processes on a generalized development flow. The hardware path includes: identify failure mode scenarios and enabling design details; revise events based on credibility; develop detailed design for hardware; develop the preliminary component fault tree; refine the hardware fault tree and identify base events; implement the hardware design; develop the hardware validation test plan; and conduct hardware validation/development testing of components. The software path mirrors it: identify failure mode scenarios and enabling design details; revise software events based on credibility; develop detailed design for software; develop the software development plan; develop the preliminary software fault tree; conduct a software review of lower level events; refine the software fault tree and identify base events; implement the software design; and conduct software validation/development testing of modules. Joint steps tie the two paths together: a joint review of component level and lower level events; peer review for consensus; preliminary identification of critical components; assignment of probabilities and confidence levels; a joint validation test plan and joint validation testing; development and finalization of the surety document; definition of maintenance or acceptance requirements; finalization of hardware and surety critical software documentation; and monitoring as needed.]
The analysis process begins at the integrated hardware/software (system) level during the early conceptual stages of the product. At this point in time, the system requirements may be available either in draft or final form. When requirements are still draft, the process will be less formal and will likely result in considerable iteration through the initial stages of the analysis at the system level.
An important first step in the fault tree process is the identification of the high consequence event(s). A high consequence event is one whose consequence is judged to be severe enough to cause significant loss of investment or loss of life. It is important to establish the boundaries of the software being studied and to examine failures within those boundaries that might lead to a high consequence event. This means that when a specific software function is defined, it needs to be stated in terms of its boundaries: what input conditions are valid, who has authorization, when it should not be performed, and so on. Situations that could lead to possible critical failures include: (1) a system performing a function that it should not perform; (2) a system failing to perform a function it is supposed to perform; or (3) a system successfully performing a function under the specific conditions when the function should not be performed. A software system which operates a valve it is not supposed to operate (1), which fails to maintain a required door lock (2), or which provides access to an unauthorized user on February 29 (3), are all examples of potentially critical, high consequence events.
In identifying the event(s), the requirements and possible use environments for the system form the basis for future analysis. Typically, representatives from a broad experience base would engage in dialog to identify the event(s). Human error is considered an enabler for a high consequence event. When multiple events are identified, as with a possible safety critical event and one which has only reputational consequences, they may be prioritized based on the potential significance of their undesirable outcomes. Multiple events could be identified for any of the surety concerns, such as quality, reliability, safety, security, and control. Analysis proceeds for one single event at a time.
The analysis proceeds with a review of the system requirements vs. design concepts for their expected ability to meet these requirements, and an analysis to identify failure modes. This portion of the process is intended to identify possible ways failures could occur which would bring about the occurrence of the undesired high consequence event. Often these activities are conducted concurrently, perhaps in a single meeting. The activities may be conducted more independently as the complexity and formality of the project warrants. The participants in these reviews are typically the systems engineers, as well as whichever component engineers can already be identified given the degree of specificity in the preliminary conceptual design. For example, in the first of these meetings, the review and requirements examination may be conducted by the systems engineers in conjunction with the next level subsystem designers. Subsequently, as the details of the design unfold, additional meetings may be needed involving deeper level component engineers together with next assembly and systems engineers. In a project of this layered complexity, it is likely that several meetings would be needed, iteratively, throughout the initial development stages as more detail becomes known about both specific requirements and design concepts.
When reviewing the system requirements, the participants try to gain a deep understanding of the surety requirements, evaluate their achievability, and prioritize their potential consequences. Surety requirements need to include what the system will not do, and the associated performance specifications. While it is often easy to develop and verify software which will successfully do what you want it to do, it is often not easy to verify that it will not perform that function when it should not be permitted. Thus, the circumstances under which the undesired function is not to be performed need to be well defined and understood. To illustrate this point, a requirement might state that the software is to permit access when the user enters the correct numerical sequence. That the software successfully achieves this requirement is measurable. But without a specification regarding what is to happen when an incorrect sequence is entered, it is even possible that access could be granted, say, when the user enters an alphabetic sequence. At this stage in the development process, there is likely to be variability in the amount of information both known and specified about a given requirement. In addition, some aspects will be better specified at a deeper level, i.e., at the software equivalent of a component level. Thus, the participants will need to resolve such variations and agree on the necessary specificity to properly define the system.
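The underspecified-requirement hazard above can be made concrete with a small sketch. Everything below is hypothetical (the function name, the numeric-sequence rule, and the stored value are illustrative, not from the source); the point is the structure: every path other than the single specified success condition falls through to denial.

```python
def check_access(entered: str, stored: str) -> bool:
    """Default-deny access check: access is granted only on the single
    fully specified success path; every other input, including an
    alphabetic sequence, falls through to denial."""
    # Reject anything outside the specified input domain (numeric sequences).
    if not entered.isdigit():
        return False
    # Grant access only on an exact match with the stored sequence.
    return entered == stored
```

With this shape, the behavior for an alphabetic entry is no longer an accident of the implementation: it is pinned down by the explicit domain check.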
In any case, the hardware portion of the review will focus on a clear understanding of the physical boundaries of the system and the system interfaces. Participants develop an understanding of the functions, environment and the operations of the system, specifying both normal (environments the system is expected to operate in) and abnormal (environments that are outside the system's operating range) environments. Similarly, for software, the participants reach an understanding of what constitutes normal use conditions (correct expected input data, task or event sequence), and what will be considered as abnormal input (incorrect data, inappropriate task or event sequence). Special attention will need to be placed on defining the software's performance in the event that an abnormal condition occurs.
During this review the preliminary design concepts will be examined along with details regarding operation and maintenance. This portion of the review focuses on establishing engineering confidence that the system design can be expected to meet the surety requirements that exist and their associated understanding. Participants will be trying to establish the potential for specific design concepts and approaches to reduce the occurrence of system failures.
As the requirements and design concepts solidify, the team can begin to analyze failure modes. Portions of the design whose failure would have a potentially significant, or high consequence, are examined in as much detail as possible. This examination is usually a coordinated brainstorming session attended by system experts to develop a list of undesired events, including possible hardware-initiated failure modes. For each possible failure, the team identifies the potential consequences of the malfunction, tracing through the event failure, coupled with local interactions, and extended to next level functions. For example, it might be noted that permitting access to an individual who enters the incorrect password is the undesirable event. The team would analyze the possible ways the system might permit this to happen. As an example, the team might determine that one possible way the fault could occur is for the software to correctly determine the password was invalid, but give the user access anyway. Another way might be that the software correctly determines the incorrect password and denies access by not sending the enable signal to the mechanical lock, but that access is provided anyway if the mechanical lock was left in the enabling position after completion of the previous correct user functions. Thus, the focus at this stage of the analysis is on the combined system level performance. However, when specific functions within the design concepts have been allocated to software or to hardware, the analysis can focus on the associated specifics to the extent possible. Since this portion of the analysis involves component-level designers (both hardware and software) and occurs before any software implementation, it may be possible to avoid the selection of software implementation approaches which could lead to the unwanted problems. After all credible events and potential failures are listed, the events would be prioritized starting with the most critical event.
As a result of the previous interrelated and sometimes concurrent activities, the team determines the system failure events. For each area of major concern, such as security, reliability, safety, or control, the top level event is determined. Examples are: incorrect X-ray level causes patient death; equipment malfunction causes fabrication shutdown and substantial profit loss; or improper signal causes inadvertent detonation. In order to avoid any possible confusion, analyses should proceed separately for each of the identified events, since a failure mode causing an explosion may be caused by a completely different aspect of the design than one that leads to inadvertent access. The top event(s) is (are) then identified in a fault tree diagram. This step is typically done by a team including system designers, some component or subsystem designers, and software developers, and often includes representatives from the production team or agency and possibly software maintenance staff.
Often the identification of the top level event(s) is done in conjunction with completing the fault tree analysis at a high level. The fault tree is a graphical model of event combinations that can lead to the occurrence of a specific hazard or event. The system fault tree events are developed by successively breaking down events into lower level events. The analysis may take the team down to failure events for a subsystem and several specific components before the component design actually proceeds and a detailed component level fault tree is generated. The information developed up to this point is then entered into a system fault tree diagram using one of several available software packages. Event numbers are assigned during the process of generating the fault tree: each event receives a unique designation consisting of a letter and a series of digits. The "higher" level event is the output of a gate and the "lower" events are its inputs. Each event number is unique to a given fault tree. Figure 3 illustrates this documentation convention for an actual hardware safing device.
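One way to make the event-numbering and gate conventions concrete is a small data structure. The sketch below is an illustration only (the `Event` class, the event codes, and the example events are invented; real analyses use dedicated fault tree packages): each node carries a unique code, and a gate's output event holds its input events as children.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    """A fault tree node: a uniquely coded event whose lower level
    inputs are combined through an AND or OR gate."""
    code: str                      # unique designator, e.g. "E3" or "T001"
    description: str
    gate: str = "BASE"             # "AND", "OR", or "BASE" (no inputs)
    inputs: List["Event"] = field(default_factory=list)

# Hypothetical fragment: the "higher" event E1 is the output of an OR gate
# whose "lower" input events are E2 and E3.
top = Event("E1", "Unauthorized valve opening", "OR", [
    Event("E2", "Software authorizes invalid user", "AND", [
        Event("M1", "Stale input retained in temporary storage"),
        Event("M2", "No reset between user sessions"),
    ]),
    Event("E3", "Mechanical lock left enabled"),
])
```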
After the preliminary fault tree is developed as just discussed, a review of the system fault tree is conducted with necessary personnel to verify the events and make any necessary changes. Once again, this is generally conducted jointly at this early stage in the development process. Again, representatives from the systems, component (both hardware and software), and production or maintenance organizations are generally included in the review.
At this point, enough should be understood about the system to begin a decomposition of the conceptual design into the various hardware components which will be needed and which will require further design and development. For example, a system might require a motor, a cooling subsystem, a pumping subsystem, and a monitoring subsystem.
It is also possible to break the system into software components as well. It may be less obvious to the reader how this software decomposition might proceed, so we will consider what might happen by exploring an example which begins with a software conceptual design.
Suppose we were to design a system which opens a valve when an authorized operator requests the valve to be opened. For such a system, we would expect the hardware components to consist of the valve and the computer system which authenticates the operator and opens the valve. Conceptually, the software would perform the authentication and initiate the opening of the valve. This type of preliminary concept begins a high level functional allocation to the hardware and software portions of the system.
[Figure 3: fault tree documentation convention, illustrated for an actual hardware safing device. The excerpt shows a top event "Subsystem Fails" decomposed through events such as "DSSL Fails To Provide Isolation By Abnormal Environment", "DSSL Housing Material Thickness", and "Stronglink/Weaklink Thermal Race Fails", with event codes T001-10, D8, D9, and E3-E5 combined through gates OR1-8 and AND1.]

Proceeding further, the software might include a module which captures input from the operator, one which authenticates the operator, and one which causes the valve to open. Were we to proceed further we might look in more depth at the authentication component (or module) of the software and determine that it would need a component that retrieves the correct user identification data, one which retrieves the user input from the module which captured user data, one which compares the two sets of data for a match, and one which returns the message regarding user authentication or an error message. Figure 4 illustrates this example. Although this example is rather simplistic, it will be referred to later to illustrate some further concepts in this methodology.
**Figure 4: Example System Structure**
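The decomposition of Figure 4 can be sketched as a set of software modules. Everything below is hypothetical (module names, message strings, and the dictionary-based user directory are illustrative choices, not part of the source design); each function corresponds to one box of the hierarchy.

```python
from typing import Optional

def capture_operator_input(raw: str) -> str:
    """Capture Input module: records what the operator entered."""
    return raw.strip()

def retrieve_stored_id(user: str, directory: dict) -> Optional[str]:
    """Retrieves the correct stored identification data for this user."""
    return directory.get(user)

def authenticate(entered: str, stored: Optional[str]) -> str:
    """Compares the two sets of data and returns an authentication
    message or an error message."""
    if stored is not None and entered == stored:
        return "AUTHENTICATED"
    return "ERROR: authentication failed"

def open_valve(auth_message: str, valve: dict) -> None:
    """Open Valve module: acts only on an explicit authentication message."""
    if auth_message == "AUTHENTICATED":
        valve["open"] = True
```

Note that nothing in this decomposition resets state between operators, which is exactly the kind of gap the design review against requirements is meant to uncover.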
Even though at this point the software design is mostly high level and conceptual, it is important to begin adopting a mindset focused on identifying inherent weaknesses, or faults, in the approach. At this point, a brainstorming session could be employed to identify what could go wrong at the subsystem or component level and to refine the design hierarchy.
As development of the software design progresses, there comes a point where it is believed that all major module/component decomposition has been identified. A **review of the design against the software requirements** to assure that there are no glaring oversights or deficiencies would be conducted using the completed hierarchy. This review should include appropriate software development staff and is intended to identify any problems requiring resolution. For example, from Figure 4, focusing on the authenticate operator component, the review session could reveal that an additional component is needed to "reset" the system, so that the next operator could not open the valve using the previous operator's authorization. This need may be uncovered during discussion of the possible ways that the software could allow an unauthorized individual to open the valve.
The participants in this review should all have a clear understanding of the software design approach, modules, functional allocation, and requirements. Participants should focus on verifying that the proposed design concepts are expected to meet the requirements, and that there are no obvious flaws in the design logic and algorithms which could lead to unwanted high consequence events.
Assigning probabilities starts at the system level, but often the numeric allocation is made at the subsystem or component level. Probabilistic Risk Assessment (PRA) and Reliability personnel have thus far generally assigned the probabilities. For hardware safety systems, the probabilities may be assigned based on the definition of normal and abnormal environments. For hardware reliability concerns, a system reliability figure is established and failure mechanisms are identified. Often, a reliability model is compiled from a similar system or collection of similar components and is used to assign a specific reliability number to a particular subsystem or component.
To date, software has been deemed too complex, and too lacking in failure mode models that are themselves reliable to a high confidence level, for meaningful probabilities to be assigned. Thus, software is typically assigned a failure probability of one and the numeric analysis proceeds for hardware exclusively. Methods for allocating probabilities other than 1 to software need further research; in fact, later discussion illustrates one approach which may prove useful in identifying failure probabilities (and conversely reliabilities) for software. In order to proceed with the analysis for software, the process temporarily assumes a failure probability of 1 for the hardware, so that the fault paths that arise in the software portion of the system may be examined freely and thoroughly. Symmetrically, the analysis also proceeds with the traditional failure probability of 1 assigned to the software so that the hardware fault paths may be thoroughly examined and identified.
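Once probabilities are assigned, the standard quantitative treatment combines them through the gates: for independent base events, an AND gate takes the product and an OR gate takes the complement of the product of complements. The sketch below illustrates how the conventional probability-of-1 assignment for software makes a combined path's number depend only on its hardware events (the numeric values are invented for illustration).

```python
def p_and(probs):
    """Probability that all inputs of an AND gate occur (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(probs):
    """Probability that at least one input of an OR gate occurs."""
    none_occur = 1.0
    for p in probs:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

# Conventional treatment from the text: the software base event is
# pessimistically assigned probability 1, so this AND path's number is
# driven entirely by its hardware base event.
p_software_fault = 1.0
p_lock_left_enabled = 1e-4      # invented hardware failure probability
p_path = p_and([p_software_fault, p_lock_left_enabled])
```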
Up until this point the focus has been on assuring that the design concepts to be employed are likely to perform as expected and not lead to the occurrence of high consequence events. It is this perspective of the requirements which sets these activities and reviews apart from those traditionally viewed as "quality" reviews. Returning again to our example, it may be determined that the best option for the system is to include a mechanical "lock" on the valve, and require the opening task to have both a software "authorization" and a hardware "enable." It should be clear that without dialog between the hardware and software development teams, the needs which may emerge from either point of view may not be apparent to the other side. For instance, the hardware team could notice that the software would have to obtain data from a permanent memory chip, and merely assume that the software development team will build this into their design. Conversely, the software team could identify the need for a mechanical device to couple with the software, but have no natural mechanism to discuss this possibility. Consequently, it is essential to have a joint hardware-software requirements review with respect to higher level fault tree events. Both the hardware and software teams have been working on their respective fault paths, identifying their own mitigation options, and will have considerable information to share. This joint review, however, must not be reduced to merely summary information, for detailed discussion is necessary to be sure significant faults are not overlooked. The review of higher level events for hardware and software will determine whether the proper top level event is being analyzed and whether all credible events have been identified.
Each development team now proceeds with developing further detail on their respective components. The results of the work thus far in this process will be the identification of those portions of the software (or hardware) which have a perceived "risk" or likelihood of failing and causing a high consequence event. Also identified are those portions of the software which are of less concern from a failure perspective -- possibly a prioritization of concerns will be obvious.
For our example, if it were possible to assure that, no matter how the component which captures input data were to fail, the component which compares and returns the authorization message would always return the correct answer, then the capture input component would be "less critical" than the perform comparison and return message components.
The software design begins to take on the character where the high consequence events are localized to specific software components. It will now be possible to concentrate efforts to mitigate the consequences only where the major concerns really are. We would thus expect to find a majority of our efforts regarding verification, review, validation, and detailed documentation to be applied to those modules which are identified in the high consequence failure paths. Clearly, though, if resources and schedule permit, the development team may want to apply a similar level of effort to lesser consequence components to assure the absence of errors which could adversely impact customer satisfaction.
The development team participants now face analyzing the software design in detail, before the code has been written, to identify potential failure paths and to assure that proper decisions are made regarding these paths. The analysis done to **identify software subsystem failure mode scenarios** proceeds iteratively during this detailed phase in the development. At this time, the attention turns to the specific algorithms, execution sequences, memory utilization, and error handling. To illustrate the mindset essential to this portion of the analysis, we will examine several situations using our example in Figure 4.
It is likely that software providing user authorization will employ some form of data encryption algorithm. Before the software code is actually written, the analysis team would closely critique the selected approach, looking for all the possible ways the software could fail and lead to the undesired event -- say, permitting someone who is not an authorized individual to open the valve. A close examination might reveal several scenarios under which this might happen. First, the software might capture the user information incorrectly such that on comparison with the "real" stored information, it would be a match. Second, the software might capture the user information correctly, correctly compare it with the stored valid information, find the mismatch, but return authorization anyway. Third, the software might capture the user information correctly, but compare it with the wrong data from memory, such that there is a match and authorization is verified. The development team generates as many of these types of scenarios as it can. This information alerts the developers to areas where special efforts to prevent these scenarios from occurring will be necessary.
Another example can be seen in the data encryption algorithm and its processing sequence. Two approaches come to mind. First, the software could capture the user information, encrypt it, retrieve the encrypted data from memory, compare, and do a checksum calculation. Alternatively, the software could capture the user information, retrieve an encrypted value from a memory location, decrypt it, compare it with the user input, and compute a checksum. In this latter approach, a possible high consequence security fault could occur, since the stored secret must appear in memory in decrypted form. In yet another scenario, the software could implement the selected encrypt/decrypt algorithm incorrectly, yielding incorrect checksums and also leading to a high consequence fault.
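A minimal sketch of the first (safer) ordering follows, with a hash used purely as a toy stand-in for the encryption step (the function names are invented, and a real system would use a vetted cryptographic scheme rather than a bare hash): the user's entry is transformed and compared against the stored ciphertext, so the stored secret is never decrypted.

```python
import hashlib

def encrypt(plaintext: str) -> str:
    # Toy stand-in for the encryption step; a real system would use a
    # vetted cryptographic scheme rather than a bare hash.
    return hashlib.sha256(plaintext.encode()).hexdigest()

def authorize(user_input: str, stored_ciphertext: str) -> bool:
    """First processing sequence: encrypt the user's entry and compare
    ciphertexts, so the stored secret never exists in memory as plaintext."""
    return encrypt(user_input) == stored_ciphertext
```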
As such scenarios are being identified, the fault tree begins to emerge. The next level of detail is determined as the team **identifies software failure enabling design details**. The development team, with the assistance of system designers, hardware designers, and production personnel as required, look for all possible ways for a failure to occur within the critical software component or module. Again returning to our example, we explore some ways in which the software could compare the information as entered by the operator with the wrong stored information and return authorization inappropriately.
Consider that the system has been used once and correctly gave access to an authorized individual to open the valve. With the task successfully completed, the computer system is left on. The next user (unauthorized) now interacts with the system to open the valve. The individual correctly enters his unauthorized information. Now, since the software design did not include "reset," the contents of the temporary storage location from the previous user access are used, and the comparison is made with the contents of memory. Since the first user's information was valid, the software will once again authorize access. In this way, no matter what subsequent users enter, access could always be granted until system restart. In this case, the enabling design feature would be the storage of the user information in the temporary location.
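The stale-storage scenario just described can be modeled in a few lines. The sketch below is hypothetical (class and method names are invented); it reproduces the fault -- temporary storage that survives across user sessions -- together with the mitigating reset.

```python
class ValveController:
    """Minimal model of the missing-reset fault: the entered credential is
    cached in temporary storage and never cleared between users."""

    def __init__(self, stored_secret: str):
        self.stored_secret = stored_secret
        self.temp = {}            # temporary storage, persists across users

    def request_open(self, entered: str) -> bool:
        # BUG: a stale entry from the previous user is reused instead of
        # the value the current user actually entered.
        if "entered" not in self.temp:
            self.temp["entered"] = entered
        return self.temp["entered"] == self.stored_secret

    def reset(self):
        """The mitigation: clear temporary storage between sessions."""
        self.temp.clear()
```

With this model, the first (authorized) user opens the valve, and every subsequent user is authorized as well until `reset` clears the temporary storage.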
A second possibility exists for this fault to occur if the software were to employ a lookup table to identify the location of the correct information for the comparison. A single point error in the table entries could cause the software to look in the wrong spot for the information, and subsequently compare to find a match. The enabling design detail in this case would be the lookup table. Yet a third mechanism for failure might be that the software uses multiple memory locations for the storage of the correct information. The software might incorrectly retrieve only a subset of the necessary locations, such that upon comparison with the correctly entered, encrypted information from the second user, a match might result. The enabling design detail here would be the retrieval sequence in the software.
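The lookup-table enabling detail can be shown the same way. In the hypothetical sketch below (the memory image, the table, and the "enc(...)" placeholder strings are all invented), a single wrong table entry sends the comparison to another user's slot, so the wrong credentials can match.

```python
# Hypothetical memory image holding the stored (already encrypted) secrets.
memory = ["enc(alice-secret)", "enc(bob-secret)"]

# Lookup table mapping each user to a memory location. The single wrong
# entry for "bob" (it should be 1) is the enabling design detail.
lookup = {"alice": 0, "bob": 0}

def authorize(user: str, entered_ciphertext: str) -> bool:
    """Compare the (already encrypted) user entry against the stored
    value found via the lookup table."""
    return memory[lookup[user]] == entered_ciphertext
```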
Typically in concurrent discussions, the identified enabling design details are reviewed for their credibility based on the design and use scenarios as presently understood. For example, if the software were to employ an automatic restart, including memory initialization for every user request to the system, the scenario wherein the information from the previous operator is used would be impossible. Thus, methodically, one by one, each scenario is examined and the team will revise the software events based on credibility. Events not needed in the fault tree analysis are eliminated.
It is important to note that for the activities just discussed the team participants first assume that the undesirable event will occur and then begin to look for the possible, credible ways by which it could occur. As in hardware fault tree analysis, these events are compiled together in a tree, using logical AND and OR gates. In this manner, the preliminary software fault tree (and similarly the preliminary hardware fault tree) are generated. Figure 5 shows how this approach could be applied to the example just presented.
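Figure 5 is not reproduced here, but the way the identified scenarios combine through AND and OR gates can be sketched as follows. The tree shape and the meanings attached to M1 through M4 are assumptions made for illustration, not the actual content of Figure 5.

```python
def evaluate(gate: str, inputs):
    """Evaluate one fault tree gate given boolean occurrence of its inputs."""
    return all(inputs) if gate == "AND" else any(inputs)

# Hypothetical base events (occurrence flags for one analyzed situation):
M1 = True    # stale data left in temporary storage
M2 = True    # no reset performed between users
M3 = False   # lookup table entry corrupted
M4 = False   # retrieval sequence skips memory locations

# Each scenario needs all of its base events; the top event occurs if any
# scenario occurs.
scenario_stale  = evaluate("AND", [M1, M2])
scenario_lookup = evaluate("AND", [M3])
scenario_fetch  = evaluate("AND", [M4])
top_event = evaluate("OR", [scenario_stale, scenario_lookup, scenario_fetch])
```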
By completing this process in detail for those portions of the software which are identified as critical in their ability to cause an undesirable high consequence event, it is possible to identify in advance of software implementation those approaches which will lead to the occurrence of the fault. The software team will want to review lower level events against design options to identify what has to be done to assure that the incorrect implementation is not coded into the software and what tests will be needed to demonstrate the acceptability of the implementation.
After the initial independent development of the hardware and software fault trees and their respective reviews, a joint hardware/software component-level review of lower level events is done to determine if enough detail has been given to the fault tree analysis, and to assure similar interpretation of the events and joint consensus regarding their importance. In addition, this interaction provides a further opportunity to be sure software features and requirements which impact hardware are properly communicated to the hardware team, and vice versa. For example, consider the case where the software analysis has determined that a single software failure could result in inadvertent opening of a valve. The potential consequences of this failure may be significant enough that the software team recommends inclusion of a hardware backup enabling device. This recommendation must be communicated early in the project, and this joint review provides a forum for the recommendation to be explored in detail to the consensus of the hardware and software design teams.
The outcome of this joint review process may necessitate changes in either fault tree, so that both design teams may need to independently refine the hardware and software fault trees as the designs are being implemented. This may simply require additional details or may require fault path modifications and event changes. As part of this process, the software development team continues its analysis in increasingly greater detail until the events are determined to require no further decomposition. This implies that a decision needs to be made as to what level of detail the fault tree will be developed to. Once the detail level is established, this signifies that the appropriate limit of resolution has been reached for the fault tree. The lowest-level faults indicated on the resulting tree are considered base events, and the developers identify them on the fault tree, as indicated by M1, M2, M3, and M4 in Figure 5.
Once the changes are implemented and the fault tree refined, the software team conducts a peer review to assure complete coverage of the concerns and consensus regarding the results of the analysis. The review would typically involve the software development and also maintenance team(s) and associated surety experts, and may involve the next assembly and component designers, and possibly Production Agency staff. If consensus is not reached at this time, the software development team would then go back and review subsystem failure mode scenarios, design details, software fault tree events, base events, and the software level fault tree, revising as needed and iterating with peer review until consensus is reached.
Given that the fault tree is now complete, and that software development has almost certainly progressed to some degree in parallel with the analyses, software code whose functions or algorithms were identified by the analysis as critical must now be analyzed and reviewed to assure adequacy, and these features must be identified on preliminary documents. Many options could be used for identifying these critical software design features on associated software documentation. The option chosen for this work is to use /c/ (pentagon c, for "critical"). The /c/ designation parallels the /s/ used in safety critical hardware. The /c/ designation would be used in each type of documentation of the particular feature, along with the critical event number. Types of documentation to receive the annotation include scripts, flow charts, information models, hierarchy charts, data flow diagrams, code listings, algorithm definitions and mathematical proofs, test plans, test results, user screens, relational tables, variable definitions, memory allocation schemes, or any other form of documentation used to define the software design for designers, evaluators, users, or maintainers. The /c/ designator indicates that the design feature or approach was judged to be critical and that failure would contribute to the occurrence of a high consequence event. This identification alerts the necessary individuals that changes could have adverse effects, and that additional evaluations may be needed to validate and justify changes.
Returning to the example of Figure 5 and considering events M3 and M4, we notice that it is crucial to be certain that the correct memory locations are used for the comparison. Thus, any portion of the software which implements a location counting algorithm (M3) or which simply designates a specific memory location (M4) is associated with the undesired failure. In the software design (prior to coding) developers may create a mathematical counting algorithm for subsequent implementation during repetitive functions. Alternatively, the design may also employ specific address identification in the retrieve and compare modules. Both of these approaches would have to be identified as critical features and marked with a /c/ on associated documentation. Thus, any text-based algorithm development documents and resulting specifications would be marked with a /c/ and a note that the algorithm is associated with the failure event. Subsequent code implementation would then place header information indicating that the module contains critical implementations associated with the critical failure, as well as comment information at the actual source code implementation lines. Figure 6 indicates how this might be applied in the design and implementation.
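The /c/ convention described above might look as follows in source code. Everything in this sketch is hypothetical: the event number CE-12, the module contents, and the comment wording are invented for illustration and are not taken from Figure 6.

```python
# Hypothetical illustration of the /c/ documentation convention.  The
# event number CE-12 and the module below are invented, not from the
# report's Figure 6.
#
# /c/ CRITICAL: this module implements the memory-content comparison
# /c/ associated with failure event CE-12 (base events M3/M4: wrong
# /c/ location used for the comparison).  Changes require re-validation.

def contents_match(memory, expected, start, count):
    """Compare `count` stored values beginning at index `start`."""
    # /c/ CE-12: the index arithmetic below is a critical feature; an
    # /c/ off-by-one error here corresponds to base event M3 (location
    # /c/ counting algorithm error).
    return all(memory[start + i] == expected[i] for i in range(count))

print(contents_match([7, 1, 2, 3, 9], [1, 2, 3], 1, 3))   # True
```

The point of the header and inline annotations is that a maintainer grepping for `/c/` finds every critical implementation line together with the failure event it guards against.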
In conducting this sort of in-depth code analysis, the code can be broken into templates according to the semantics of the programming language. Analyzers and team members can then review the logic structure of the software to detect software logic errors, even before the onset of formal testing. Since the entire software package is not generally deemed critical, the in-depth analysis can focus on those portions which can have adverse impact. As time and resources permit, other less critical sections can be subjected to analysis as well. This natural allocation of project efforts to the critical areas can be coupled with other analysis methods, such as Pareto analysis, to identify areas of principal concern, and to assure that resources and efforts are focused on items of high potential payback.
The next portion of the process focuses on the assignment of probabilities or desired confidence levels in the analysis breakdown. As mentioned earlier, software is typically considered today to fail with Probability 1, since complexities generally have prohibited more detailed assessments. Consider again for a moment the example illustrated in Figure 5. Figure 5 identifies four possible failure mechanisms which could
cause the undesired failure. Since initial data may not exist to set limits otherwise, let us assume that each possible mechanism could occur with equal likelihood. Then each of the four events would have a 25% probability of occurring, if the undesired failure is considered to occur with probability 1. As data are collected regarding occurrences of particular failure types, these probability estimates will become more realistic, and this is an area ripe for future research and study. For this simplistic example, the design team may decide they will not implement any specific location calls for the memory content comparison. The inherent “reliability” of the software against this particular failure type would thus increase, since eliminating one of the four equally likely mechanisms reduces the overall likelihood of failure to 75%. Further, as mitigating actions are taken in the software design and implementation, it may be possible to demonstrably reduce the likelihood of the occurrence of base events M1, M2, and M3. By demonstrably, we mean through either testing, determining that the path is physically impossible, or other means. If this concept is extended to a failure type that has perhaps 100 ways of being implemented (all with equal likelihood), then a design which employed only one of them would have at most a 1% chance of failure and a 99% chance of success. Thus, as this kind of approach is adopted and refined, assignment of more understandable and meaningful approaches to software reliability and associated confidence may be possible.
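The equal-likelihood allocation just described can be written down directly. This is only a sketch of the report's illustrative arithmetic, not an established reliability model: each of the n credible mechanisms is allocated probability 1/n, and designing out k of them leaves a residual failure probability of (n − k)/n.

```python
# Equal-likelihood allocation sketch: if an undesired failure has n
# credible mechanisms, each is allocated probability 1/n, so mitigating
# k of them leaves a residual failure probability of (n - k) / n.

def residual_failure_probability(n_mechanisms, n_mitigated):
    if not 0 <= n_mitigated <= n_mechanisms:
        raise ValueError("cannot mitigate more mechanisms than exist")
    return (n_mechanisms - n_mitigated) / n_mechanisms

# Four mechanisms, one designed out (no specific memory-location calls):
print(residual_failure_probability(4, 1))     # 0.75

# A failure type with 100 equally likely mechanisms, of which a design
# employs only one, leaves at most a 1-in-100 chance of failure:
print(residual_failure_probability(100, 99))  # 0.01
```

As the text notes, these uniform allocations are only a starting point; collected failure data would replace the 1/n assumption with empirical estimates.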
At this point in the development of the system, the results of the analysis for both the hardware and software portions are merged and combined into the surety document for the overall component, subsystem, or system. The surety document describes the safety requirements and the environments in which those requirements must be met. The document also lists the event identifiers, the title of the event as listed on the fault tree, the parent event, and failure order number. The failure order number indicates whether events must occur singly or in combination (AND gates) with other events in order for the undesired failure to occur. In addition, this document is revised and expanded until it ultimately contains information regarding the necessary control requirements, rationale and background behind the selected design implementation, reasons for its being identified as a failure enabling design feature, actions taken to mitigate adverse consequences of failure, analysis and test reports generated during the validation of the design (next section), references to product (hardware and software) drawings, and any ongoing acceptance criteria. The design itself may be verified by doing any number of boundary and tolerance studies, material analyses, test plans and results, or other studies or simulations. This
document is iterated as needed to ultimately provide complete documentation regarding the analyses and verifications conducted.
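The surety-document fields named above (event identifier, event title as listed on the fault tree, parent event, failure order number) could be captured in a simple record type. The type and the example values below are our own illustration, not a format prescribed by the report.

```python
# A minimal record type for the surety-document fields described in the
# text.  Field names and the example values are illustrative only.
from dataclasses import dataclass

@dataclass
class SuretyEntry:
    event_id: str        # identifier as used on the fault tree
    title: str           # event title as listed on the tree
    parent_event: str    # identifier of the parent gate/event
    failure_order: int   # 1 = a single failure suffices; >1 = must
                         # combine with others under an AND gate

entry = SuretyEntry("M3", "Location counting algorithm error", "G2", 1)
print(entry.event_id, entry.failure_order)   # M3 1
```

Keeping these entries machine-readable makes it easy to cross-check that every base event on the tree has a corresponding mitigation, test reference, and /c/-annotated design feature.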
As the analyses conducted indicate those areas of principal concern in assuring that a particular failure mode does not occur, the team’s attention turns to software level validation development testing in order to assure that all identified critical features have been appropriately validated. This can be done through formal inspections and walkthroughs (these may be conducted and phased in along with any of the analyses). A detailed test plan is then developed, focusing on both verification and validation of the software. As discussed in the Sandia Software Guidelines, validation activities are geared towards assuring the requirements are met, while verification activities are aimed at showing that the design logic and structure are adequate and cohesive and that the design has been properly and correctly implemented. Team members focus on how much testing for completeness should be done based on the potential consequences of the undesired events and those techniques employed to mitigate the identified failure methods. The testing plans are implemented and the results documented; both are generally included in the surety document.
Wherever possible and sensible, tests are conducted at the lowest level so that software gets examined for errors independently of the hardware. Many tests, however, will need to be combined hardware/software tests. These joint hardware and software tests are developed and documented in joint hardware/software (HW/SW) validation plans and testing. Once again, test plans are refined as necessary and the results documented and generally included in the surety document. As documentation grows, the team may also decide to compile test results (or other documentation) into a separate document. As with the documentation of the software design, each test of the critical software elements is identified with its event number and /c/ annotation.
Although software may be thought of as finished after the completion of testing, many software products may require mass production, such as is the case when the product is supplied in diskette form, or when it is placed in Read-Only Memory (ROM) devices. The team will need to explore what acceptance criteria will be applied to these devices to assure that the correct code has been provided. They may even wish to consider periodic refresher loads of the code into the manufacturing equipment, or periodic inspections of integrated circuit masks. Such maintenance acceptability requirements for mass production are compiled and documented. Once again, /c/ notations are used to identify activities associated with the event numbers they are intended to address.
As these previous tasks are undertaken and tests conducted, it may become necessary to revise either the fault tree or the software design and implementation. In addition, it could be determined that costs for conducting particular tests are too high and that the tradeoff in reliability and surety improvement does not justify the costs, so that a different design approach would be needed. In such cases, changes are incorporated into the appropriate form of documentation. Critical software features are reviewed throughout the documentation to assure the inclusion of the /c/ notation. In this way, the /c/ documentation is finalized and the comprehensive documentation package compiled.
As identified earlier in the determination of the verification and acceptability requirements, ongoing monitoring of acceptability of the integrated hardware/software component or system may be needed. Tests could be conducted which contribute new information from new test paths to the statistical basis for any allocated software failure probabilities. Such evaluations are conducted and documented as needed throughout the life of the product.
As the details regarding the fault analysis, the verification activities undertaken and implemented, the options pursued during development, the possible concerns if critical software is changed, and testing results unfold during the development phase, these results would be incorporated into the surety document for the particular system or subsystem. The surety document will then provide both evaluators and system maintenance staff with the necessary information to assure ongoing confidence in the product. To be of maximum value, this document, along with the other forms of software and system documentation, will require updating as changes warrant.
**Future Steps**
As previously noted, the original project intent was to apply this methodology to the Weight and Leak Check System (WALS), an automated nuclear weapon component handling system, and look for refinements to the process as needed. Redirection of the High Consequence System Surety project precluded this from happening at this time. Such an application will need to be identified and explored to determine what full potential benefits could be derived from the use of this process. The application selected should be one in which the process can be applied from the conceptual stages on, not one in which the software development is already in progress. One of the distinguishing aspects of this methodology is that it is applied before software code has been developed so that potential pitfalls in development may be avoided. The trial project will need to commit to resisting natural tendencies and delaying actual code implementation so that this methodology and any benefits can be fully realized.
It seems apparent from this work and from our engineering discussions that this process can be readily applied to a simple software example. However, its utility and manageability in a more complex application needs to be investigated. It may be that the most significant benefit will be derived from the early application of the process in the conceptual stages, and that detailed application to the software design and code implementation slows project pace and counteracts additional benefits. Only a study of its trial application will reveal lessons such as these for further use of the process.
Complete development of the associated /c/ documentation scheme will also need attention as the approach is applied to a trial project. Details such as standard notes and comments will need to be worked out, as will examples of the various necessary forms for software documentation. A special focus is needed to be sure that all forms of software documentation are intuitively linked to each other through the /c/ approach.
As also mentioned, the potential for failure/reliability allocations to software other than the traditionally used "1" was revealed. This potential needs considerable study to determine its long term possibilities and impacts. This, and other as yet indeterminate approaches, could provide new opportunities in considering whether software is inherently characterizable. Once again, however, the utility of the approach will need to be balanced against the potential benefits. With the advent of today's high-power information systems, it should be possible to begin collecting data for typical fault mechanisms so that characterization could become more determinate.
**Summary and Conclusions**
The work undertaken in this effort explored extending the existing safety analysis and development methodology used for the design and development of stronglink safing devices into the software world. Initial discussions and the resulting approach clearly demonstrate that the principles involved in the hardware process are directly
applicable to a software project. A simplistic software example was explored showing those thoughts and concepts necessary for successful utilization of the new approach.
As the methodology unfolded it also revealed the need for coordinating joint hardware/software discussions to ensure that "unwritten assumptions" about the functionality and design implementation approaches are not carried by either the hardware or software teams. Natural points in the process were identified for this coordination of communication, although more interactive teaming should definitely be encouraged throughout the development process.
Further application of the analysis approach points to new possibilities regarding the quantification of probabilistic determinations for software. Routine application of such analyses to identified critical portions of the software is more plausible than exhaustive application of rigorous analysis and assessments throughout the software. Further exploration can reveal the long range utility and potential for characterization.
The approach explored in this paper illustrates how a software product could be reviewed and analyzed before the creation of any software code to identify those portions which are critical in assuring the absence of an undesired high consequence event. In fact, it is this a priori approach to analysis that would facilitate error prevention rather than detection by methods used after code implementation. This method for analysis can couple easily with other structured approaches to yield software products whose critical functions are isolated in smaller, more manageable software entities or modules. This compartmentalization can result in software modules of sufficiently small complexity that complete, exhaustive testing is possible when necessary. For example, if we were to isolate the portion of a software product which actually sends the radiation dosage to the patient into a small module with only four executable paths and two boundary conditions, then we might expect to be able to test it exhaustively for correctness. Thus, the method provides a basis for making decisions to allocate resources and focus testing and analysis efforts first onto the portions of the software which have the highest judged potential for creating the most adverse conditions. As project schedule and budget permit, more in-depth testing could be applied to the remaining, non-high consequence modules.
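The dosage example can be made concrete. The function below is invented for illustration (it is not the report's module): it has exactly four executable paths and two boundary conditions, so a small test set covers it completely.

```python
# A toy version of the "small, exhaustively testable module" idea: an
# invented dosage-clamping function with exactly four executable paths.

def clamp_dosage(requested, minimum, maximum):
    """Return a dosage limited to [minimum, maximum]."""
    if minimum > maximum:          # path 1: invalid configuration
        raise ValueError("bad limits")
    if requested < minimum:        # path 2: below range
        return minimum
    if requested > maximum:        # path 3: above range
        return maximum
    return requested               # path 4: in range

# Exhaustive path coverage plus the two boundary conditions:
assert clamp_dosage(5, 1, 10) == 5        # in range
assert clamp_dosage(0, 1, 10) == 1        # below range
assert clamp_dosage(11, 1, 10) == 10      # above range
assert clamp_dosage(1, 1, 10) == 1        # boundary: minimum
assert clamp_dosage(10, 1, 10) == 10      # boundary: maximum
try:
    clamp_dosage(5, 10, 1)                # invalid limits
except ValueError:
    pass
print("all paths covered")
```

Because the path count is tiny and enumerable, the test set above is provably complete for path coverage, which is exactly the payoff of compartmentalizing critical functions.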
The documentation scheme suggested is intended to provide a comprehensive snapshot of the software product, the design approach implemented, and the verification and validation activities undertaken to demonstrate that the implementation is indeed error-free. This documentation provides future project maintenance staff (after the original team members have left) with the basis for the implementation decisions made, a definition of the potential implications of changing the approach, and a clearer understanding of what testing may be required to demonstrate the acceptability of a contemplated change. Instead of a new team member wondering why such a "strange coding approach" had been chosen and "improving" it with a "correction" which results in severe consequences, the new individual can review the rationale and implications to avoid making a change without full knowledge and verification that surety requirements are still being met.
While preliminary indications are that this approach can provide significant gains in mitigating the occurrence of high consequence events in software, they are only preliminary. Further investigation is needed to determine both the long range utility and potential benefits of the approach. It should be further noted that even with this approach, software suitability for any high-consequence purpose is not guaranteed. However, by having more detailed, up-front information, engineers and designers will have better supporting documentation for the approach selected.
Acknowledgments
The authors wish to acknowledge the special contributions to this work from Scott Nicolaysen, Bill Greenwood, Carl Vanecek, and Doug Gehmlich. These individuals provided valuable feedback regarding the mechanical process applied at the component level and the systems level approach to the safety theme. Additional contributions were received from Ed Fronczak regarding his work in conducting fault tree analysis of software code in security-critical applications. The efforts of Howard Kimberly, Mike Eckley, and Larry Dalton were also invaluable in reviewing and refining the software process as it evolved. Special thanks to Louis Hernandez for his help in creating the process flow diagrams and to Gary Randall in editing this report.
Bibliography
Distribution
1 MS0319 Ray Leuenberger, 2645
1 MS0319 Scott Nicolaysen, 2645
1 MS0319 Bill Greenwood, 2645
1 MS0319 Carl Vanecek, 2645
5 MS0319 Gary Randall, 2645
1 MS0329 Ruben Urenda, 2643
1 MS0329 Ken Varga, 2643
1 MS0405 Todd Jones, 12333
1 MS0431 Sam Varnado, 9400
1 MS0451 Sharon Fletcher, 9411
1 MS0458 Bill McCulloch, 12333
1 MS0458 Laura Gilliom, 5133
1 MS0484 Roxie Jansma, 9415
1 MS0484 Judy Moore, 9415
1 MS0486 Stan Kawka, 2122
1 MS0487 John Franklin, 2122
1 MS0492 Mark Ekman, 12324
1 MS0507 Kathleen McCaughey, 9700
1 MS0535 Larry Dalton, 2615
1 MS0503 Steve Giles, 2335
1 MS0535 Mike Eckley, 2615
1 MS0535 Laney Kidd, 2615
1 MS0560 Paul Longmire, 2106
1 MS0627 Ed Fronczak, 12334
1 MS0637 Joe Chiu, 12336
1 MS0856 Stu Rogers, 14308
1 MS0638 Mike Blackledge, 12326
1 MS0660 Margaret Olson, 9622
5 MS0535 Pat Tempel, 2615
1 MS0661 Sue Bodily, 4816
1 MS0661 Louis Hernandez, 4816
1 MS0746 Jim Campbell, 6613
1 MS0746 Maria Armendariz, 6613
1 MS0747 Heather Schriner, 6412
1 MS0759 Bill Paulus, 5845
1 MS0759 Mark Snell, 5845
1 MS0762 Sabina Jordan, 5861
1 MS0769 Dennis Miyoshi, 5800
1 MS0801 Melissa Murphy, 4900
5 MS0812 Sharon Trauth, 4923
1 MS0830 Elmer Collins, 12335
1 MS0830 Tom Kerschen, 12335
1 MS0833 Johnny Biffle, 9103
1 MS0977 Dave Darsey, 9416
1 MS1006 Bill Drotning, 96711
1 MS1007 Howard Kimberly, 9672
1 MS9036 Doug Gehmlieh, 2254
1 MS9214 Len Napolitano, 8117
2 MS0100 Document Processing, 7613-2
For DOE/OSTI
1 MS0619 Print Media, 12615
5 MS0899 Technical Library, 4414
1 MS9018 Central Technical Files, 8523-2
Maik Thiele, Jens Albrecht, Wolfgang Lehner
**Optimistic Coarse-Grained Cache Semantics for Data Marts**
Erstveröffentlichung in / First published in:
*18th International Conference on Scientific and Statistical Database Management (SSDBM’06)*. Vienna, 03.-05.07.2006. IEEE, S. 311-321. ISBN 0-7695-2590-3
DOI: [https://doi.org/10.1109/SSDBM.2006.38](https://doi.org/10.1109/SSDBM.2006.38)
Diese Version ist verfügbar / This version is available on:
[https://nbn-resolving.org/urn:nbn:de:bsz:14-qucosa2-788403](https://nbn-resolving.org/urn:nbn:de:bsz:14-qucosa2-788403)
Optimistic Coarse-Grained Cache Semantics for Data Marts
Maik Thiele
Dresden University of Technology
01069, Dresden
maik.thiele@inf.tu-dresden.de
Jens Albrecht
GfK Marketing Services
90319, Nuremberg
jens.albrecht@gfk.com
Wolfgang Lehner
Dresden University of Technology
01069, Dresden
lehner@inf.tu-dresden.de
Abstract
Data marts and caching are two closely related concepts in the domain of multi-dimensional data. Both store pre-computed data to provide fast response times for complex OLAP queries, and for both it must be guaranteed that every query can be completely processed. However, they differ greatly in their update behaviour, which we utilise to build a specific data mart extended by cache semantics. In this paper, we introduce a novel cache exploitation concept for data marts – coarse-grained caching – in which the containedness check for a multi-dimensional query is done through the comparison of the expected and the actual cardinalities. Therefore, we subdivide the multi-dimensional data into coarse partitions, the so-called cubelets, which allow us to specify completeness criteria for incoming queries. We show that during query processing, the completeness check is done with no additional costs.
1. Introduction
Data marts storing pre-aggregated data, prepared for further roll-ups, play an essential role in data warehouse environments and lead to significant performance gains in query evaluation. Since the number of all possible group-bys increases exponentially with the number of dimensions, it is usually impossible to precompute the whole cube. Furthermore, not every group-by is semantically reasonable and therefore need not be computed. For this reason, data marts contain only a subset of all possible group-bys, derived from knowledge about the well-known workload. In any case, it must be verifiable for every query on the data mart whether it can be answered completely or not, which leads to an analogy between data marts and caches.
Data Mart vs. Cache The notion of data marts as precomputed summarized tables which provide fast access to pre-aggregated data is very similar to the cache idea, which says that frequently used tuples are held under the assumption of locality. The most important issue in such a system is: can the query be computed from the cache/data mart effectively, and is this lookup fast enough that the lookup costs do not exceed the time required to bypass the cache? Other aspects of cache systems such as invalidation or replacement lose importance in the domain of data marts, where the time period between updates can be as long as a few days or even weeks (Figure 1). Furthermore, data mart updates are restricted to append operations, i.e. data is never replaced and remains valid for the whole data mart lifetime. This is different from cache systems, where the initial load is relatively small and is replaced over time. Nevertheless it
must be guaranteed that every query on the data mart can be processed completely.
**Contribution of This Paper** In this paper we introduce a novel concept for aggregate data processing called coarse-grained caching. The idea is to decompose all aggregates into coarse partitions of the multi-dimensional space, the cubelets. A cubelet is a container for all aggregates at any aggregation level for a certain partition of the data. In detail, we make the following contributions:
1. Based on the distinction between classification and feature attributes we propose a novel partitioning scheme for multi-dimensional data sets, dividing the data in so called cubelets.
2. The partitioning into cubelets enables us to define a simple completeness criterion. We provide an optimistic algorithm, which computes the query from the cache and does the completeness check in a single step with no additional cost. Only if the completeness check fails are the missing parts of the data identified, computed, and stored in the respective cubelets. A second computation from the cache will then definitely give the correct result. The optimistic approach is extremely efficient if the cache can be filled in advance for a specific workload. As a prerequisite, we assume that null aggregates are also materialized in the data mart.
3. We evaluate the scalability of our cache enabled data mart using real and synthetic datasets.
**Organization of This Paper** The paper is organized as follows: Section 2 introduces the notion of cubelets and formally defines completeness in the domain of multi-dimensional data. Section 3 illustrates the query processing infrastructure. Section 4 reports our experimental study. Finally, we discuss related work in Section 5 and conclude in Section 6.
## 2 Partitioning the Multi-Dimensional Data Space
The coarse-grained cache concept relies on special characteristics of the multi-dimensional data model, which are described in the following section.
### 2.1 The Cubelet Partitioning Schema
The generally known multi-dimensional data cube is spanned by orthogonal dimensions, which can be further divided into classification and feature attributes. Classification attributes $CA_i$ ($i = 0, ..., n$) define a hierarchy of dimensional elements and are ordered according to their functional dependencies. Instances of classification attributes are denoted as classification nodes $N$, e.g. ‘DVD’ as an instance of the classification attribute ‘product family’ (see Figure 2). The highest classification attribute $CA_0 = TOP$ with the single instance ‘ALL’ is a member of each dimensional structure. Besides the product dimension, we extend our ongoing example with a second shop dimension with the classification hierarchy (Region, Country, State, City, ShopId). Feature attributes $FA_i$ ($i = 0, ..., n$) of a dimension represent all descriptive features or properties of the dimensional elements (Figure 2). In contrast to classification attributes, a feature attribute does not functionally determine any other feature attribute. Some feature attributes may be valid for all dimensional elements, whereas other feature attributes exist only on certain nodes of the classification hierarchy [8]. For example, properties like ‘color’ or ‘brand’ are valid for all dimensional elements in the product dimension, whereas properties like ‘screen size’ or ‘resolution’ may only be valid for the classification node ‘TV’. To simplify our descriptions, we restrict our analysis to feature attributes which occur in each dimensional element within a dimension, e.g. ‘color’ or ‘brand’.
The granularity and range of data within the multi-dimensional structure is defined by the partitioning scheme $P$ denoting the granularity, and a partitioning descriptor $PD$ specifying the range, i.e. the selection criteria:
**Partitioning Schema:** A partitioning schema $P$ is an $n$-tuple $(CA_1, ..., CA_n)$, where each element is a classification attribute, prefixed with the dimension identifier, e.g. “Product.Family” or “Shops.Country”.
**Partitioning Descriptor:** A partitioning descriptor $PD$ is an $n$-tuple $(N_1, ..., N_n)$, where each $N_i$ is a node of the classification level described by the elements of $P$.

If the partitioning schema $P$ solely consists of classification attributes at the lowest granularity, e.g. $P = (P.ProductId, S.ShopId)$, it is referred to as the raw partitioning schema $P_{R}$. In the two-dimensional context spanned by the product and shops dimensions, ("DVD", "Germany") is a valid partitioning descriptor for the partitioning schema (Product.Group, Shops.Country).
### 2.2 Cubelets
With regard to our cache concept, the multi-dimensional data as well as the queries are represented as cubelets. A cubelet $C$ is a triple $(P, PD, FA)$, where $P$ describes the cubelet granularity, and where $PD$ denotes the cubelet context, which can be further segmented by a set of feature attributes $FA$. Thus, the cubelet, which can be seen as a data container, decomposes the multi-dimensional space into low-dimensional coarse partitions, reducing the complexity and making the multi-dimensional space easier to manage. Although a cubelet has a very simple structure, it can contain data at any dimensionality, specified by a nested set of feature attributes. Cubelets which share the same partitioning schema $P$ can be combined into a cubelet set $S$. The partitioning schema of a cubelet set $S$ is denoted as the set partitioning schema $P_S$.
$$S = \{C_1, ..., C_n \mid \forall i P_i = P_S\}$$
As the aggregates in the data mart all have the same defined granularity according to the classification attributes, we denote this partitioning schema as global partitioning schema $P_G$. Based on that, we distinguish between two types of cubelets: cubelets sharing the global partitioning schema $P_G$ and cubelets having an arbitrary partitioning schema $P$. For our ongoing example, we assume $P_G = (Product.Group, Shops.Country)$. The raw data from which the precomputed aggregates and thus the cubelets are derived is partitioned by the raw partitioning schema $P_R$. The appropriate cubelet sharing $P_R$ is defined as follows:
**Raw Cubelets:** A raw cubelet $C_R = (P_R, PD, FA)$ shares the raw partitioning schema and is part of the raw cubelet set $S_{Raw}$ (Figure 3a).
The data mart partitioned by partitioning schema $P_G$ consists of total cubelets and data cubelets. Total cubelets represent the general availability of the data at the global partitioning granularity, whereas data cubelets represent the data mart content itself, i.e. the precomputed aggregates.
**Total Cubelets:** A total cubelet $C_T = (P_G, PD, \emptyset)$ shares the global partitioning schema $P_G$. It consists of one cell $A = \{0|1\}$, which denotes the availability of the data cubelet with the same partitioning descriptor, i.e. $PD_{C_T} = PD_{C_D}$ (Figure 3b). Each total cubelet is part of the total cubelet set $S_{Total}$.
**Data Cubelets:** A data cubelet $C_D = (P_G, PD, FA)$ shares the global partitioning schema $P_G$. It spans a set of cells, each holding either a numerical value, a null value denoting that no data is available in the cell context, or the value n.a. (not available) indicating that the cell state is unknown, i.e. has not been computed. For each data cubelet $C_D$, there exists one total cubelet $C_T$ with $PD_{C_T} = PD_{C_D}$ (Figure 3c). Each data cubelet is part of the data cubelet set $S_{Data}$.
Queries on cubelets are represented as cubelets as well. Cardinality cubelets are derived from a set of total cubelets and define the expected cardinality, i.e. the completeness condition. Query cubelets holding the query result are checked against this condition to verify the completeness.
**Cardinality Cubelets:** A cardinality cubelet $C_C = (P_C, PD_C, \emptyset)$ consists of one cell which denotes the number of total cubelets addressed by partitioning schema $P_C$ and partitioning descriptor $PD_C$, where $P_C \geq P_G$ (Figure 3c).
**Query Cubelets:** A query cubelet $C_Q = (P_Q, PD_Q, FA_Q)$ is derived from a set of data cubelets $C_D$, holding the query result and the cardinality of each cell (Figure 3d), where $P_Q \geq P_G$. For each query cubelet $C_Q$, there exists one cardinality cubelet $C_C$ with $P_Q = P_C$ and $PD_Q = PD_C$.
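To make the cubelet taxonomy above concrete, the following Python sketch (our own illustrative encoding, not the paper's implementation; all class and field names are assumptions) models a cubelet with numeric, null, and n.a. cells:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# sentinel for "n.a.": a cell that has never been computed
NA = object()

@dataclass
class Cubelet:
    p: Tuple[str, ...]                       # partitioning schema P
    pd: Tuple[str, ...]                      # partitioning descriptor PD
    cells: Dict[Tuple[str, ...], object] = field(default_factory=dict)

    def cell(self, fa: Tuple[str, ...]):
        """Return the cell value for a feature-value combination:
        a number, None (computed but null), or NA (never computed)."""
        return self.cells.get(fa, NA)

# data cubelet (GER, TV) from the running example: one numeric cell and one
# explicitly stored null cell; every other combination is n.a.
c_ger_tv = Cubelet(("Product.Group", "Shops.Country"), ("TV", "GER"),
                   {("black", "Sony"): 5, ("black", "Aiwa"): None})
```

The three-valued cell semantics (numeric, null, n.a.) is exactly what the cardinality-based completeness check of Section 2.4 relies on.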
The granularity of the data mart content specified by the global partitioning schema $P_G$ defines the lower limit for all queries on the data mart. This means that no query cubelet can be answered if at least one classification attribute of $P_Q$ has a lower granularity than the corresponding classification attribute in the global partitioning schema. After having provided an idea of how to partition the multi-dimensional data mart and queries into different types of cubelets, the next section introduces basic cubelet operations that bear special significance for the completeness requirements.
### 2.3 Operators on Cubelets
In this section, basic cubelet operations are defined to sketch the use of cubelets in broad terms and to facilitate a subsequent description of the query processing.
- The **roll-up operator** corresponds to an aggregation process where at least one attribute of the new partitioning schema is "coarser" than the attributes of the partitioning schema of the source cubelet set. The operator can only be applied to cubelet sets. The result of a roll-up is always a single cubelet.
\[ C' := \uparrow (P, PD)\, S, \text{ where } P \geq P_S \]
- The **equalize operator** replaces the partitioning schema of a cubelet or rather a cubelet set with the global partitioning schema \( P_G \). The equalize operator applied to a cubelet coarser than the global partitioning schema results in a cubelet set.
\[ S' := \downarrow S \text{ or } S' := \downarrow C, \text{ iff } P_C > P_G \]
- The **collapse operator** removes all feature attributes from a cubelet \( C \), thus decrementing the dimensionality of the cubelet to the dimensionality of the partitioning schema.
\[ C' := \leftarrow C := (P, PD, \emptyset) \]
- The **compare operator** checks two cubelets, a cardinality cubelet \( C_C \) and a query cubelet \( C_Q \), for different cardinality values in the same cell context. The result of the compare operator is a set of cubelets denoting the feature attribute contexts with differing cardinalities:
\[ S_{\Delta} := \Delta(C_C, C_Q) \]
By applying the roll-up operator to a set of cubelets, the cubelet cells are aggregated according to a specified aggregation function. To simplify further descriptions, we restrict our analysis to the \( SUM() \) and \( COUNT() \) aggregation functions, e.g. \( C' := \uparrow SUM(P, PD)\, S \). Since \( COUNT() \) is independent of the aggregation type of an attribute ("flow," "stock" or a "value-per-unit"), it can always be applied in combination with other aggregation functions [9], e.g. \( C' := \uparrow \{ SUM, COUNT\}(P, PD)\, S \). Other functions like \( AVG() \), \( MIN() \) and \( MAX() \) work as well but are beyond the scope of this paper.
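The combined SUM/COUNT roll-up can be sketched as follows. This is a hedged illustration under our own encoding: a cubelet set is a list of (descriptor, feature-combination, value) triples, and `coarsen` (an assumed helper) maps fine descriptors onto nodes of the coarser schema.

```python
from collections import defaultdict

def roll_up(cells, coarsen):
    """Roll up SUM and COUNT per coarse descriptor/feature combination."""
    sums = defaultdict(int)
    counts = defaultdict(int)
    for pd, fa, value in cells:
        key = (coarsen(pd), fa)
        counts[key] += 1          # null cells count; n.a. cells are absent
        if value is not None:
            sums[key] += value    # null contributes nothing to the sum
    return sums, counts

# countries roll up to the region 'EU4'
to_region = {"GER": "EU4", "GB": "EU4", "FR": "EU4"}.get
cells = [("GER", ("black", "Sony"), 5),
         ("GB",  ("black", "Sony"), 3),
         ("GER", ("black", "Aiwa"), None)]   # stored null: nothing sold
sums, counts = roll_up(cells, to_region)
```

Note how the COUNT piggybacks on the same pass as the SUM, which is why the completeness check later comes at no extra cost.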
Aside from the operators on cubelets, we also need operators for cubelet sets, more precisely **join** and **minus** operators similar to the natural join and minus in the relational algebra. Both operators can solely be applied to cubelet sets sharing the global partitioning schema. For two given cubelet sets \( S \) and \( S' \), where \( P_G = P_S = P_{S'} \), the operators are defined as follows:
- The result of the **join operator** is the set of all combinations of cubelets in \( S \) and \( S' \) that are equal in their partitioning description.
\[ S'' := \uplus (S, S'), \quad C \in S \land C' \in S' \]
\[ S'' \ni C'' := (P_G, PD_C, FA_C \cap FA_{C'}), \text{ if } PD_C = PD_{C'} \]
- The result of the **minus operator** is the set of those cubelets that hold the value not available (n.a.) in a feature value combination of \( S' \) but exist in the corresponding context of \( S \).
\[ S'' := -(S, S'), \quad \text{where} \quad PD_C = PD_{C'} \land FA_C = FA_{C'} \]
### 2.4 Cubelet Completeness Specification
In order to verify the completeness of a data cubelet, the cardinality metric is introduced. In general, the cardinality is the number of data cubelet cells in a set of cubelets addressed by a partitioning schema \( P \), a partitioning descriptor \( PD \) as well as a set of feature attributes \( FA \). Thus, the cardinality is an implicit result of the roll-up operator specified above. Depending on the cubelets the roll-up is applied to, we can distinguish between two types of cardinalities.
**Expected Cardinality:** The expected cardinality stored in a cardinality cubelet \( C_C \) consisting of one cell is the number of total cubelets addressed by \( C_C \): \( C_C := \uparrow COUNT(P_C, PD_C)\, S_{\text{Total}} \).
**Actual Cardinality:** The actual cardinality of a query cubelet cell is the number of cells from the underlying data cubelets which hold either a numerical or a null value, \( C_Q := \uparrow COUNT(P_Q, PD_Q)\, S_{\text{Data}} \). This means that the actual cardinality is incremented by
\[
\begin{cases}
1 & \text{if the cell is not n.a.} \\
0 & \text{if the cell is n.a.}
\end{cases}
\]
The cardinality cubelet is derived from low-dimensional total cubelets, whereas the query cubelet is derived from high-dimensional data cubelets. But since the roll-up operates solely on classification attributes, which are the same for data and total cubelets, the comparison of both cardinalities leads to the completeness definition.
**Completeness:** A query cubelet \( C_Q \) addressing a set of data cubelets is answered completely if the actual cardinality of each cell of \( C_Q \) equals the expected cardinality in \( C_C \), where \( C_C := \leftarrow C_Q \).
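The completeness definition reduces to a per-cell comparison. A minimal sketch (our own encoding; the function name and data are assumptions, not from the paper):

```python
def incomplete_cells(expected_card, actual_cards):
    """Return the feature combinations whose actual cardinality differs
    from the expected cardinality of the cardinality cubelet."""
    return sorted(fa for fa, card in actual_cards.items()
                  if card != expected_card)

# three total cubelets exist, so every complete query cell must have
# actual cardinality 3
expected_card = 3
actual_cards = {("black", "Sony"): 3, ("black", "Aiwa"): 2}
missing = incomplete_cells(expected_card, actual_cards)
```

Any cell returned here points to at least one data cubelet cell that still has to be computed from the raw data.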
To illustrate the previous definitions, the next section presents a detailed example.
### 2.5 Example
The schema in Figure 3 illustrates our further descriptions. Consider a global partitioning schema \( P_G := (S.COUNTRY, P.GROUP) \) for the total cubelet set \( S_{\text{Total}} \) and the data cubelet set \( S_{\text{Data}} \). The data cubelet set consists of the three cubelets \( (GER, TV), (GB, TV) \) and \( (FR, TV) \), which are further segmented by a set of feature attributes \( FA := \{ \text{color}, \text{brand} \} \) (Figure 3e). The numerical values in the data cubelet cells denote the sales of the product group television in the countries Germany, Great Britain and France. These sales values are subdivided into the colors "black" and "silver," and the brands "Sony" and "Aiwa." For each data cubelet, there exists one total cubelet without the additional feature attributes (Figure 3b). The total cubelet set is derived from the raw cubelet set by replacing the raw partitioning schema with the global partitioning schema \( P_G \) and by removing the feature attributes: \( S_{\text{Total}} := \leftarrow \downarrow S_{\text{Raw}} \).
The data cubelet in context \((GER, TV)\) holds a null value for cell \{black, Aiwa\}. This means that not a single black Aiwa TV set was sold in Germany. In contrast, the cell \{black, Aiwa\} in cubelet \((FR, TV)\) is "not available" (n.a.), which means that this cell has not been computed and therefore, its value is unknown.
A query cubelet \( C_Q \) with the partitioning schema \((S.REGION, P.GROUP)\) and partitioning descriptor \((EU4, TV)\) is specified (Figure 3e). The value 'EU4' is the fusion of the four countries Germany, Great Britain, France and Spain. The corresponding cardinality cubelet is defined as \( C_C := \leftarrow C_Q \) (Figure 3d). The expected cardinality is the number of total cubelets addressed by the cardinality cubelet. For our example, the expected cardinality is 3, since no total cubelet exists for Spain. That means no sales data is available in the raw data for that country. This is an important observation: the expected cardinality is not just the Cartesian product of the requested classification attribute values, which would be 4 for our example. It denotes the general availability of data at the granularity of the global partitioning schema.
The actual cardinalities, together with the summarized sales values, are derived from the data cubelet set \( S_{\text{Data}} \).
\[
C_Q := \uparrow \{ \text{SUM, COUNT} \}((S.REGION, P.GROUP), (EU4, TV))\, S_{\text{Data}}
\]
To compare both cubelets, we apply the compare operator \( S_{\Delta} := \Delta(C_C, C_Q) \), which yields the cubelet \{black, Aiwa\} with a cardinality of only 2. To identify the missing cubelet cell, the coarse cubelet set \( S_{\Delta} \) is broken down to the global partitioning schema and joined with the total cubelet set \( S_{\text{Total}} \).
\[
S_{\text{Ref}} := \uplus(\downarrow S_{\Delta}, S_{\text{Total}})
\]
The resulting cubelet set denotes the cubelets which should be available. A minus with the data cubelet set leads to the missing cubelet which must be computed to answer the query completely.
### 2.6 The Semantics Of Null Cells
It is essential for our cardinality comparison to distinguish between null and n.a. cells, i.e. between an aggregate value which has been computed but is null, and an aggregate combination which does not exist within the data mart, i.e. whose value is unknown. This semantics implies that resulting null values must be stored explicitly and cannot be omitted. The knowledge of the non-existence of a cell is a necessary and valuable piece of information, required to increment the actual cardinality of a derived query cubelet. Otherwise, the actual cardinality would almost always be smaller than the expected cardinality, since null values occur quite often in sparse multi-dimensional data. If the null value from the example in the last section were not stored, the actual cardinality for the corresponding cell would be 2 instead of 3.
## 3 Designing the Data Mart Cache
In the previous section we formally introduced our cubelet partitioning schema. In this section we want to utilize the cubelet idea and illustrate the cache infrastructure as well as the query processing workflow.
### 3.1 Query Processing Infrastructure
A cache as well as a data mart is a collection of duplicated data, where the original values would be expensive to compute relative to reading the cache or the data mart. In contrast to a cache, a data mart is hardly limited by the available storage space. This means that a data mart can be pre-filled with the typical workload, so that it can answer the majority of all queries. Nevertheless, it must be guaranteed that a query is answered completely, i.e. computed on raw data in case data is missing in the cache-enabled data mart. This can be achieved by applying the cubelet idea as illustrated below.
**Global Partitioning Schema** The most important step during the setup of the cache-enabled data mart is the specification of the global partitioning schema. As mentioned in Section 2.1, the global partitioning schema consists of a set of attributes associated with the classification hierarchies, i.e. one classification attribute per dimension. The granularity of the global partitioning schema influences the query workload which can be answered by the data mart, as well as the query processing time. A high granularity restricts the workload to a few "coarse" queries which can be answered very efficiently, whereas a low granularity is more flexible with respect to the workload but requires more processing resources. Since the typical workload is well known in most applications, the global partitioning schema can be specified accordingly. As in the previous section, we consider a global partitioning schema \( P_G = (Product.Group, Shops.Country) \).
**Aggregate and Publish Table** The design of the query processing infrastructure is based on the notion of dividing the multi-dimensional space into coarse uniform partitions along the classification hierarchy, the cubelets. According to the separation into data and total cubelets, the aggregates are stored in the so-called aggregate table and the availability state is stored in the publish table (see Figure 4 for an example). The feature attributes for the aggregates are stored separately in the feature set table, which is linked from the aggregate table by the foreign key featureid. Each feature set consists of a set of tuples \( FA_i = v_i \), where each tuple represents a feature value pair, e.g. \( \text{color} = \text{'silver'} \). These data structures reduce the storage space of the aggregate table since most cubelets share the same feature sets. For example, the feature set \( \text{color} = \text{'silver'} \) occurs in almost every cubelet, independent of the values for product group and country.
The availability of the raw data is stored in the publish table, which is initially filled with the following query.

As an ongoing example, we consider a query \( Q \) which asks for the aggregate sales of the product groups ‘DVD’ and ‘TV’ in two regions ‘EU2’ and ‘EU5’, denoting federations of two and five European countries, additionally divided by three different feature sets (see feature table in Figure 4):
```sql
SELECT s.region, a.group, SUM(a.sales)
FROM AGG a, SHOP s
WHERE a.country = s.country
  AND s.region IN ('EU2', 'EU5')
  AND a.featureid IN (2, 3, 5)
GROUP BY s.region, a.group, a.featureid
```
An overview of the query processing can be seen in Figure 5. For each query $Q$ which should be answered using the cache, the selection and group-by attributes need to be analyzed to determine the expected cardinality for each cubelet, i.e. the number of aggregates required to answer the query. From query $Q$, a new query $Q'$ is derived by replacing the aggregate table in the FROM clause with the publish table and removing all feature attribute predicates from the SELECT, WHERE and GROUP BY clauses. Furthermore, the aggregation function is replaced with a $\text{COUNT}(*)$. This query is executed and the results, the expected cardinality for each cubelet, are temporarily stored in a table expcard_temp (step 1 in Figure 5).
```sql
SELECT s.region, p.group, COUNT(*)
FROM PUBLISH p, SHOP s
WHERE p.country = s.country
  AND s.region IN ('EU2', 'EU5')
GROUP BY s.region, p.group
```
For our example, the expected cardinalities are 2 and 5 respectively. Next, a $\text{COUNT}(*)$ is added to the SELECT clause of the original query $Q$. This query is executed to obtain the query result and the cardinalities for each derived aggregate (step 2). The additional $\text{COUNT}(*)$ does not lead to any measurable extra costs for the query processing (see section 4.1). Again the query result is temporarily stored in a table called query_temp.
In the third step, the cardinalities of the result aggregates are compared to the expected cardinalities by joining both tables over the classification attributes, e.g. region and group. Each join tuple which differs in the cardinalities denotes that an aggregate is missing in the appropriate cubelet (this corresponds to the compare operator in section 2.3).
```sql
SELECT q.*, e.card
FROM query_temp q, expcard_temp e
WHERE q.productgroup = e.productgroup
  AND q.region = e.region
  AND q.card <> e.card
```
If the result of that join is empty, the query is completely answered and the process stops. Otherwise, the missing aggregates need to be identified and computed. For the ongoing example, the result tuple ('EU5', 'TV', 5) is incomplete since the cardinality is 4 instead of 5. This example was chosen to demonstrate the whole query processing workflow. Since the data mart content is oriented on the typical workload, the cardinality check will be positive for the majority of the queries and therefore the query processing will be finished after the third step with a high probability.
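Steps 1 to 3 can be replayed end-to-end on a toy in-memory SQLite database. Table and column names loosely follow the text (AGG, PUBLISH, SHOP); all rows are invented, and `pgroup` stands in for the reserved word `group`.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE shop(country TEXT, region TEXT);
    CREATE TABLE publish(country TEXT, pgroup TEXT);
    CREATE TABLE agg(country TEXT, pgroup TEXT, sales INT);
    INSERT INTO shop VALUES ('GER','EU2'),('GB','EU2');
    INSERT INTO publish VALUES ('GER','TV'),('GB','TV');
    INSERT INTO agg VALUES ('GER','TV',5);  -- the GB aggregate is missing
""")

# step 1: expected cardinality per coarse aggregate (from the publish table)
expected = con.execute("""
    SELECT s.region, p.pgroup, COUNT(*) FROM publish p, shop s
    WHERE p.country = s.country GROUP BY s.region, p.pgroup""").fetchall()

# step 2: original query plus the extra COUNT(*) as actual cardinality
actual = con.execute("""
    SELECT s.region, a.pgroup, SUM(a.sales), COUNT(*) FROM agg a, shop s
    WHERE a.country = s.country GROUP BY s.region, a.pgroup""").fetchall()

# step 3: differing cardinalities reveal the incomplete coarse aggregate
incomplete = [e[:2] for e, a in zip(expected, actual) if e[2] != a[3]]
```

Here the expected cardinality for ('EU2', 'TV') is 2 but only one base aggregate exists in the cache, so the coarse aggregate is flagged as incomplete.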
The incomplete coarse aggregate ('EU5', 'TV', 5) must be decomposed to the cubelet granularity, group and country, to identify the one missing base aggregate. This is done by joining that incomplete aggregate with the product and shop dimensions to resolve the global partitioning schema of the aggregate table, i.e. group and country. Furthermore, we need a join with the publish table to exclude those tuples which are not available anyway. For the example, we get 5 aggregate tuples, which we denote as expected aggregates, i.e. all the aggregates which must be available to answer the query.
Then, we determine all base aggregate tuples which are actually available, i.e. the existing aggregates (step 4). To this end, we join the incomplete aggregate ('EU5','TV',5) with the appropriate total cubelets in the publish table and the dimension tables to resolve the global partitioning schema, i.e. group and country. For the example, we get 4 aggregate tuples.
To obtain only the missing base aggregates, we perform a $\text{MINUS}$ operation between both sets, the expected aggregates and the existing aggregates.
The resulting tuples are the missing base aggregates; in our scenario, exactly one. These missing aggregates must be computed on the raw data (step 5) and merged into the data mart cache, more precisely into the aggregate table (step 6). Finally, the whole cache exploitation process for query \( Q \) must be repeated, since the cache has been updated in the meantime. But unlike the first run, the second run is guaranteed to finish after step 3.
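Once both sides are decomposed to the global partitioning granularity (group, country), the MINUS step reduces to a plain set difference. The tuples below mimic the EU5 example with one missing aggregate; the concrete country codes are our own invention, since the text does not enumerate the EU5 members.

```python
# expected aggregates: everything the publish table says must exist
expected_aggs = {("TV", "GER"), ("TV", "GB"), ("TV", "FR"),
                 ("TV", "IT"), ("TV", "ES")}       # 5 expected aggregates

# existing aggregates: what the aggregate table actually holds
existing_aggs = {("TV", "GER"), ("TV", "GB"),
                 ("TV", "FR"), ("TV", "IT")}       # 4 found in the cache

missing_aggs = expected_aggs - existing_aggs       # compute these on raw data
```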
## 4 Performance Analysis
In this section, we present experiments that validate our scalability and compression expectations. Both real and synthetic datasets were used in the experiments. The synthetic dataset follows a Zipf distribution (skew = 1.5) and consists of 1,000,000 aggregates. The real dataset records market research data containing 2,853,234 tuples.
### 4.1 Scalability
The first set of experiments studies the scalability of the coarse-grained cache concept. For this purpose, we generated a set of queries which could be answered completely just by using the data mart. Figure 6a illustrates the scalability of our cache concept as the number of aggregates increases from 100 thousand to 2.5 million, with a query requesting 200 tuples. Figure 6b shows similar characteristics, as the runtime increases with the number of requested tuples. Both figures show that the runtime goes up linearly as the aggregate table size and the number of requested aggregates increase, i.e. coarse-grained caching is scalable with respect to the number of aggregates and the number of requested tuples.
### 4.2 Completeness Check
A second set of experiments studies the cost of the completeness check, consisting of the computation of the expected cardinality, the additional \( \text{COUNT}(*) \) to determine the actual cardinality, and the comparison of both values. The expected cardinality is computed using the publish table, which holds the availability of data at the granularity of the global partitioning schema. Compared to the aggregate table, the dimensionality of the publish table is very low, which is reflected in the size of both tables. To determine the completeness for the 2.8 million aggregates of the real dataset, only 1,019 records need to be stored in the publish table. Furthermore, the size of the publish table is fixed, whereas the aggregate table grows over time. The size of the publish table strongly depends on the global partitioning schema, which is specified by a set of classification attributes at a specified granularity. The higher the number of classification attributes and the lower the granularity, the more records need to be stored. Nevertheless, the number of classification attributes is much lower than the number of feature attributes, so the processing time of the publish table requests is negligible compared to the overall runtime.
Furthermore, we measured the impact of the \( \text{COUNT}(*) \) which must be added to each query performed on the data mart. Table 1 shows the average runtimes for five different queries, each evaluated 50 times on the real dataset. The additional \( \text{COUNT}(*) \) led to an insignificant runtime overhead of 1.7 % compared to queries without the \( \text{COUNT}(*) \).
| Query | 1 | 2 | 3 | 4 | 5 |
|-------|---|---|---|---|---|
| with COUNT(*) | 350 | 462 | 308 | 584 | 648 |
| without COUNT(*) | 321 | 455 | 299 | 518 | 640 |
Table 1. Runtime Comparison of Queries with and without \( \text{COUNT}(*) \)
To verify the completeness of a query, the expected and actual cardinalities need to be compared. For this purpose, the table with the expected cardinality for each coarse aggregate is joined with the query result including the actual cardinalities for each tuple. Due to the low dimensionality of the global partitioning schema, the number of tuples, i.e. the number of cardinality cubelets derived from the publish table, is very low as well. Hence, this join benefits significantly from the hash join operator, which builds an in-memory hash table of the smaller of the two relations. This can be seen in Figure 6b, where the cardinality comparison does not influence the overall query processing for increasing request sizes.
### 4.3 Evaluation
Since the computation and comparison of the expected and actual cardinalities does not increase query runtimes significantly, our completeness check comes almost for free. Consequently, our coarse-grained cache concept achieves its best performance if all requested tuples are already materialized in the data mart. This is the case for the majority of all queries, since the data mart content is oriented toward the typical workload. Thanks to our optimistic approach, we obtain the query result with the additional completeness verification at almost the same cost as the query processing itself.
In a last experiment, we measured the influence of missing aggregates on scalability. As the number of missing aggregates, which must be computed on raw data, increases, the overall runtime goes up linearly (see Figure 6c). But once more, the benefit for the majority of all queries, which can be answered in one run, strongly exceeds the runtime overhead incurred by a few queries.
## 5 Related Work
The general problem of answering queries using views has been studied extensively in [6] [10]. Answering aggregate queries using views has been studied in [11]. A fundamental problem is query rewriting, which is provably NP-complete [7] [10]. Finding rewritings for aggregate queries introduces additional sources of complexity compared to conjunctive queries without aggregation [1] [2]. Materialized views have the drawback of providing pre-aggregated data at fixed levels, implying that only a certain class of aggregate queries profits from the pre-aggregation. Within a cubelet, we are able to hold aggregates at different levels. Furthermore, we avoid the rewriting mechanism, since the containment check is done at the instance level by counting and comparing cardinalities.
DCache [3] uses cache groups and introduces the notion of cache key constraints and referential cache constraints to ensure value and domain completeness. Once these constraints are specified by the DBAs, DCache can asynchronously populate the cache tables on demand. However, DCache is inapplicable for caching aggregates and the use of cache constraints in the multi-dimensional model can easily lead to huge amounts of data which has to be loaded into the cache database.
A caching scheme specific to OLAP applications is proposed in [5]. It decomposes the multi-dimensional space into chunks. For incoming queries, the required chunks are computed and split into two subsets: cached and non-cached chunks. To answer the query, the system computes the missing chunks from the raw data. This approach was further extended in [4] to also consider chunks at different aggregation levels. Chunk-based caching is quite similar to our approach: both split the multi-dimensional data into uniform semantic data regions, which is very natural in the OLAP domain. But instead of storing the cache data in chunked multi-dimensional arrays, we use the notion of partitions to define the completeness criterion for all resulting aggregates of a query.
To reuse the results of former queries, summarizability of aggregates as the units of caching is a necessary prerequisite, which was not discussed in detail in this paper. For definitions of derivability, see the work of [9].
## 6 Summary and Outlook
We have introduced a novel cache exploitation framework for multi-dimensional data which works very well in the data mart domain with its specific update characteristics. A further contribution of this paper is the formal introduction of cubelets, which allowed us to define query completeness through the comparison of expected and actual cardinalities. We evaluated the scalability of our framework using real and synthetic datasets. Since the coarse-grained caching framework maintains the relational nature of the data, it can easily be integrated into existing data mart environments.
As mentioned in Section 2.6, it is essential for the cardinality comparison to store null aggregates explicitly. Depending on the density of the raw data, this results in a large number of null aggregates that need to be stored; first experiments show that null aggregates make up 90% of all aggregates. Because null aggregates contribute nothing to the result of a query but are only needed to determine the exact cardinalities, they can be stored in an alternative, compact manner. It can be observed that there are dependencies between aggregates of related feature attribute sets. For example, when an aggregate with the feature ‘color’ and the feature value ‘blue’ has a null value for a specific measure, the finer aggregate ‘color=blue, size=large’ must also be null for that measure. So, instead of storing all aggregates assigned null values, specific aggregates can be identified from which all other null aggregates can be derived whenever it is necessary to determine the exact cardinalities. A first promising prototype achieved a compression ratio of more than 90%. In future work, we plan to integrate this lossless null-reduction algorithm into our coarse-grained cache framework.
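To illustrate the derivation idea, here is a minimal sketch; the dictionary-based aggregate representation and the helper name are our own assumptions, not from the paper. A null aggregate stored over a coarse feature set lets us decide the null status of any finer aggregate that refines it.

```python
# Sketch: derive finer null aggregates from stored coarse ones instead of
# materialising every null aggregate. Representation is hypothetical: an
# aggregate is a dict mapping feature attributes to feature values.
def is_null(aggregate, stored_nulls):
    """True if some stored null aggregate is a coarsening of `aggregate`."""
    return any(stored.items() <= aggregate.items() for stored in stored_nulls)

stored_nulls = [{"color": "blue"}]          # only the coarse null is stored
print(is_null({"color": "blue", "size": "large"}, stored_nulls))  # True
print(is_null({"color": "red", "size": "large"}, stored_nulls))   # False
```

Only the coarse aggregates need to be materialised; the finer null aggregates are reconstructed on demand for the cardinality check.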
We are currently working on the finalization of the implementation of our coarse-grained cache for a real-world market research project in cooperation with GfK Marketing Services, Nuremberg.
Motivation
In the past, the EDA industry and designers have struggled with the issues of having multiple languages in use for describing, implementing and verifying their designs, such as Verilog and VHDL. This has led to gross inefficiencies in the industry, with tool vendors needing to support multiple languages, which are often dissimilar and in some cases contradictory, and with users having to deal with incompatible library issues. With the industry embarking on the search for new system level languages, several languages based on C or C++ are already emerging, and the distinct possibility is arising that we will again be faced with language “wars”. In order to prevent this we need to ensure a minimum level of compatibility between them, so that it can be guaranteed that information can be moved from one language to another without loss. It is for this reason that this working group was formed and at which the attached document is aimed.
All languages have two major parts. The first of these is the syntax. This is the set of actual language constructs that are presented to the user. For example, this document is written using a specific syntax which defines the character set and the construction rules. This syntax is actually almost completely common to many languages such as English, French, German etc. However, the syntax does not tell us how to interpret what has been written. For that we need the second component of the language, its semantics. Semantics contains the rules of interpretation; it allows us to know what is meant by the words. It is in fact possible to layer many different syntaxes on top of a single set of semantics, and it is this fact that provides the basis for this work. In our language analogy, when we speak we are using the same semantic set as in the written form above, but have now substituted a different syntax, which is speech. What we set out to accomplish in this document is a set of semantic definitions that could be used to bind together all of the emerging system level languages. This still allows the language vendors to compete by creating the best tools and syntax for specific functions. To continue our analogy further, when we have a common semantic set and multiple syntaxes, we can choose whichever works best for a particular situation. For example, in some cases it is better to distribute a document like this in text form. Other times speech works well; however, we know that we can always translate by reading the text out loud, and there is no loss of content.
In the early formative stages of this group, we looked extensively at how these languages may be integrated into design flows. Several points were identified in these flows as being the points at which a user may need to transfer design information from one tool set to another. These are the target points where we need to ensure that the languages concur on the meaning of what is being represented. As the first step we decided to concentrate on the lowest level of transfer, which is the RTL level. This is a point where many existing tools are established in design flows and displacement of these may not be desired. This document is the result of that work, and it is hoped that it will act as a point of unification amongst the providers of these languages. In fact this work is the result of many of those firms who are committed to making this happen.
Review Procedure:
This document is currently in an open review period where anybody is free to examine and to provide feedback on the document. We are doing this because of the importance of this work for the industry, and we want to make sure that everyone in the industry has the ability to ensure compatibility with the systems out there that we may not have considered. All feedback should be sent to brian_bailey@mentor.com and should be received no later than April 13th 2001.
Feedback should be limited to technical issues since it is known that there are many wording and grammatical errors that need to be fixed. This will be done during an extensive reformatting process before the formal release of the document. For all feedback to be considered, it must contain the following:
Full name, affiliation and valid contact information including postal address and phone number. A description of the identified problem and wherever possible the way that we should change the document in order for the reported problem to be solved.
We thank you for your help.
Contributors:
This document is the result of a lot of work, most of which was contributed by a few key members of the group. Those contributors are marked with an asterisk in member list below for the working group.
Co-Chairs
* Brian Bailey - Mentor Graphics
* Dan Gajski – UC Irvine
Members:
* Martin Baynes – Previously of C Level Design
Yuri Panchul - C Level Design
Grant Martin - Cadence
Simon Davidmann - CoDesign Automation
Peter Flake - CoDesign Automation
Paula Menzigian – Coware
* John Sanguinetti - CynApps
* Dave Springer – CynApps
Guy Bois - Ecole Polytechnique de Montreal
Duncan Gurley - Fujitsu WWSLT
Kamal Hasmi – SpiraTech Ltd
Vassilios Gerousis – Infineon
* Asa Ben-Tzur - Intel Corp
Dennis Brophy - Mentor Graphics
Andrew Guyler - Mentor Graphics
Shrenik Mehta - Sun Microsystems
Charles Cheng - Sun Microsystems
Wolfgang Nebel - OFFIS / Oldenburg University
* Kevin Kranen – Synopsys
Frederic Doucet - UC Irvine
Filip Thoen – Virtio
INTRODUCTION
This report is intended to relate RTL theory and practice with RTL modeling and synthesis/simulation issues. RTL semantics must be based on simple implementable models. The semantics defines what an RTL model means, which in turn is defined by how an RTL design is implemented. In this report we start with a generic RTL implementation, called the RTL processor, then derive the formal model for such an RTL processor, and then describe how to build systems out of RTL processors. This way we can define semantics for RTL modeling, simulation, and synthesis.
1 RTL PROCESSOR
In order to define the RTL design flow we define the RTL-processor model. Such a model consists of a controller and a datapath. As shown in Figure 1(a), the model has two types of I/O ports. The first type are data ports, which are used by the outside environment to send and receive data to and from the model. The data could be of type integer, floating point, character, bit vector or any other type and any size. The data ports are usually 8, 16, 32 or 64 bits wide and have different types of attributes. The other type are control ports, which are used by the outside environment to receive information about the status of the model and to send information about the status of the environment. These two types of ports may be identified in the model definition so that the controller and datapath can be easily synthesized without a complex inference procedure, or they may be left untyped for synthesis to decide.
As shown in Figure 1(b), the datapath consists of storage units such as registers, register files, and memories, and combinatorial units such as ALUs, multipliers, shifters, and comparators. These units and the input and output ports are connected by buses. The datapath takes the operands from storage units or input ports, performs the computation in the combinatorial units, and returns the results to storage units or output ports during each state, which is usually equal to one clock cycle.

*FIGURE 1(a) RTL Processor*

*FIGURE 1(b, c) RTL Processor*
The selection of operands, operations, and the destination of the result are controlled by the controller by setting proper values of datapath control signals. The datapath also indicates through status signals when a particular value is stored in a particular storage unit or when a particular relation between two data values stored in the datapath is satisfied. The input ports can be connected directly to register or storage units or to any other component in the datapath including the output ports. The output ports could be used for possible connections to other RTL processors through outside buses or directly through point-to-point connection.
*FIGURE 1(d) Datapath-pipelined register-transfer-level diagram*
Similar to the datapath, the controller has a set of input and a set of output signals. Each signal is usually, but not necessarily, a Boolean variable. There are two types of input control signals: external signals and status signals. External signals represent conditions in the external environment to which the model must respond. The Start signal in the one’s counter example in Figure 2(a), which starts the one’s counter, is such an input signal. The status signals, on the other hand, represent the state of the datapath. Their values are obtained by comparing values of selected variables stored in the datapath. For example, Data = 0 in the one’s counter example is such a signal: its value is 1 when the value of Data is equal to 0, and 0 otherwise.
There are also two types of output control signals: datapath control signals and external signals. The control signals select the operation for each component in the datapath, while the external signals indicate to the environment that the model has reached a certain state or finished a particular computation. A controller consists of a state register and next-state and output logic. The next-state logic generates the value of the state register in the next clock cycle, while the output logic generates the values of the control and external signals. If the external control signals depend only on the state of the controller, the controller is called state-based or Moore-type; if they also depend on the input signals, it is called input-based or Mealy-type.
Each RTL processor follows this general architecture, although two RTL processors may differ in the number and type of control units and datapaths, the number of components and connections in the datapath, the number of states in the control unit, and the number and type of I/O ports.
A RTL processor may also be pipelined in several different ways:
(a) By inserting latches or registers on control signals and/or status signals, we obtain pipelined control as shown in Figure 1(c). Control registers are usually inserted in the last implementation stage, while a status register is frequently used from the beginning. However, a status register introduces at least one state of delay: the condition evaluation must be performed one state before it is used, since it is loaded into the status register in one state and used in the next. Similarly, the control register introduces one state of delay in condition evaluation.
(b) The datapath can also be pipelined by inserting latches or registers on selected connections, such as after storage elements, before functional units, and after functional units, as shown in Figure 1(d). With datapath pipelining, the result of a register transfer can be used only \( n \) states (clock cycles) later, where \( n \) is the number of datapath stages.
(c) Each functional unit can be pipelined by dividing it into several stages and inserting latches between the stages, as shown in Figure 1(e), where the multiply/divide unit is divided into 2 stages. In the case of pipelined units, the result of an operation can be used only \( n \) states later, where \( n \) is the number of pipeline stages in the functional unit.
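The timing consequence of pipelining a functional unit can be sketched as a toy software model (not from the document; the class and method names are ours): an \( n \)-stage unit behaves like a FIFO of depth \( n \), so a result issued in one state emerges \( n \) states later.

```python
from collections import deque

class PipelinedUnit:
    """Toy model of an n-stage pipelined functional unit: an operation
    issued on one clock produces its result n clocks later."""
    def __init__(self, n, fn):
        self.fn = fn
        self.pipe = deque([None] * n)     # one slot per pipeline stage
    def clock(self, operands=None):
        out = self.pipe.popleft()         # value leaving the last stage
        self.pipe.append(self.fn(*operands) if operands is not None else None)
        return out

mul = PipelinedUnit(2, lambda a, b: a * b)   # 2-stage multiplier, as in Fig. 1(e)
print(mul.clock((3, 4)))  # None  (issue)
print(mul.clock())        # None  (still in flight)
print(mul.clock())        # 12    (available 2 states later)
```

While the result is in flight, the issuing state's successor states must not read it, which is exactly the scheduling constraint stated above.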
2 FSMD DEFINITION
In Section 1, we discussed the RTL processor model in general terms. In this section we discuss how to specify its functionality. We introduce the approach with the example of a one’s counter, which counts the number of 1s in the word presented at the Inport, as shown in Figure 2(a).
The one’s counter is specified by an FSM, representing the control unit and a set of variable assignments representing register transfers in the datapath.
The FSM has eight states and transitions from one state to another under the control of the external signal Start and the status signal \( Data = 0 \). In each state the FSM assigns values to a set of datapath control signals, which completely specifies the behavior of the datapath. However, when there are too many control signals it is difficult to see what the datapath will do and how. To improve the comprehension of such a specification, we use variable assignment statements to describe the changes in variable values during the datapath operation.
A variable assignment statement gives an expression to be used for the computation of the new variable value. In each state, and for each variable assignment associated with that state, the datapath evaluates the expression on the right-hand side of the assignment and assigns it to the variable on the left-hand side. Generalizing from the one’s counter specification, we may say that an FSM model with assignment statements added to each state, called an FSM with data, or FSMD, can completely specify the behavior of an arbitrary RTL processor.
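As an illustration, here is a minimal software sketch of such an FSMD for the one's counter. Figure 2(a) is not reproduced here, so the state names and the exact register transfers are our own guesses; only the overall behavior (count the 1 bits of the word at Inport, looping until the status signal Data = 0 holds) comes from the text.

```python
def ones_counter_fsmd(inport, start=True):
    """FSMD sketch: count the 1 bits in the word presented at Inport."""
    if not start:
        return None                 # the FSMD idles until Start is raised
    data, ocount = inport, 0        # register transfers: Data <- Inport, Ocount <- 0
    state = "LOOP"
    while state != "DONE":
        if data == 0:               # status signal Data = 0 steers the transition
            state = "DONE"
        else:                       # assignments associated with the loop state
            ocount = ocount + (data & 1)   # Ocount <- Ocount + LSB(Data)
            data = data >> 1               # Data <- Data >> 1
    return ocount

print(ones_counter_fsmd(0b10110))   # 3
```

Each loop iteration corresponds to one state (clock cycle) in which the assignments of that state are performed and the next state is chosen from the status signal.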
In order to define an FSMD formally, we must extend the definition of an FSM by introducing sets of datapath variables, inputs, and outputs that complement the sets of FSM states, inputs, and outputs. As usually defined, an FSM is a quintuple
\[ \langle S, I, O, f, h \rangle \]
where \( S \) is a set of states, \( I \) and \( O \) are the sets of input and output symbols, and \( f \) and \( h \) are functions that define the next state and the FSM output. More formally, \( f \) and \( h \) are defined as the mappings
\[ f : S \times I \rightarrow S \]
\[ h : S \times I \rightarrow O \]
They are usually specified by a table in which the next state and output symbols are given for each state and each input symbol. Each state, input, and output symbol is defined by a cross-product of variables. More precisely,
\[ I = A_1 \times A_2 \times \ldots \times A_k \]
\[ S = Q_1 \times Q_2 \times \ldots \times Q_m \]
\[ O = Y_1 \times Y_2 \times \ldots \times Y_n \]
where \( A_i, 1 \leq i \leq k \), is an input signal, \( Q_i, 1 \leq i \leq m \), is the flip-flop output, and \( Y_i, 1 \leq i \leq n \), is an output signal.
To include a datapath, we must extend the FSM definition above by adding the set of datapath variables and the input and output ports. More formally, we define a variable set
\[ V = V_1 \times V_2 \times \ldots \times V_q \]
which defines the state of the datapath by defining the values of all variables in each state. In the same fashion we can separate the set of FSMD inputs into a set of FSM inputs \( I_C \) and a set of datapath inputs \( I_D \). Thus
\[ I = I_C \times I_D \]
where \( I_C = A_1 \times A_2 \times \ldots \times A_k \) as before and \( I_D = B_1 \times B_2 \times \ldots \times B_p \).
Similarly, the output set consists of FSM outputs \( O_C \) and datapath outputs \( O_D \). In other words,
\[ O = O_C \times O_D \]
where \( O_C = Y_1 \times Y_2 \times \ldots \times Y_n \) as before and \( O_D = Z_1 \times Z_2 \times \ldots \times Z_r \). However, note that \( A_i, Q_i \) and \( Y_i \) usually represent Boolean variables, while \( B_i, V_i \), and \( Z_i \) represent bit-vectors, integers, floating-point numbers, characters and other data types.
Except for very trivial cases, the size of the datapath variables and ports makes specification of the functions \( f \) and \( h \) in tabular form very difficult. To be able to specify variable values in an efficient and understandable way in the definition of an FSMD, we specify variable values using computable functions defined by mathematical expressions.
We define the set of all possible functions, \( \text{Func} \), over the set of variables \( V \) to be the set of all constants \( K \) of the same type as variables in \( V \), the set of variables \( V \) itself, and all the functions obtained by combining two functions with arithmetic, logic, or rearrangement operations. More formally,
\[
\text{Func}(V) = K \cup V \cup \{ (e_i \ast e_j) \mid e_i, e_j \in \text{Func}(V), \ast \text{ is an acceptable math operator or a computable function} \}
\]
Using \( \text{Func}(V) \), we can define the values of the status signals as well as the transformations in the datapath. Let \( \text{STAT} = \{ stat_k = e_i \,\Delta\, e_j \mid e_i, e_j \in \text{Func}(V), \Delta \in \{ \leq, <, =, \neq, >, \geq \} \} \) be the set of all status signals, each described as a Boolean function of one or more relations between variables or functions of variables. Examples of status signals are \( \text{Data} = 0 \), \( (a-b) > (x+y) \), and \( (\text{counter} = 0) \) AND \( (x > 10) \). The relation defining a status signal is either true, in which case the status signal has value 1, or false, in which case it has value 0.
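A status signal of this form can be sketched directly in executable form. The helper names below are ours; the set of relations follows the definition of STAT above, and the example signals are the ones given in the text.

```python
import operator

# Map each relation symbol to the corresponding comparison.
REL = {"<=": operator.le, "<": operator.lt, "=": operator.eq,
       "!=": operator.ne, ">": operator.gt, ">=": operator.ge}

def status(e_i, rel, e_j, env):
    """Evaluate stat = e_i Δ e_j over the variable environment `env`: 1 or 0."""
    return 1 if REL[rel](e_i(env), e_j(env)) else 0

env = {"Data": 0, "counter": 0, "x": 12}
s1 = status(lambda v: v["Data"], "=", lambda v: 0, env)        # Data = 0 -> 1
s2 = status(lambda v: v["counter"], "=", lambda v: 0, env) and \
     status(lambda v: v["x"], ">", lambda v: 10, env)          # (counter=0) AND (x>10) -> 1
print(s1, s2)
```

The expressions `e_i` and `e_j` stand for elements of Func(V); combining several relations with AND yields the compound status signals mentioned above.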
With the formal definition of functions and relations over a set of variables, we can simplify the function $f : (S \times V) \times I \rightarrow S \times V$ by separating it into two parts, $f_C$ and $f_D$. The function $f_C$ defines the next state of the control unit,
$$f_C : S \times I_C \times STAT \rightarrow S$$
while the function $f_D$ defines the values of datapath variables in the next state.
$$f_D : S \times V \times I_D \rightarrow V$$
In other words, for each state $S_i \in S$ we compute a new value for each variable $V_j \in V$ in the datapath by evaluating a function $e_j \in Func(V)$. Thus the function $f_D$ is represented by a set of simpler functions, in which each function in the set defines the variable values for state $S_i$:
$$f_D = \{ f_{Di} : V \times I_D \rightarrow V \mid f_{Di} = \{ V_j = e_j \mid V_j \in V,\; e_j \in Expr(V \times I_D) \} \}$$
In other words, the function $f_D$ is decomposed into a set of functions $f_{Di}$, where each $f_{Di}$ assigns a value $e_j$ to each variable $V_j$ in the datapath in state $S_i$. Therefore, the new values of all variables $V_j$, $1 \leq j \leq q$, in the datapath are computed by evaluating the functions $e_j$ for all $j$.
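This decomposition maps naturally onto a per-state table of assignments. The sketch below (state and variable names loosely echo Figure 3(a); everything else is assumed) also models the register semantics used later in the document: all right-hand sides are evaluated against the current values before any variable is updated.

```python
# f_D as a dictionary of per-state functions f_Di; each f_Di is a set of
# assignments V_j = e_j over the current variable values.
f_D = {
    "S1": {"r1": lambda v: v["r1"] * v["M0"]},
    "S3": {"r1": lambda v: v["r1"] - v["r2"]},
}

def step(state, v):
    # Evaluate every e_j against the *old* values first, then commit:
    # this models registers that all load on the same clock edge.
    new = {name: e(v) for name, e in f_D.get(state, {}).items()}
    return {**v, **new}     # unassigned variables retain their values

print(step("S3", {"r1": 5, "r2": 2, "M0": 3}))   # {'r1': 3, 'r2': 2, 'M0': 3}
```

Running `step` once per state is exactly an evaluation of $f_{Di}$ for the current state $S_i$.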
Similarly, we can decompose the output function $h : S \times V \times I \rightarrow O$ into two functions, $h_C$ and $h_D$, where $h_C$ defines the external control outputs $O_C$ as in the definition of an FSM and $h_D$ defines the external datapath outputs. Therefore,
$$h_C : S \times I_C \times STAT \rightarrow O_C$$
$$h_D : S \times V \times I_D \rightarrow O_D$$
The above definition of an FSMD can be given in tabular form with a state and output table, as shown in Figure 2(b) for the case of the one’s counter. The first three columns define the present state, the next state, and the external control outputs, whereas the next two columns define the datapath outputs and variable values. As usual, the symbol $X$ is used for don’t-care conditions. From the table in Figure 2(b) we see that a new value is assigned to each of the control outputs, datapath variables, and datapath outputs in each state. This kind of tabular definition may become awkward to comprehend and manipulate for large FSMDs with many states and hundreds of variables in the datapath.
Fortunately, many of these variables seldom change their values, except when they represent pipeline registers or latches. Therefore, it is more efficient to assume that variables retain their old values if no new value is specified in a particular state. With this convention, the third, fourth and fifth columns in Figure 2(b) can be written as a set of assignment statements, reminiscent of straight-line code in most programming languages.
Similarly, the second column can be expressed as a set of conditions for transitions to particular states. Such a reduced table is called a state-action table, as shown in Figure 2(c). It is equivalent syntactically and semantically to the specification in Figure 2(a).
As an example, we show a state-action table for the one’s counter in Figure 2(c). Such a table is easy to understand and provides all the necessary information for the implementation of a control unit and a datapath. It can be used to construct the state diagram for the control unit, to synthesize the next-state and output logic, and to define the datapath components and their connections. It is also very easy to translate such a table (Figure 2(c)) or state diagram (Figure 2(a)) into any hardware description language (such as VHDL or Verilog) or into a programming language such as C or Java.
From our FSMD definition (tabular, graphic or language-based) we see that in each state we compute the next state, which depends on some condition from the outside environment or on some status signal computed from the values of FSMD variables. The FSMD definition also allows assignment of values to any variable in each state. However, variables and functions in the definition of an FSMD may have different interpretations, which in turn define several different styles of RTL semantics.
*FIGURE 2(b, c) One’s counter specification: (b) state and output table; (c) state-action table*
Figure 3(a) shows five (5) different RTL styles for a 4-state segment of an FSMD definition and the mappings necessary to arrive at each particular style.
### RTL Semantics
#### STATE TRANSITIONS
<table>
<thead>
<tr>
<th>STATE</th>
<th>UNMAPPED RTL (Style 1)</th>
<th>STORAGE MAPPED RTL (Style 2)</th>
<th>FUNCTION MAPPED RTL (Style 3)</th>
<th>CONNECTION MAPPED RTL (Style 4)</th>
<th>EXPOSED CONTROL RTL (Style 5)</th>
<th>STRUCTURAL RTL</th>
</tr>
</thead>
<tbody>
<tr>
<td>S1</td>
<td>a = d * e</td>
<td>r1 = f^+ (r1,M(0))</td>
<td>r1 = FU2(*,r1,M(0))</td>
<td>Bus1 = r1</td>
<td>C1 = 1</td>
<td></td>
</tr>
<tr>
<td></td>
<td>b = f4( . . . )</td>
<td>r2 = f4 ( . . .)</td>
<td>r2 = . . .</td>
<td>Bus2 = M(0)</td>
<td>C2 = 0</td>
<td></td>
</tr>
<tr>
<td></td>
<td>c = . . .</td>
<td>M(0) = . . .</td>
<td>M(0) = . . .</td>
<td>Bus3 = FU2(*,Bus1,Bus2)</td>
<td>C3 = 1</td>
<td></td>
</tr>
<tr>
<td></td>
<td>f3(a,b,c)</td>
<td></td>
<td>r1 = Bus3</td>
<td>Bus1 = r1</td>
<td>C4 = 0</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Bus2 = r2</td>
<td>C5 = 1</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Bus3 = FU1 (-,Bus1,Bus2)</td>
<td>C6 = x</td>
<td></td>
</tr>
<tr>
<td>S2</td>
<td>d = f4 (a,b)</td>
<td>r1 = . . .</td>
<td>r1 = . . .</td>
<td></td>
<td>C7 = 0</td>
<td></td>
</tr>
<tr>
<td></td>
<td>e = f5 (c,d,e)</td>
<td>M(0) = . . .</td>
<td>M(0) = . . .</td>
<td></td>
<td>C8 = 0</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>r1 = Bus3</td>
<td></td>
<td>C9 = 0</td>
<td></td>
</tr>
<tr>
<td>S3</td>
<td>f = a - b</td>
<td>r1 = f-(r1, r2)</td>
<td>r1 = FU1(-,r1,r2)</td>
<td>Bus1 = r1</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>g = f#( . . .)</td>
<td>r2 = . . .</td>
<td>M(1) = r1</td>
<td>Bus2 = r2</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>h = . . .</td>
<td>M(1) = . . .</td>
<td></td>
<td>Bus3 = FU1 (-,Bus1,Bus2)</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>r1 = Bus3</td>
<td>Bus1 = r1</td>
<td></td>
<td></td>
</tr>
<tr>
<td>S4</td>
<td>i = . . .</td>
<td>M(2) = . . .</td>
<td>M(2) = . . .</td>
<td>Bus2 = r2</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>j = . . .</td>
<td>r2 = . . .</td>
<td>r2 = . . .</td>
<td>Bus3 = FU1 (-,Bus1,Bus2)</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>k = . . .</td>
<td>r1 = . . .</td>
<td>r1 = . . .</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
#### MAPPINGS
- **Storage mappings**
- \{a,d,f,k\} = r1
- \{b,h,j\} = r2
- \{c,e\} = M(0)
- \{g\} = M(1)
- \{i\} = M(2)
- **Function mappings**
- \{f-, f+\} = FU1
- \{f^+, f#\} = FU2
- \{f3,f4,f5\} = FU3
- **Connection mappings**
- r1_to_FU1L = Bus1
- r1_to_FU2L = Bus1
- r2_to_FU1R = Bus2
- M(0)_to_FU2R = Bus2
- M(0)_to_r1 = Bus3
- FU1_to_r1 = Bus3
- FU2_to_r1 = Bus3
- **Control mappings**
- r1_to_Bus1 = C1
- r2_to_Bus2 = C2
- M_to_Bus2 = C3
- FU1_to_Bus3 = C4
- FU2_to_Bus3 = C5
- FU1 = C6
- FU2 = C7
- M(address) = C8
- M(read/write) = C9
### Figure 3(a)
An incomplete example
---
**Copyright 2001 © Accellera**
All Rights Reserved
### 2.1 Unmapped RTL (Style 1)
In unmapped RTL, the variables are divided into ports and internal variables; ports are further divided into control and data ports, each of which could be an input, output or input-output port. Unmapped RTL only specifies, in each state, the change of values for some variables. The order in which assignments are executed is determined by control dependencies, that is, the order written in the description. States, transitions and assignment statements are in no way related to any implementation. The variables do not represent registers or buses, and functions or operations do not represent any functional units.
Unmapped RTL is equivalent to programming language code, with the exception that the code is divided into states, with conditional transitions between states added. For example, we see several assignment statements in each state in Figure 3(a). All statements assign values to uninterpreted variables. The values are computed using standard language operators, or functions if the values require more complex computation. It is assumed that each operator or function is computed in one clock cycle or less.
### 3 Mapped RTL
In mapped RTL, the uninterpreted variables are mapped into storage units or wires/buses, and the computing functions or operators are assigned to functional units. Although this mapping can be performed in any order, it is convenient to map variables into storage first, followed by function mapping and then mapping of wires to buses. This way we can define three styles of mapped RTL.
#### 3.1 Storage-mapped RTL (Style 2)
The variables in Style 1 can be of two types. The first type are variables whose value is used in the same state in which it is assigned; these variables represent wires. The other type are variables whose values are assigned in one state and used in another state. The states between the value assignment and its last use define the lifetime of each variable. These variables must be mapped to storage units such as registers, register files, and memories. Thus, storage-mapped RTL is an RTL description in which variables of the second type with non-overlapping lifetimes are grouped and assigned to storage units. In other words, a group of internal variables is replaced by a new variable of type storage.
In our example in Figure 3(a), we grouped variables $a$, $d$, $f$ and $k$ and assigned them to register $r1$, while $b$, $h$, and $j$ were assigned to register $r2$. Similarly, variables $c$ and $e$ were assigned to memory location $M(0)$, while $g$ and $i$ were assigned to memory locations $M(1)$ and $M(2)$. This assignment is shown in the storage mapping table in Figure 3(a). Note that in Style 2 we used functional notation for all the operators for the sake of uniformity.
**Synthesis Note**: Storage mapping consists of allocating some storage units, grouping variables with non-overlapping lifetimes, assigning them to storage units and replacing variables with storage unit names to which they have been assigned. Note that variables representing wires are not grouped in any way or mapped to any real wires or buses.
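The grouping step in the synthesis note can be sketched as a greedy interval assignment, in the spirit of the left-edge algorithm. This is a simplification we chose for illustration: lifetimes are assumed to be [first-assignment, last-use] state intervals, and the function and register names are ours.

```python
def map_to_storage(lifetimes):
    """Group variables with non-overlapping lifetimes into registers r1, r2, ..."""
    regs = []                                   # per register: end state of last occupant
    assignment = {}
    for var, (start, end) in sorted(lifetimes.items(), key=lambda kv: kv[1]):
        for i, reg_end in enumerate(regs):
            if start > reg_end:                 # lifetime starts after register is free
                regs[i] = end
                assignment[var] = f"r{i + 1}"
                break
        else:                                   # no compatible register: allocate one
            regs.append(end)
            assignment[var] = f"r{len(regs)}"
    return assignment

# Hypothetical lifetimes loosely echoing Figure 3(a): a and d can share r1.
print(map_to_storage({"a": (1, 2), "b": (1, 3), "d": (3, 4)}))
```

After this step, every occurrence of a variable in the description is replaced by the name of the register it was assigned to, as the synthesis note describes.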
**Simulation Note**: While variables representing wires are assigned values in the order written in the specification, variables representing storage units are assigned values on the next clock event (rising or falling edge of a clock). Since a clock event also represents a transition between states, the value assigned in the present state can be used only in future states, until it is reassigned again.
3.2 Function-mapped RTL (Style 3)
In function-mapped RTL, the operators and/or functions with non-overlapping lifetimes are grouped into functional units, and a control encoding is assigned to each operation in the functional unit. Therefore, in Style 3 we must identify the operation performed by each functional unit in each state. Style 3 is the same as Style 2 with functions replaced by multi-operation functional units. Note that the original functions and the functions representing functional units use the same syntax.
As we see in the Function mapping table in Figure 3(a), we have three functional units: $FU_1$ performing addition and subtraction, $FU_2$ performing multiplication and the operation #, and $FU_3$ performing functions $f_3$, $f_4$, and $f_5$. Thus, in Style 3 in Figure 3(a) the operators and the functions $f_3$, $f_4$, and $f_5$ are replaced by the functions performed by functional units $FU_1$, $FU_2$, and $FU_3$. If a functional unit needs more than one state to generate a result, then its inputs must be stable through all of those states, while its output is loaded only in the last state. A functional unit may have several outputs, each of which must be clearly declared when assigning a new value to a variable.
Synthesis Note: Function mapping consists of allocating some functional units, grouping operations and functions with non-overlapping lifetimes into groups assigned to each functional unit, and replacing operators and functions with the functions representing the functional units. If a functional unit takes more than one clock cycle to evaluate, then the FSMD model must be adjusted. Function and storage mapping can be performed in any order.
Simulation Note: If a functional unit takes $n$ states to execute, then its data and control inputs must hold the same value for $n$ states, while the result is loaded or used after $n$ states.
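The multicycle behavior described in the note can be sketched behaviorally (Python, illustrative only; the class and its interface are invented, not from the report):

```python
# Sketch: a functional unit that needs n states to produce its result.
# Inputs must be held stable; the result is available only in the last state.

class MultiCycleUnit:
    def __init__(self, op, n):
        self.op, self.n = op, n
        self.count = 0
        self.inputs = None

    def step(self, a, b):
        """Call once per state with the (held) operand values."""
        if self.inputs != (a, b):
            self.inputs, self.count = (a, b), 0   # operands changed: restart
        self.count += 1
        if self.count >= self.n:
            return self.op(a, b)                  # result ready in last state
        return None                               # still executing

mul = MultiCycleUnit(lambda a, b: a * b, n=3)
assert mul.step(6, 7) is None   # state 1: executing, inputs held
assert mul.step(6, 7) is None   # state 2: executing, inputs held
assert mul.step(6, 7) == 42     # state 3: result can be loaded
```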
3.3 Connection-mapped RTL (Style 4)
Similarly to Style 2, the variables with non-overlapping lifetimes that represent wires, as well as the inputs and outputs to and from storage elements and functional units, are grouped and assigned to buses. Syntactically, there is no difference between wires and buses. The only difference is in the additional bus drivers that must be inserted in Style 5. Similarly, we can merge (multiplex) ports if they are not used at the same time.
We see from the Connection mapping table in Figure 3(a) that the connections from register $r1$ to $FU_1$ and $FU_2$ are assigned to Bus1, the connections from register $r2$ to $FU_1$ and from memory $M$ to $FU_2$ to Bus2, while the connections from $FU_1$ and $FU_2$ back to register $r1$ are assigned to Bus3. The refinement from Style 3 to Style 4 is similar to compiling a programming language into assembly language. Each assignment statement is decomposed into statements representing transfers from storage elements to functional units, functional unit operations, and transfers back from functional units to storage elements. For demonstration purposes, this decomposition is performed only for the first assignment statement in states $S1$ and $S3$ in Figure 3(a).
Synthesis Note: Each register transfer (such as $x = a + b$) must be expanded with variables representing the wires that connect storage units (such as $a$, $b$) to functional units (such as $+$) and the outputs of functional units (such as $+$) to storage units (such as $x$). Once the wire variables are introduced, they can be grouped and assigned to buses. A bus is just another variable. Since only one source can be assigned to each variable in each state, the problem of multiple drivers is avoided.
**Simulation Note**: No resolution function is needed, per the note above.
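The decomposition of a register transfer into bus-level micro-steps can be illustrated with a small sketch (Python, not from the report; the bus names follow the connection mapping table, everything else is assumed):

```python
# Illustrative sketch: decomposing the register transfer r1 = r1 + r2 into
# bus-level transfers, as in the Style 3 -> Style 4 refinement.

def register_transfer(r1, r2):
    """One clock cycle of r1 = r1 + r2, expressed as bus transfers."""
    bus1 = r1            # storage -> FU input over Bus1
    bus2 = r2            # storage -> FU input over Bus2
    bus3 = bus1 + bus2   # FU output driven onto Bus3
    return bus3          # Bus3 -> storage: loaded into r1 on the clock edge

assert register_transfer(3, 4) == 7
```

In each state each bus variable has exactly one source, which is what makes a resolution function unnecessary.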

**FIGURE 3(b)** An incomplete example
### 3.4 Exposed-control RTL (Style 5)
In exposed-control RTL, the FSMD model consists of two parts: a netlist of datapath components and a controller that assigns a constant to each control variable in each state. The control variables specify the operation of each storage, functional, or bus component in the datapath.
The control mapping used to perform the Style 4 assignments in Figure 3(a) is shown in the control mapping table. Thus, the transfers and operations are replaced by assignments to the control signals of all storage, functional, and bus units. The Style 5 column shows the control assignments for the two statements given in Style 4. The partial design corresponding to these two statements is shown in Figure 3(b).
The datapath netlist consists of the declared components (storage, functional, and connection) and two types of variable assignments: component ports to wires or buses, and buses to ports. Note that more than two ports can be assigned to each bus, but not in the same state. This situation requires that each port have a tristate driver that is allowed to drive the bus when its corresponding control signal is asserted. Similarly, two or more buses can be assigned to the same port, requiring the insertion of a selector. Such a selector (multiplexer) is controlled by the corresponding control signal from the controller.
Synthesis Note: The refinement from Style 4 to Style 5 consists of extracting all the storage and functional units and connecting them with the wires and buses from Style 4, thus forming the netlist for the datapath. Furthermore, all the register transfer statements in each state are omitted and replaced with assignments of constants to the control variables that control the storage and functional units and the buses. If a control register is added, then all dependencies must be checked to accommodate the extra state of delay. If the dependencies are not satisfied, extra states must be inserted to satisfy them.
Simulation Note: Since control variables are wires, they are assigned values instantly. The datapath is a netlist whose component models run concurrently and are sensitive to changes on the control wires. In the case of a control register, they are sensitive to events on the control register outputs.
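In Style 5 the controller effectively degenerates into a table of constant control-word assignments, one per state. A minimal sketch (Python; all signal names and encodings below are invented for illustration, not taken from Figure 3):

```python
# Minimal sketch of an exposed-control controller: in each state it simply
# assigns constants to the control variables. All names are assumptions.

CONTROL_TABLE = {
    # state: control word driving the datapath components
    "S1": {"Bus1Drive": 1, "Bus2Drive": 1, "FU1Op": "add", "r1Load": 1},
    "S3": {"Bus1Drive": 1, "Bus2Drive": 0, "FU1Op": "sub", "r1Load": 1},
}

def controller(state):
    """The exposed controller: a pure table lookup, no register transfers."""
    return CONTROL_TABLE[state]

assert controller("S1")["FU1Op"] == "add"
assert controller("S3")["FU1Op"] == "sub"
```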
General Note 1: All styles share the same syntactic rules. Semantically, they differ in the types of variables they use:
(a) uninterpreted (Style 1)
(b) storage (Style 2, 3, 4)
(c) wires (Style 2, 3, 4)
(d) buses (Style 4)
(e) control (Style 5)
(f) special (clock, reset) (all Styles)
These variable types are necessary so that a proper implementation can be synthesized from any of the styles.
General Note 2: All variable types may be mixed in any style. However, the synthesis algorithms for the final implementation may then be more complex.
General Note 3: All mappings can be performed in any order, including partial mappings at any step.
General Note 4: Control and datapath pipelining introduces pipeline registers, which are considered equivalent to any other registers in the datapath. However, the RTL description must be checked (manually or automatically) to ensure that data and control dependencies are satisfied. In other words, the result of an operation or conditional evaluation can be used only \(n\) states later, where \(n\) is the number of pipeline registers between the source and the destination.

In the case of pipelined functional units, the pipeline registers do not have to be declared, as long as it is understood that the destination register is loaded \(n\) states after the assignment state and the result is used in the states after it is loaded into the destination register.

The situation is the same in the case of multicycle functional units. The only exception is that the assignment statement must be repeated in all states in which the functional unit is executing, so that the controller keeps the operands at the input ports, keeps the functional unit executing the same operation, and loads the destination in the last clock cycle.
4 Communicating FSMDs
As defined in the previous section, each FSMD is uniquely defined by a set of ports and a description in any of the Styles 1-5. The ports are defined by name, type, size, and attributes. The type is usually Boolean, bit-vector, integer, floating-point, character, or another user-defined type.
Attributes may include electrical, mechanical, simulation, test, and synthesis requirements or metrics. The two attributes necessary for synthesis are the **setup** and **hold** times for input ports and the **delay** time for output ports. It is not possible to synthesize a design in which an RTL processor is a component without these timing attributes for its ports. With such attributes included in the FSMD model, we can combine two or more FSMDs into communicating FSMDs, as shown in Figure 4. An input port can be connected to any component inside the datapath, and an output port can likewise be driven by any component in the datapath.
In such a structure of communicating FSMDs, any output port can be connected to any input port as long as the type, size, and attributes match, or the connection is unambiguously specified for non-matching ports. Note that the above definition allows a data port to be connected to a control port and vice versa. However, in most practical cases control ports are connected to control ports and data ports to data ports, as shown in Figure 4. Control ports are used for synchronization and data ports for data exchange.
In any FSMD, an input port may be connected to any component in the datapath or controller, that is, to the input of any storage or functional unit. Similarly, any output port can be driven by the output of any unit in the RTL processor. The above definition allows for the existence of two types of IO paths in each FSMD:

- Combinatorial IO paths, in which a change at the input port propagates with some delay to the output port
- Sequential IO paths, in which there is a storage element on the path, so an input change affects the output port in future states but not in the present state
An FSMD with only sequential IO paths is called state-based (or Moore FSMD, since its controller is a Moore FSM), while an FSMD with one or more combinatorial IO paths is called input-based (or Mealy FSMD, since its controller is a Mealy FSM). Figure 4(a) shows a connection of two state-based FSMDs, while Figure 4(b) shows a similar connection of two input-based FSMDs.

**FIGURE 4(a)** Communicating FSMDs

Unfortunately, the definition of communicating FSMDs allows the creation of combinatorial loops by connecting two or more combinatorial IO paths in two or more different FSMDs serially in a loop. Such a combinatorial loop may lead to oscillation and should be avoided in good designs. A combinatorial loop can be avoided in three different ways:

(a) Using only state-based FSMDs, which guarantees that no output port is driven from an input port in any of the FSMDs.

(b) Having at least one register or storage unit in each loop, but not necessarily in each IO path (in other words, we may use input-based FSMDs in this case).

(c) Having a combinatorial loop but making sure that it never gets used completely in any register transfer (a fake loop).

Simulation Note: Communicating FSMDs require special care during simulation, since the FSMDs operate in parallel while the simulator runs sequentially. In case (a) above, the simulator must assign values to all output ports of all FSMDs before assigning values to any input ports. In this manner the proper operation of the communicating FSMDs is secured. Thus, for each FSMD described by a case statement, the following order must be observed:

Compute all register transfers not dependent on input ports.
Assign values to all output ports.
Suspend the process until all FSMDs reach this point.
Assign values to all input ports.
Compute the register transfers dependent on input ports.
Compute the next state.

The suspension of simulation can be achieved in many different ways. In VHDL it is achieved by introducing a $\Delta$ delay and suspending the process with a wait($\Delta$) statement, which moves the simulation of the rest of the FSMD description into the next $\Delta$ time slot. In case (b), simulation must be ordered in such a way that each loop is evaluated starting from a register output and ending at a register input. This is similar to case (a), with an additional simulation order of FSMDs required for each loop. This can be achieved, for example, by introducing a sensitivity list for all the input ports participating in the loop. In case (c), simulation is similar to case (b), with an additional check to make sure that the loop never gets exercised, for example by not driving at least one port in the loop.
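The two-phase evaluation order for case (a) can be mimicked in a small host-language sketch (Python; the FSMD interface below is invented for illustration): all outputs are driven from registered state before any FSMD reads its inputs, so the result is independent of the order in which the FSMDs are visited.

```python
# Sketch: two-phase simulation of state-based communicating FSMDs.
# Phase 1 computes every output; phase 2 samples inputs and next states.

def simulate_cycle(fsmds):
    # Phase 1: all outputs first; collecting them acts as the
    # "suspend until all FSMDs reach this point" barrier.
    outputs = {name: f.compute_outputs() for name, f in fsmds.items()}
    # Phase 2: every FSMD samples its inputs and computes its next state.
    for name, f in fsmds.items():
        inputs = {k: v for k, v in outputs.items() if k != name}
        f.compute_next_state(inputs)

class Counter:
    """A toy state-based FSMD: its output is the registered state."""
    def __init__(self):
        self.state = 0
    def compute_outputs(self):
        return self.state
    def compute_next_state(self, inputs):
        self.state = max(inputs.values()) + 1

pair = {"A": Counter(), "B": Counter()}
simulate_cycle(pair)
# Both FSMDs observed the other's *old* output (0), regardless of visit order.
assert pair["A"].state == 1 and pair["B"].state == 1
```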
FIGURE 5(b) Timing diagram; FIGURE 5(c) State diagram
VHDL Code for FSMD A

```vhdl
process
begin
  wait until clk'event and clk=1;
  case State is
    when Sa1 =>
      ... --reg. transfers
      DataOut <= Output_reg;
      Ready <= 1;
      if (Ack=1) then
        State <= Sa2;
      else
        State <= Sa1;
      end if;
    when Sa2 =>
      ... --reg. transfers
      DataOut <= X;
      Ready <= 0;
      State <= Sa3;
    ...
  end case;
end process;
```
VHDL Code for FSMD B

```vhdl
process
begin
  wait until clk'event and clk=1;
  case State is
    when Sb0 =>
      ... --reg. transfers
      Ack <= 0;
      if (Ready=1) then
        State <= Sb1;
      else
        State <= Sb0;
      end if;
    when Sb1 =>
      ... --reg. transfers
      Input_reg <= DataIn;
      Ack <= 1;
      if (Ready=0) then
        State <= Sb2;
      else
        State <= Sb1;
      end if;
    when Sb2 =>
      ... --reg. transfers
      Ack <= 0;
      if (Ready=0) then
        State <= Sb3;
      else
        State <= Sb2;
      end if;
    ...
  end case;
end process;
```
(d) VHDL description
FIGURE 5 (b, c, d) Synchronized FSMDs
4.1 CLOCKING OF COMMUNICATING FSMDs
The communicating FSMDs may be driven by the same clock signal or by different clock signals. In the first case, for state-based FSMDs, the delay time of an output port, the setup time of an input port, and the delay of the connecting wires must together be less than the clock period. Otherwise, the delay of any register-to-register transfer, even one passing through several FSMDs, must be less than a clock cycle. In the second case, we must make sure that the two FSMDs are synchronized during data exchange if the clocks are not multiples of each other. If they are, the rules for FSMDs with the same clock apply.

In the case of two FSMDs with different clock signals (or two input-based FSMDs), we can synchronize the data exchange by using *Ready* and *Ack* signals, as shown in Figure 5(a). The *Ready* signal is asserted in state *Sa1*, indicating that the data is ready at the *DataOut* port, as shown in the timing and state diagrams in Figures 5(b) and 5(c). When FSMD B recognizes that the *Ready* signal is asserted, it transitions to state *Sb1*, in which it stores the data at the *DataIn* port into *Input reg* and asserts the *Ack* signal. The asserted *Ack* removes the data from the *DataOut* port and deasserts the *Ready* signal in state *Sa2*. After that, *Ack* is deasserted in state *Sb2*.
The above protocol, in essence, transforms the two input-based FSMDs into two state-based FSMDs for the duration of the two states of the data exchange, and thus breaks the feedback loop. For completeness, VHDL code for this protocol is shown in Figure 5(d). A similar approach is also valid for datapath loops.
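The four-phase Ready/Ack exchange can be re-enacted sequentially in a few lines (Python, purely illustrative; the state and signal names follow Figure 5, the rest is assumed):

```python
# Sketch of the four-phase Ready/Ack handshake from Figure 5.

def handshake(data):
    """Re-enact one Ready/Ack data exchange; returns the latched value."""
    trace = []
    data_out = data             # Sa1: sender drives DataOut, asserts Ready
    trace.append("Sa1: Ready=1")
    input_reg = data_out        # Sb1: receiver latches DataIn, asserts Ack
    trace.append("Sb1: Ack=1")
    data_out = None             # Sa2: sender removes data, deasserts Ready
    trace.append("Sa2: Ready=0")
    trace.append("Sb2: Ack=0")  # Sb2: receiver deasserts Ack
    return input_reg, trace

value, steps = handshake(0xAB)
assert value == 0xAB
assert steps == ["Sa1: Ready=1", "Sb1: Ack=1", "Sa2: Ready=0", "Sb2: Ack=0"]
```

Each phase corresponds to one of the two states each FSMD spends in the exchange, which is why the protocol tolerates unrelated clocks.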
5 HIERARCHICAL FSMDs
The definition of an FSMD in Section 2 allows for hierarchical composition of FSMDs. Each FSMD can be a component in another FSMD. In other words, an FSMD may implement an arbitrary functional unit or storage unit. In Figure 6 we show the *One's counter* used as a functional unit and a *FIFO* queue used as a storage unit. Such component FSMDs may need a fixed number of states to finish, such as the *FIFO*, or a variable number of states, such as the *One's counter*.
In the first case, the *FIFO* output may be used only a fixed number of states later. In the second case, the component FSMD is synchronized with the controller by use of *Start* and *Done* signals. The controller asserts the *Start* signal when the data at the input port is valid, and the component FSMD asserts the *Done* signal when it is finished, so that the data at its output port can be used. Note that the One's counter asserts the *Done* signal and makes *Count* available at its output port for one clock cycle, as shown in Figure 2. During that clock cycle the *Count* is loaded into *RF* or another storage element via *Bus3*. In case *Count* is loaded into the *FIFO*, and assuming the *FIFO* is empty, the *Count* value will be available several states later, when the *FIFO* becomes non-empty. At that time the *FIFO* can be read and the *Count* value processed further. In case the component and composite FSMDs run at different clock rates, synchronization with *Start* and *Done* signals must be used, as shown in Figure 5. On the other hand, any number of FSMDs can be combined serially or in parallel to form larger FSMDs, as shown in Figure 7.
When connecting FSMDs serially, the control output *Done* of one FSMD is connected to the *Start* input of the other. The *Start* input of the first becomes the *Start* of the composite, and the *Done* signal of the second becomes the *Done* of the composite. Obviously, other control and data ports can be connected arbitrarily, as the specification requires. When connecting FSMDs in parallel, as shown in Figure 7(c), we may assume that one FSMD takes the role of the master, whose *Start* is the *Start* of the composite and whose *Done* is the *Done* of the composite. The rest of the ports can be connected as described in Section 3. Note that *Start* and *Done* are not needed if both FSMDs run on the same clock and the execution time is deterministic. In some cases the master FSMD may be reduced to just a controller that synchronizes the other FSMDs, as shown in Figure 7(d). FSMD D is started by the input *Start* signal and starts (not necessarily at the same time) FSMDs A and C. When C is finished, its data is transferred to FSMD B, which continues its execution, having been started by FSMD A. When B is finished, it notifies D, which in turn asserts the *Done* signal of the composite.
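The serial composition rule (Done of the first drives Start of the second) can be sketched abstractly (Python; the functional view of an FSMD below is an assumption made for illustration):

```python
# Sketch: serial composition of two FSMDs. A's Done starts B; the composite's
# Start is A's Start and its Done is B's Done. Interfaces are invented.

def serial(fsmd_a, fsmd_b):
    """Return a composite 'FSMD' built from two single-result FSMDs."""
    def composite(start, data):
        if not start:
            return None       # composite not started: no Done, no result
        mid = fsmd_a(data)    # A runs first; its Done starts B
        return fsmd_b(mid)    # B's Done is the composite's Done
    return composite

double_then_inc = serial(lambda x: 2 * x, lambda x: x + 1)
assert double_then_inc(True, 5) == 11
assert double_then_inc(False, 5) is None
```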
6 SPECIAL CASES
6.1 FSMD RESETTING
Any register or storage element can be reset to any value. The reset is completely independent of the clock and any other inputs, and it overrides the clocked or any other writing of the storage elements. In other words, if reset is asserted, the register or storage element is reset no matter what the values on the other inputs are. Therefore, the FSMD description must make sure that the reset input is considered before any other input. There are two types of resets: synchronous and asynchronous.
A synchronous reset occurs on the clock edge and overrides the writing of storage from any other input. The VHDL description of the two communicating FSMDs from Figure 5 with the resetting feature is shown in Figure 8(a).
An asynchronous reset can occur at any time and, like a synchronous reset, overrides the writing of storage from any other input. Figure 8(b) shows the VHDL description of the two communicating FSMDs for this case. Here the description must be sensitive to the reset input as well as the clock input.
**Simulation Note:** Since reset overrides everything else, it must always be considered before any other FSMD actions.
**Synthesis Note:** The reset input must be a special type of control input so that synthesis tools can distinguish it from other inputs and connect it properly to the set/reset pins of a register or storage element. (Note that VHDL does not type reset, and thus the reset described here is only simulatable, not synthesizable.)
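The behavioral difference between the two reset types can be captured in a toy model (Python, illustrative only; the step functions are an assumed abstraction of one register whose normal clocked action is to increment):

```python
# Sketch contrasting synchronous and asynchronous reset semantics for a
# single register that increments on each clock edge.

def sync_reset_step(state, clk_edge, reset):
    """Synchronous reset: only sampled on a clock edge."""
    if clk_edge:
        return 0 if reset else state + 1
    return state

def async_reset_step(state, clk_edge, reset):
    """Asynchronous reset: overrides everything, clock edge or not."""
    if reset:
        return 0
    return state + 1 if clk_edge else state

# Reset asserted between clock edges:
assert sync_reset_step(5, clk_edge=False, reset=True) == 5   # waits for the edge
assert async_reset_step(5, clk_edge=False, reset=True) == 0  # resets immediately
# Reset asserted on a clock edge: both override the normal write.
assert sync_reset_step(5, clk_edge=True, reset=True) == 0
assert async_reset_step(5, clk_edge=True, reset=True) == 0
```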
FIGURE 7(a, b, c, d) Hierarchical FSMDs
**VHDL Code for FSMD A**
(with synchronous reset)
```vhdl
process
begin
  wait until clk'event and clk=1;
  if (Reset=1) then
    Input_reg <= 0;
    Output_reg <= 0;
    Status_reg <= "011";
  else
    case State is
      when Sa1 =>
        ... --reg. transfers
        DataOut <= Output_reg;
        Ready <= 1;
        if (Ack=1) then
          State <= Sa2;
        else
          State <= Sa1;
        end if;
      when Sa2 =>
        ... --reg. transfers
        DataOut <= X;
        Ready <= 0;
        State <= Sa3;
      ...
    end case;
  end if;
end process;
```
**VHDL Code for FSMD B**
(with synchronous reset)
```vhdl
process
begin
  wait until clk'event and clk=1;
  if (Reset=1) then
    Input_reg <= 0;
    Output_reg <= 0;
    Status_reg <= "001";
  else
    case State is
      when Sb0 =>
        ... --reg. transfers
        Ack <= 0;
        if (Ready=1) then
          State <= Sb1;
        else
          State <= Sb0;
        end if;
      when Sb1 =>
        ... --reg. transfers
        Input_reg <= DataIn;
        Ack <= 1;
        if (Ready=0) then
          State <= Sb2;
        else
          State <= Sb1;
        end if;
      when Sb2 =>
        ... --reg. transfers
        Ack <= 0;
        State <= Sb3;
      ...
    end case;
  end if;
end process;
```
(a) with synchronous reset
*FIGURE 8* VHDL model for FSMD resetting
**VHDL Code for FSMD A**
(with asynchronous reset)
```vhdl
process
begin
  wait until (clk'event and clk=1) or (Reset'event and Reset=1);
  if (Reset=1) then
    Input_reg <= 0;
    Output_reg <= 0;
    Status_reg <= "011";
  else
    case State is
      when Sa1 =>
        ... --reg. transfers
        DataOut <= Output_reg;
        Ready <= 1;
        if (Ack=1) then
          State <= Sa2;
        else
          State <= Sa1;
        end if;
      when Sa2 =>
        ... --reg. transfers
        DataOut <= X;
        Ready <= 0;
        State <= Sa3;
      ...
    end case;
  end if;
end process;
```
**VHDL Code for FSMD B**
(with asynchronous reset)
```vhdl
process
begin
  wait until (clk'event and clk=1) or (Reset'event and Reset=1);
  if (Reset=1) then
    Input_reg <= 0;
    Output_reg <= 0;
    Status_reg <= "001";
  else
    case State is
      when Sb0 =>
        ... --reg. transfers
        Ack <= 0;
        if (Ready=1) then
          State <= Sb1;
        else
          State <= Sb0;
        end if;
      when Sb1 =>
        ... --reg. transfers
        Input_reg <= DataIn;
        Ack <= 1;
        if (Ready=0) then
          State <= Sb2;
        else
          State <= Sb1;
        end if;
      when Sb2 =>
        ... --reg. transfers
        Ack <= 0;
        State <= Sb3;
      ...
    end case;
  end if;
end process;
```
(b) with asynchronous reset
7 CONCLUSION
This report presented the basic concepts of RTL design and attempted to explain RTL methodology. The purpose of this report is to familiarize readers with the basic concepts and the pros and cons behind them, and to serve as the accompanying document for the shorter, more formal RTL semantics standard.
8 ACKNOWLEDGEMENT
The authors would like to thank all the members of the Accellera working group who contributed their comments to improve the quality of this report. Further, we would like to thank the graduate students at the Center for Embedded Computer Systems who read the document and made many suggestions for its improvement. In particular, we would like to thank Andreas Gerstlauer and Shuqing Zhao for developing and testing the VHDL and SpecC models for the examples included in this report.
Security Analysis of Java Web Applications Using String Constraint Analysis
<table>
<tbody>
<tr>
<td>Citable link</td>
<td><a href="http://nrs.harvard.edu/urn-3:HUL.InstRepos:14398534">http://nrs.harvard.edu/urn-3:HUL.InstRepos:14398534</a></td>
</tr>
<tr>
<td>Terms of Use</td>
<td>This article was downloaded from Harvard University’s DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at <a href="http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA">http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA</a></td>
</tr>
</tbody>
</table>
Security Analysis of Java Web Applications Using String Constraint Analysis
Author: Louis Li
Supervisor: Professor Stephen Chong
A thesis submitted in fulfillment of the requirements for the degree of Bachelor of Arts in Computer Science and Mathematics
April 2015
# Contents
<table>
<thead>
<tr>
<th>Section</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>Contents</td>
<td>1</td>
</tr>
<tr>
<td>Abstract</td>
<td>3</td>
</tr>
<tr>
<td>Acknowledgements</td>
<td>4</td>
</tr>
<tr>
<td><strong>1 Introduction</strong></td>
<td>5</td>
</tr>
<tr>
<td>1.1 Motivation</td>
<td>6</td>
</tr>
<tr>
<td><strong>2 Related work</strong></td>
<td>7</td>
</tr>
<tr>
<td>2.1 String analysis</td>
<td>7</td>
</tr>
<tr>
<td>2.2 Static analysis of web applications</td>
<td>8</td>
</tr>
<tr>
<td>2.3 String solvers</td>
<td>8</td>
</tr>
<tr>
<td><strong>3 Technical Background</strong></td>
<td>9</td>
</tr>
<tr>
<td>3.1 String-based Security Vulnerabilities</td>
<td>9</td>
</tr>
<tr>
<td>3.1.1 SQL Injection</td>
<td>9</td>
</tr>
<tr>
<td>3.1.2 Cross-site scripting</td>
<td>10</td>
</tr>
<tr>
<td>3.1.2.1 Persistent (stored) XSS</td>
<td>10</td>
</tr>
<tr>
<td>3.1.2.2 Reflected XSS</td>
<td>11</td>
</tr>
<tr>
<td>3.2 Java bytecode</td>
<td>12</td>
</tr>
<tr>
<td>3.3 Dataflow analysis</td>
<td>12</td>
</tr>
<tr>
<td>3.3.1 Call graphs</td>
<td>13</td>
</tr>
<tr>
<td>3.3.2 Static single assignment</td>
<td>14</td>
</tr>
<tr>
<td>3.3.3 Intraprocedural analysis</td>
<td>14</td>
</tr>
<tr>
<td>3.3.4 Interprocedural analysis</td>
<td>14</td>
</tr>
<tr>
<td>3.4 Satisfiable modulo theory</td>
<td>15</td>
</tr>
<tr>
<td>3.4.1 String solvers</td>
<td>15</td>
</tr>
<tr>
<td>3.5 Contribution</td>
<td>17</td>
</tr>
<tr>
<td><strong>4 Design</strong></td>
<td>18</td>
</tr>
<tr>
<td>4.1 Concepts</td>
<td>18</td>
</tr>
<tr>
<td>4.1.1 String Variables</td>
<td>18</td>
</tr>
<tr>
<td>4.1.1.1 Local variables</td>
<td>18</td>
</tr>
<tr>
<td>4.1.1.2 Field variables</td>
<td>19</td>
</tr>
<tr>
<td>4.1.1.3 Formal variables</td>
<td>19</td>
</tr>
<tr>
<td>4.1.2 String Constraints</td>
<td>20</td>
</tr>
<tr>
<td>4.2 Constraint Generation</td>
<td>21</td>
</tr>
<tr>
<td>4.2.1 Intraprocedural Analysis</td>
<td>21</td>
</tr>
<tr>
<td>4.2.1.1 String Library Methods</td>
<td>21</td>
</tr>
<tr>
<td>4.2.1.2 String Builders</td>
<td>22</td>
</tr>
<tr>
<td>4.2.2 Interprocedural Analysis</td>
<td>23</td>
</tr>
<tr>
<td>4.2.2.1 Summary Nodes</td>
<td>23</td>
</tr>
<tr>
<td>4.2.2.2 Handling formals</td>
<td>24</td>
</tr>
<tr>
<td>4.2.2.3 Handling return</td>
<td>24</td>
</tr>
<tr>
<td>4.2.2.4 Example</td>
<td>25</td>
</tr>
<tr>
<td>4.3 Limitations</td>
<td>27</td>
</tr>
<tr>
<td>5 Implementation</td>
<td>29</td>
</tr>
<tr>
<td>5.1 Constraint Analysis</td>
<td>29</td>
</tr>
<tr>
<td>5.2 SMT Solver</td>
<td>29</td>
</tr>
<tr>
<td>5.3 Evaluation</td>
<td>29</td>
</tr>
<tr>
<td>6 Evaluation</td>
<td>30</td>
</tr>
<tr>
<td>6.1 Example programs</td>
<td>30</td>
</tr>
<tr>
<td>6.1.1 SQL Injection</td>
<td>30</td>
</tr>
<tr>
<td>6.1.2 Cross-site scripting</td>
<td>31</td>
</tr>
<tr>
<td>7 Conclusion</td>
<td>33</td>
</tr>
<tr>
<td>Appendices</td>
<td>34</td>
</tr>
<tr>
<td>A Example programs</td>
<td>35</td>
</tr>
<tr>
<td>A.1 SQL Injection</td>
<td>35</td>
</tr>
<tr>
<td>A.2 XSS</td>
<td>36</td>
</tr>
</tbody>
</table>
Abstract
Web applications are exposed to myriad security vulnerabilities related to malicious user string input. In order to detect such vulnerabilities in Java web applications, this project employs string constraint analysis, which approximates the values that a string variable in a program can take on. In string constraint analysis, program analysis generates string constraints – assertions about the relationships between string variables. We design and implement a dataflow analysis for Java programs that generates string constraints and passes those constraints to the CVC4 SMT solver to find a satisfying assignment of string variables. Using example programs, we illustrate the feasibility of the system in detecting certain types of web application vulnerabilities, such as SQL injection and cross-site scripting.
Acknowledgements
I would like to thank my three thesis readers, each having played a crucial role in my undergraduate experience as a computer scientist.
This work would not have been possible without my advisor, Professor Steve Chong. I had always been interested in programming languages since stumbling upon the seemingly esoteric programming language blog, Lambda the Ultimate. Through Steve’s courses and advising, I was able to pursue my curiosity to the fullest. I appreciate the extraordinary amount of attention that he gives to undergraduate researchers and his initiative in uniting undergraduate thesis writers in computer science.
I would have been much worse prepared for the undertaking of a thesis without Professor Krzysztof Gajos, who patiently guided me through my first major research project. He has provided me with invaluable advice both for maturing as a researcher – guiding me in the right direction, but always encouraging autonomy – and as an individual – pushing me to find my “superpowers” outside of academia.
Finally, I am grateful to Professor Greg Morrisett, whose undergraduate compilers course provided a crucial foundation for my understanding of dataflow analysis. Many of his students, including me, pick up his passion for the beauty of functional programming.
This project was made possible by the incredible patience of Andrew Johnson. It depended on his work with the Accrue Bytecode analysis framework. More importantly, I am thankful for the time that he took to answer my flood of questioning emails.
I cherish the opportunities provided by the Harvard Computer Science department. The faculty is incredibly supportive, and I believe that a positive undergraduate research experience is within grasp for any student in the department.
Thank you to Ruth Fong for her encouragement and support in my quest to become a computer scientist.
I owe my life to my family, who is forever supportive of my endeavors – especially research. Thank you, Mom, Dad, and Richard.
Chapter 1
Introduction
Web applications are exposed to myriad security vulnerabilities related to malicious user string input. Web servers often accept arbitrary user input through a variety of sources, such as form fields, URL parameters, and cookies. Common examples of such vulnerabilities are SQL injection, where an attacker can manipulate the database, or cross-site scripting, where an attacker can execute arbitrary code in a user’s browser.
These security flaws can potentially be prevented by analyzing the code of the web application beforehand. In order to detect such vulnerabilities in Java web applications, this project employs string constraint analysis, which approximates the values that a string variable in a program can take on. A string constraint asserts a relationship between string variables. For example, consider string-typed variables $x, y, z$: if we have the constraint that $x$ is the concatenation of $y$ and $z$, then the set of string values that $x$ can take on is the set of values that $y$ can take on concatenated with the set of values that $z$ can take on.
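To make the concatenation constraint concrete, here is a minimal sketch – a hypothetical `Map`-based encoding, not part of the thesis tooling – that checks whether a candidate assignment satisfies $x = y + z$:

```java
import java.util.Map;

public class ConcatConstraint {
    // Returns true iff sigma(x) equals sigma(y) concatenated with sigma(z).
    static boolean satisfies(Map<String, String> sigma,
                             String x, String y, String z) {
        return sigma.get(x).equals(sigma.get(y) + sigma.get(z));
    }

    public static void main(String[] args) {
        Map<String, String> sigma =
            Map.of("x", "foobar", "y", "foo", "z", "bar");
        System.out.println(satisfies(sigma, "x", "y", "z")); // prints "true"
    }
}
```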
If we know what values a string variable can take on at a given point in time, then we may be able to detect a vulnerability. For example, consider a basic application that takes user input and submits it to the SQL server. If our analysis finds that the query string variable fits the form of a valid SQL query that allows the user to arbitrarily manipulate the database, then we conclude that the program contains a SQL injection vulnerability.
We design and implement a dataflow analysis for Java programs that generates string constraints. It translates these constraints into the language of a satisfiability modulo theories (SMT) solver to find a satisfying assignment of the string variables in the program. Using example programs, we illustrate the feasibility of the system in detecting certain types of web application vulnerabilities.
1.1 Motivation
Web applications that store or manipulate user input are at risk of security vulnerabilities. This is extremely common. Many vulnerabilities caused by untrusted data, such as cross-site scripting and SQL injection attacks, are ranked in the top 10 most common web attacks by the Open Web Application Security Project [10].
Ultimately, these security vulnerabilities are caused by developer error in the design of the application. Sensitive strings may be improperly sanitized, allowing users to provide malicious strings that exploit some aspect of the application. Static analysis aims to analyze an application codebase without actually executing code, which can be a convenient way for developers to secure code. This project offers another approach to static analysis, leveraging the capabilities of satisfiability modulo theories (SMT) solvers to detect potential vulnerabilities.
Chapter 2
Related work
2.1 String analysis
Past work has used string constraint analysis to analyze string expressions of programs and detect security vulnerabilities.
Christensen et al. design an algorithm that associates each string expression with a context-free grammar. The context-free grammar represents the set of strings that can be generated by a string expression [2]. In this case, the constraints are used to represent context-free grammars, which are eventually approximated with regular languages.
The result of this work was released as the Java String Analyzer (JSA), which computes the resulting automata for each string expression in a Java program. AMNESIA, a system for detecting SQL injection attacks in Java web applications, combines JSA string solving with runtime monitoring [5].
Fu et al. describe a formalism for string constraints called Simple Linear String Equations. They implement an algorithm to solve these constraints for Java. This system is packaged as the constraint solver SUSHI. They apply the constraint solver to XSS detection [3].
BEK uses a representation of symbolic finite automata with SMT solvers to develop a language and system for analyzing string sanitization functions, which are often the source of cross-site scripting vulnerabilities [6].
2.2 Static analysis of web applications
Much work has been done in security analysis of web applications using various static analysis techniques. Due to the sheer volume of work done in this area, the projects are not enumerated here.
A particularly relevant project is Framework For Frameworks (F4F). It uses taint analysis – tracking the flow of potentially sensitive information in a program – to support modern Java web application frameworks, such as Java EE and Struts. Similar to this project, it uses parts of the Watson Libraries for Analysis (WALA) framework [15]. A core part of the F4F project is creating an end-to-end system for static analysis of a web application, handling analysis of difficult portions of frameworks such as XML configurations.
2.3 String solvers
Although this work does not directly explore techniques for SMT solvers, theorem proving, and string formulas, it leverages existing work in string solvers. This work uses the string theory capabilities of the SMT solver CVC4 [1]. Support for the string theories was recently added to CVC4, allowing string formulas that assert relations such as string equality, concatenation, length, substring, and set membership [8].
Kaluza, a string solver developed as part of the JavaScript symbolic execution framework Kudzu, uses a constraint language that supports regular expressions, length, and concatenation. The Kaluza string solver uses part of the HAMPI implementation to solve constraints [14].
Similar to CVC4, Z3-str is a project built on top of Microsoft’s existing theorem prover, Z3, allowing it to integrate with logic over other datatypes. The authors apply Z3-str to finding remote code execution vulnerabilities [16].
Other existing string solvers take different approaches to defining and solving string constraints. HAMPI solves string constraints primarily by checking for membership of a string in a context-free grammar [7]. For instance, HAMPI allows the user to define regular and context-free languages and assert membership of a string variable in the language. The authors evaluate the tool on PHP programs containing SQL injection vulnerabilities.
Chapter 3
Technical Background
3.1 String-based Security Vulnerabilities
In this project, we focus on the applications of string solving to two types of vulnerabilities: SQL injection and cross-site scripting.
3.1.1 SQL Injection
SQL injections are string-based vulnerabilities where untrusted input manipulates SQL statements submitted to the database. Because an attacker can construct arbitrary queries, this vulnerability gives the attacker power over the database – selecting sensitive data, modifying existing data, or administrating the database [11].
Example: Classical SQL Injection Consider the following example from the OWASP SQL Injection testing page [13].
Suppose the following query is constructed dynamically with variables $username$ and $password$.
```
SELECT * FROM Users WHERE Username='$username' AND Password='$password'
```
Given a query result set containing multiple users, the server will likely authenticate the user using the first set of matching credentials. Note that if the user provides a username and an incorrect password, the resulting query set will be empty. However, suppose instead that a malicious user chooses to provide the following input:
```
$username = "1' or '1' = '1"
$password = "1' or '1' = '1"
```
Let us examine the resulting query by substituting in the values of the variables:
```sql
SELECT * FROM Users WHERE Username='1' OR '1' = '1'
AND Password='1' OR '1' = '1'
```
Since '1' = '1' is always true, this SQL statement has the effect of selecting all users from the database – authenticating the attacker as the first user in the resulting query set. Additionally, the first user in the database is often the administrative user.
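The vulnerable pattern can be reproduced with a small illustrative snippet; the `buildQuery` helper is hypothetical and shown only to demonstrate naive query construction by string concatenation:

```java
public class InjectionDemo {
    // Naive query construction by string concatenation (vulnerable):
    // user input becomes part of the SQL syntax itself.
    static String buildQuery(String username, String password) {
        return "SELECT * FROM Users WHERE Username='" + username
             + "' AND Password='" + password + "'";
    }

    public static void main(String[] args) {
        String malicious = "1' or '1' = '1";
        System.out.println(buildQuery(malicious, malicious));
    }
}
```

A parameterized query (e.g., `java.sql.PreparedStatement` with `?` placeholders) avoids the problem by keeping user input out of the SQL syntax entirely.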
### 3.1.2 Cross-site scripting
Cross-site scripting (XSS) is a category of web application vulnerabilities where untrusted input allows an attacker to execute arbitrary code on behalf of visiting users to a webpage.
There are two main categories of XSS vulnerabilities: persistent and reflected.
#### 3.1.2.1 Persistent (stored) XSS
In persistent XSS, malicious users provide input that is persisted into the database, such that rendering the input allows the malicious user to run an arbitrary script. Since this information is stored in the server and later rendered in the webpage, this allows an attacker to run scripts on the clients of all future users.
**Example** An example of a persistent XSS vulnerability is unsanitized comments on a blog. Suppose a blog displays a list of comments below each blog post. In the case of a benign user, a comment will likely only contain formatting HTML and text. However, suppose a malicious user submits the following comment:
```html
<script type="text/javascript">alert(1);</script>
```
Since the comments use unescaped HTML, future visitors will view the comment, consequently running the script contained in the body. An `alert` is fairly benign, but an attacker could replace the comment body with arbitrary JavaScript.
Most modern blogs protect against such a straightforward exploit, but this example conveys the basic idea behind persistent XSS: an attacker can store information in the database that will be rendered to future visitors of the page, running a potentially malicious script.
3.1.2.2 Reflected XSS
In reflected XSS, malicious users provide input in a web request that is later rendered onto the page by the server. Potential vectors for this input include URL parameters, form fields, and cookies. This is called reflected XSS, since the input is “reflected” back onto the page by the server’s response, such as an error message or user notification.
In contrast with persistent XSS, where the script runs for all future visitors of a web page, reflected XSS is frequently delivered to victims through a carefully crafted URL.
Example: Query Parameter Consider the example of a search engine that includes the search query in the URL. For example:
http://searchengine.com/search.php?q=programming
The web page itself will likely display the search query on the user-facing page. However, a malicious user can also craft the following search query:
http://searchengine.com/search.php?q=<script>alert(1);</script>
The user input is reflected in the contents of the webpage. An attacker could send a URL with more malignant code to victims – potentially masked behind a link shortener – where the code would execute upon visiting the link.
Example: Form field Consider the following example from the OWASP testing page [12], illustrating the ability to run arbitrary JavaScript without using a <script> tag. An HTML form pre-populates a field with some unsanitized input from the user (INPUT_FROM_USER).
<input type="text" name="state" value="INPUT_FROM_USER">
Suppose an attacker provides the following input:
" onfocus="alert(document.cookie)"
Substituting in the user input, the resulting input field becomes:
<input type="text" name="state" value="" onfocus="alert(document.cookie)">
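The reflection step can be illustrated with a hypothetical naive templating helper that splices unsanitized input into an HTML attribute, exactly as in the example above:

```java
public class ReflectedXssDemo {
    // Naive templating: splice unsanitized user input into an HTML attribute.
    static String render(String userInput) {
        return "<input type=\"text\" name=\"state\" value=\"" + userInput + "\">";
    }

    public static void main(String[] args) {
        String attack = "\" onfocus=\"alert(document.cookie)\"";
        System.out.println(render(attack)); // the attribute boundary is broken
    }
}
```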
3.2 Java bytecode
Unlike programming languages like C, where source code compiles directly to assembly, Java source code compiles into Java bytecode. Java bytecode serves as platform-independent instructions for the Java virtual machine.
In this project, we perform our analysis on Java bytecode rather than Java source code. We are only concerned with a subset of Java bytecode instructions – in particular, those dealing with fields, variables, and function calls. To give a broad illustration of the functions of bytecode, summaries of the bytecode instructions relevant to this work are detailed below. Each one has an example of the approximate corresponding scenario in Java source code.
3.3 Dataflow analysis
Dataflow analysis describes analysis performed to determine facts about a program at given points in the program. An example of dataflow analysis is liveness analysis, where, for any given point in the program, the analysis computes the set of live variables – variables whose values may be used in the future.
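As an illustration, liveness over straight-line code can be computed with a single backward pass. This is a simplified gen/kill sketch with an invented statement encoding; real analyses iterate to a fixpoint over the control flow graph:

```java
import java.util.*;

public class Liveness {
    // Each statement: the defined variable followed by its uses,
    // e.g. {"x", "y", "z"} encodes x := y + z.
    static Set<String> liveBefore(List<String[]> stmts, Set<String> liveOut) {
        Set<String> live = new HashSet<>(liveOut);
        for (int i = stmts.size() - 1; i >= 0; i--) {
            String[] s = stmts.get(i);
            live.remove(s[0]);                                  // kill the definition
            live.addAll(Arrays.asList(s).subList(1, s.length)); // gen the uses
        }
        return live;
    }

    public static void main(String[] args) {
        List<String[]> prog = List.of(new String[]{"x", "y", "z"},
                                      new String[]{"w", "x"});
        // Live before the block, given that only w is live afterwards: {y, z}.
        System.out.println(liveBefore(prog, Set.of("w")));
    }
}
```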
Dataflow analysis uses a control flow graph of the program. A control flow graph is a directed graph where each node contains a statement. In this case, each node is an instruction of Java bytecode.
In practice, instructions are represented by an intermediate representation of Java bytecode instructions. An intermediate representation is a data-structure encoding of a language, useful for dataflow analysis, compilation to a target language, or optimization.
In our analysis, the dataflow analysis is used to compute constraints for mutable string builder variables. This is described in more detail later.
3.3.1 Call graphs
*Pointer analysis* is a type of analysis that determines which memory locations a pointer can point to. In this project, an existing pointer analysis is used to generate a *call graph*. A call graph is a directed graph that captures the relationship between method calls in a program.
A *context-sensitive* call graph distinguishes between different call stacks with which a method may be called, which is more precise than a context-insensitive call graph. For example, it differentiates between calls to the same method made with different arguments.
In a context-sensitive analysis, each call graph node consists of two parts: a method and a *calling context*. The contents of a calling context can vary with analysis, but a calling context captures information about a call, potentially distinguishing two different calls of the same method. For example, a simple calling context could contain the line number of the method call, and two calls to a method from different lines of source code would have different calling contexts.
The example below, due to Grove et al. [4], shows a context-sensitive call graph. In the figure, the method `test()` calls `A()`, `B()`, each of which call the method `sumArea()`. However, the subscripts 0, 1 on the nodes denote different calling contexts of the calls to `sumArea()` – one is called from within `A()`, while the other is called from within `B()`.
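As a sketch, the program shape described above looks like the following; the method names come from the call-graph example, while the bodies are invented placeholders. A context-sensitive analysis gives `sumArea()` two distinct calling contexts, one per caller:

```java
public class CallGraphExample {
    static int sumArea(int w, int h) { return w * h; }
    static int A() { return sumArea(2, 3); } // sumArea() in context 0
    static int B() { return sumArea(4, 5); } // sumArea() in context 1
    static int test() { return A() + B(); }

    public static void main(String[] args) {
        System.out.println(test()); // prints "26"
    }
}
```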
3.3.2 Static single assignment
Certain intermediate representations satisfy the property of static single assignment (SSA). In SSA form, each variable appears as the left-hand side of an assignment at most once. For example, if the same variable is assigned twice in source code:
\[
\begin{align*}
x & := y \\
u & := x \\
x & := z \\
v & := x \\
\end{align*}
\]
The resulting SSA representation of these instructions will create new variables for each instance of the variable \(x\).
\[
\begin{align*}
x_1 & := y \\
u & := x_1 \\
x_2 & := z \\
v & := x_2 \\
\end{align*}
\]
The intermediate representation used in this project is in partial static single assignment form: local variables obey the single-assignment constraint, but variables representing the fields of an object can have multiple assignments.
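The renaming illustrated above can be sketched for straight-line code as follows. This is a simplified model with an invented statement encoding, and it subscripts every definition (slightly more aggressive than the example, which renames only $x$):

```java
import java.util.*;

public class SsaRename {
    // Each statement is a pair {lhs, rhs} encoding "lhs := rhs".
    static List<String> rename(List<String[]> stmts) {
        Map<String, Integer> version = new HashMap<>();
        Map<String, String> current = new HashMap<>();
        List<String> out = new ArrayList<>();
        for (String[] s : stmts) {
            String lhs = s[0], rhs = s[1];
            // A use is replaced by the variable's current SSA name, if any.
            String use = current.getOrDefault(rhs, rhs);
            // A definition creates a fresh version of the variable.
            int v = version.merge(lhs, 1, Integer::sum);
            String def = lhs + "_" + v;
            current.put(lhs, def);
            out.add(def + " := " + use);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> prog = List.of(
            new String[]{"x", "y"}, new String[]{"u", "x"},
            new String[]{"x", "z"}, new String[]{"v", "x"});
        // prints:
        // x_1 := y
        // u_1 := x_1
        // x_2 := z
        // v_1 := x_2
        rename(prog).forEach(System.out::println);
    }
}
```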
3.3.3 Intraprocedural analysis
In intraprocedural dataflow analysis, the results of the analysis are derived from a single function. An intraprocedural analysis computes facts by flowing over each bytecode instruction within a method.
When an intraprocedural analysis stands alone, the results of invoking another function are approximated rather than computed by analyzing the invoked function. Often, however, an intraprocedural analysis is combined with an interprocedural analysis to compute facts about a whole program.
3.3.4 Interprocedural analysis
While intraprocedural analysis analyzes the contents of a single method, interprocedural dataflow analysis accounts for method invocations. Each time a method is invoked in an intraprocedural analysis, an interprocedural analysis framework triggers another intraprocedural analysis of the invoked method. An interprocedural analysis framework will track the results and compute a fixpoint for facts, handling cases such as recursive functions.
3.4 Satisfiability modulo theories
Satisfiability modulo theories (SMT) is a class of decision problems – problems that return a yes or no answer – generalizing the Boolean satisfiability problem (SAT).
In the SAT decision problem, one must determine whether a boolean formula has a satisfying assignment. For example, the formula:
\[(x \land y) \lor (y \land \neg z)\]
has the following satisfying assignment:
\[x = \text{true}, y = \text{true}, z = \text{false}\]
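The example can be checked directly; this is an illustrative evaluation of the formula above, not a SAT solver:

```java
public class SatExample {
    // Evaluate the formula (x AND y) OR (y AND NOT z) from the text.
    static boolean formula(boolean x, boolean y, boolean z) {
        return (x && y) || (y && !z);
    }

    public static void main(String[] args) {
        // The satisfying assignment given above: x = true, y = true, z = false.
        System.out.println(formula(true, true, false)); // prints "true"
    }
}
```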
In SMT, Boolean variables can be replaced with predicates over richer types, expanding formulas beyond Boolean variables. Each such family of predicates is referred to as a theory. An SMT problem can incorporate the theory of real numbers, the theory of lists, and so on. In this work, we are particularly interested in SMT problems employing the theory of strings – solving formulas with string predicates.
3.4.1 String solvers
The term string solver will be used throughout this work, referring specifically to SMT solvers with the capability of solving string formulas. Just as SAT solvers find satisfying boolean assignments for assertions, string solvers find satisfying string assignments.
Consider the following example with string predicates. If we have the following assertions:
\[(s = \langle \text{any string} \rangle) \land (r = \text{“bar”}) \land (t = \text{concat}(s, r)) \land (t \text{ begins with “foo”})\]
A solution that satisfies these formulas would be:
\[s = \text{“foo”}, r = \text{“bar”}, t = \text{“foobar”}\]
However, note that the following would also be a solution:
\[s = \text{“foooo”}, r = \text{“bar”}, t = \text{“foooobar”}\]
Given the original formula, a string solver would either return at least one solution or indicate that no solution exists.
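These assertions can be checked against a candidate assignment. This is an illustrative checker, not a string solver; the third assertion is read as $t = \text{concat}(s, r)$:

```java
public class StringConstraintsCheck {
    // Check: r = "bar", t = concat(s, r), and t begins with "foo".
    static boolean satisfies(String s, String r, String t) {
        return r.equals("bar") && t.equals(s + r) && t.startsWith("foo");
    }

    public static void main(String[] args) {
        System.out.println(satisfies("foo", "bar", "foobar")); // prints "true"
    }
}
```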
3.5 Contribution
The primary contribution of this work is a tool that leverages an existing string solver for solving string constraints for Java bytecode programs, illustrating the feasibility of an end-to-end pipeline for generating and solving string constraints. We built a tool that achieves this through the following components:
1. A dataflow analysis that generates string constraints from Java programs
2. A translation from string constraints to the language of SMT solvers
Chapter 4
Design
In this section, we describe the design of the analysis that generates string constraints from a given Java program.
4.1 Concepts
4.1.1 String Variables
In this project, a string variable represents a string manipulated or computed by the program. Concretely, this encompasses immutable Java String or mutable StringBuilder types. String variables encapsulate two types of variables within a program:
1. Program variables, which represent local variables or object fields.
2. Formal variables, which represent the formal arguments or return values of a method.
Ultimately, the purpose of string variables is to determine the string values that a string variable can take on in the input program. Given string variables, we will declare constraints such as “the string value that sv1 takes on is equal to that of sv2”. The domain of constraints is described in section 4.1.2.
4.1.1.1 Local variables
A local variable refers to any variable declared within a method.
In the example source code below, a string variable would be associated with each of the variables foo, bar:
public void myMethod(String a) {
String foo = a;
String bar = String.concat(a, a);
}
4.1.1.2 Field variables
A field variable is a type of program variable representing a field in a Java object.
In the example source code below, both f0, f1 would be represented in the analysis by field variables. String literals are also represented by string variables, and in the example below, the string "foo" would be represented by a third string variable.
public class Foo {
private String f0;
private static String f1 = "foo";
// ...
}
4.1.1.3 Formal variables
Formal variables are used to summarize information about methods. A formal variable corresponds to either a method argument or a method return value. The high-level concepts are described below, and its use is described in further detail in section 4.2.2.1 on summary nodes.
In the case of method arguments, a formal string variable is associated with each string parameter of a method and its call graph node. For example, given two call graph nodes for the same method, each will have a different formal string variable associated with the same string-typed method parameter.
Consider the following method with string-valued arguments:
public void myMethod(String arg0, int arg1, String arg2) {
// ...
}
There will be a formal string variable associated with the string-valued arguments arg0 and arg2.
In the case of return values, a formal string variable is used to represent the value returned by a method. A method that returns a string will return some program variable; this program variable is associated with the formal return variable for the method. This association is described in more detail in section 4.2.2. This applies only to methods that return a string type, e.g.:
```java
public String anotherMethod() {
// ...
}
```
### 4.1.2 String Constraints
*String constraints* are used to describe the relationship between string variables. They capture information about the string values that a string variable can take on in the program.
The domain of string constraints generated by this analysis is described below.
We first define the following domains:
- $v \in \text{Var}$: string variables
- $s \in \text{String}$: string literals (e.g., “foo”)
- $m \in \text{CGNode}$: call graph nodes
Let $\sigma$ be a solution to the constraints, having the domain:
$$\sigma : \text{Var} \rightarrow \text{String}$$
$\sigma$ maps string variables to string literals. For example, an explicit instance of $\sigma$ would be:
$$\sigma_0 = \{v_1 \mapsto \text{"foo"}; v_2 \mapsto \text{"bar"}\}$$
For some constraint $C$, we say that the solution $\sigma$ *satisfies the constraint* $C$ if:
$$\sigma \models C$$
Table 4.1 defines the domain of constraints and, for each constraint, the condition under which a solution $\sigma$ satisfies it.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{Name} & \textbf{Constraint} & \textbf{Satisfied by $\sigma$ iff} \\
\hline
Constant & $v = s$ & $\sigma(v) = s$ \\
Concat & $v_1 = v_2 + v_3$ & $\sigma(v_1) = \sigma(v_2) + \sigma(v_3)$ \\
Copy & $v_1 = v_2$ & $\sigma(v_1) = \sigma(v_2)$ \\
Phi & $v = \phi(v_1, \ldots, v_n)$ & $\exists i \in \{1, \ldots, n\}.\ \sigma(v) = \sigma(v_i)$ \\
Call & $m(v_1^1, \ldots, v_n^1) \lor \cdots \lor m(v_1^k, \ldots, v_n^k)$ & $\exists i \in \{1, \ldots, k\}.\ \forall j \in \{1, \ldots, n\}.\ \sigma(v_j^i) = \sigma(f_j)$, where $\text{formals}(m) = (f_1, \ldots, f_n)$ \\
\hline
\end{tabular}
\caption{Summary of the constraints used in this project}
\end{table}
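As an illustration of the satisfaction relation, a PHI constraint can be checked against a candidate solution $\sigma$ as follows. The `Map`-based encoding of $\sigma$ is hypothetical, and the rule applied is that $\sigma(v)$ must equal $\sigma(v_i)$ for some operand $v_i$:

```java
import java.util.*;

public class PhiConstraint {
    // sigma satisfies v = phi(v1, ..., vn) iff sigma(v) = sigma(vi) for some i.
    static boolean satisfiesPhi(Map<String, String> sigma,
                                String v, List<String> operands) {
        for (String vi : operands) {
            if (sigma.get(v).equals(sigma.get(vi))) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, String> sigma =
            Map.of("v", "foo", "v1", "bar", "v2", "foo");
        System.out.println(satisfiesPhi(sigma, "v", List.of("v1", "v2"))); // prints "true"
    }
}
```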
\section{4.2 Constraint Generation}
Our string constraint analysis generates string constraints using an interprocedural dataflow analysis on the control flow graph of the program. These constraints are later translated into the language of the string solver, which finds a satisfying solution for the constraints.
String constraints are primarily generated from the program by doing one pass over the control flow graph. However, multiple passes may be required for mutable strings. In order to handle mutable strings (StringBuilder), we use dataflow analysis to track string variables corresponding to StringBuilder at some point in the program. This is further described below in the section on string builders.
\subsection{4.2.1 Intraprocedural Analysis}
In the intraprocedural analysis, we handle cases for each possible Java bytecode instruction, potentially generating a string constraint.
First, we describe the simpler cases. An assignment from a field to a local variable (or vice versa) generates a COPY constraint. When we encounter a phi node, a PHI constraint is generated.
We describe constraint generation for library methods and string builders below.
\subsection{4.2.1.1 String Library Methods}
When we encounter a method invocation, we first check if it is a method in the String library that is specially supported by the analysis. If so, we ignore the standard interprocedural analysis and generate specific constraints. The following String library methods are supported:
- `new String(s)` (constructor)
- `String.valueOf(s)` (string representation)
- `s.toString()` (to string)
- `String.concat(s, t)` (concatenation)
The first three generate a Copy constraint. For example, the bytecode corresponding to the constructor method invocation of the code below:
```
String s = new String(t);
```
will generate the following constraint:
```
s = t [COPY]
```
Similarly, a call to `String.concat(s, t)` will generate a CONCAT constraint.
### 4.2.1.2 String Builders
In Java, `StringBuilder` is a class for constructing mutable strings. In order to handle string builder manipulations, we create a new string variable each time a string builder is manipulated.
By creating a new string variable, we represent a snapshot of the mutable string builder at a certain program point. The dataflow analysis computes a variable context: a mapping from program variables to string variables. When a string builder is mutated, the program variable stays the same, but the context is updated to track the most recent string variable associated with it.
At a node of the control flow graph with multiple incoming edges, we compute the confluence and merge the incoming contexts by creating PHI constraints for string variables associated with the same program variable.
**Example** Consider the following example below, where two strings, `s1`, `s2`, are appended to the string variable `sb`.
```
StringBuilder sb = new StringBuilder(" ");
sb.append(s1); // new string variable created - sb1
sb.append(s2); // new string variable created - sb2
```
Each time that `sb` is manipulated by appending another string, a new string variable is created. The current context computed by the dataflow analysis is updated, associating the most recent string variable `sb2` (generated by `sb.append(s2)`) with the program variable `sb`. After this code executes, if we were to look up the string variable associated with `sb`, then we would find the string variable `sb2`.
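The variable context described above can be sketched as a small class. The names are hypothetical, and the real analysis tracks intermediate-representation variables rather than plain strings:

```java
import java.util.*;

public class VariableContext {
    // Maps each program variable to its most recent string-variable snapshot.
    private final Map<String, String> context = new HashMap<>();
    private int counter = 0;

    // Record a mutation of program variable pv, returning the fresh
    // string variable that now represents it.
    String mutate(String pv) {
        String sv = pv + (++counter);
        context.put(pv, sv);
        return sv;
    }

    String lookup(String pv) { return context.get(pv); }

    public static void main(String[] args) {
        VariableContext ctx = new VariableContext();
        ctx.mutate("sb"); // sb.append(s1) -> snapshot sb1
        ctx.mutate("sb"); // sb.append(s2) -> snapshot sb2
        System.out.println(ctx.lookup("sb")); // prints "sb2"
    }
}
```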
**String Builder Library Methods** The following `StringBuilder` (and the synchronized equivalent, `StringBuffer`) library methods are supported:
- `new StringBuilder(s)` (constructor)
- `sb.toString()` (to string)
- `sb.append(s)` (append)
More importantly, note that the Java syntactic convenience of appending strings with `+` is compiled in bytecode to corresponding calls to `StringBuilder.append()`:
```java
String s = t + v; // t, v are String variables
String fb = "foo" + "bar";
String newS = s + "foo";
```
Our implementation handles this syntax, which is frequently used by programmers to construct strings.
### 4.2.2 Interprocedural Analysis
We describe how the interprocedural analysis handles method invocations. First, we describe the concept of *summary nodes*, then we describe how these are used to generate the CALL constraint.
#### 4.2.2.1 Summary Nodes
*Summary nodes* contain information about the string variables associated with a call graph node. Their purpose is to capture information about method calls, allowing us to generate string constraints in an interprocedural analysis.
Recall that a call graph node consists of two parts: a method and a calling context. There exists a one-to-one correspondence between call graph nodes and summary nodes.
Intuitively, a summary node contains information about the string variables of a single method. Note that new variables are generated for each calling context – two different calling contexts have different formal variables.
Formally, a summary node consists of:
1. A list of formal parameter variables $f_1, \ldots, f_n$ for each string-typed parameter
2. A formal return variable $r_m$ (if the method returns a string type)
4.2.2.2 Handling formals
A method has a set of formal arguments, corresponding to the parameters of the method. When a method is called, a series of actual arguments are supplied to the method. In this section, we describe the process of generating constraints that link the actual arguments to the formal arguments.
Suppose that the two calls below to the method `bar()` have the same calling context – that is, the calls correspond to the same call graph node.
```java
private String foo() {
String firstCall = bar("a", "b");
String secondCall = bar("c", "d");
}
```
The following constraint is generated, supposing there exist string variables `a, b, c, d` representing the literals "a", "b", "c", "d":
\[
bar(a, b) \lor bar(c, d)
\]
Note that this constraint is only satisfied by a solution `σ` when, for the formal variables `f_1, f_2` of `bar()`:
\[
σ \models bar(a, b) \lor bar(c, d) \iff (σ(f_1) = σ(a) \land σ(f_2) = σ(b)) \lor (σ(f_1) = σ(c) \land σ(f_2) = σ(d))
\]
This captures the idea that the actual arguments to a method should be “grouped” together for a more precise analysis. The two possible argument sets provided to `bar()` are ("a", "b") and ("c", "d"); note that our approach to generating constraints disallows the combination ("a", "d"), since `bar()` is never called with that combination of arguments.
In this example, we assume that both calls have the same calling context and consequently the same summary node. Note that this disjunction of argument sets is grouped together by summary node (and thus, call graph node) and not method. The constraints will distinguish argument sets between calls to the same method with different calling contexts.
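The grouping of argument tuples can be illustrated by a checker that accepts only complete tuples. This is a hypothetical encoding in which each argument position is represented directly by its value under $\sigma$:

```java
import java.util.*;

public class CallConstraint {
    // A CALL constraint is a disjunction of argument tuples; it is satisfied
    // iff the formals' values match one tuple componentwise.
    static boolean satisfies(List<String[]> argSets, String[] formals) {
        for (String[] actuals : argSets) {
            if (Arrays.equals(actuals, formals)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        List<String[]> sets = List.of(new String[]{"a", "b"},
                                      new String[]{"c", "d"});
        System.out.println(satisfies(sets, new String[]{"a", "b"})); // true
        System.out.println(satisfies(sets, new String[]{"a", "d"})); // false: mixed tuple
    }
}
```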
4.2.2.3 Handling return
The bytecode instruction for return takes a program variable as an argument—the variable to be returned. When we flow over a return statement in bytecode, we add the returned variable to a set of tracked variables.
After analyzing a method, we add a new PHI constraint to link the formal return variable
to the possible tracked variables. For a given summary node, we generate the constraint:
\[ r_m = \phi(r_1, \ldots, r_n) \quad \text{[PHI]} \]
where \( r_1, \ldots, r_n \) are the variables potentially returned by the method at some point in
its execution.
### 4.2.2.4 Example
We describe an example of generating constraints in a very basic program. The program
calls a method, \texttt{addBar()}, that appends the string literal “bar” to its argument. Consider
the following program:
```java
public class InterproceduralExample {
static final String s = "foo";
public static void main(String[] args) {
String s1 = s;
String s2 = addBar(s1); // should be "foobar"
}
private static String addBar(String arg) {
String local = "bar";
return arg.concat(local);
}
}
```
We walk through the example line-by-line, using names to denote the locations of string
variables (e.g., \texttt{main-literal-foo} refers to a string variable in \texttt{main()} representing the
literal “foo”). Our analysis proceeds by method.
**Analyzing \texttt{main()}** When we access a static field:
```
String s1 = s;
```
it generates the constraint:
\[
\text{main-literal-foo} = \text{"foo"} \quad \text{[CONSTANT]}
\]
When we make a call to \texttt{addBar(s1)}:
```
String s2 = addBar(s1); // should be "foobar"
```
we find the summary node for the call graph node corresponding to the call, and we link the actual variable (main-literal-foo, which represents s1 in the program) to the formal variable. We also link the return variable of addBar() to the string variable corresponding to s2. This generates the constraint:
\[
\begin{align*}
\text{addBar}(\text{main-literal-foo}) & \quad \text{[CALL]} \\
\text{main-local-s2} &= \text{addBar-return} \quad \text{[COPY]}
\end{align*}
\]
**Analyzing addBar()** Our interprocedural analysis now delves into the invoked method, performing an intraprocedural analysis on addBar().
We link the formal variables to the corresponding local arguments. In bytecode, we see that addBar() is defined with a parameter named arg, which acts as a local variable:
```java
private static String addBar(String arg) {
...
```
This generates the constraint:
\[
\begin{align*}
\text{addBar-f0} &= \text{addBar-local-arg} \quad \text{[COPY]}
\end{align*}
\]
As we analyze the body of addBar(), we encounter a string literal assignment:
```java
String local = "bar";
```
generating the constraint:
\[
\begin{align*}
\text{addBar-literal-bar} &= \text{"bar"} \quad \text{[CONSTANT]}
\end{align*}
\]
Finally, we want to return the result of concatenating the string literal:
```java
return arg.concat(local);
```
Note that the bytecode implicitly creates a new local variable v0, which does not correspond to any variable in the source code. This first generates the constraint:
\[
\begin{align*}
\text{addBar-local-v0} &= \text{addBar-local-arg} + \text{addBar-literal-bar} \quad \text{[CONCAT]}
\end{align*}
\]
We link the formal return variable using a PHI constraint of all of the possible local variables that can be returned. In this case, there is only one, meaning that the return statement generates the following constraint (equivalent to a Copy constraint):
\[
\begin{align*}
\text{addBar-return} &= \text{phi} (\text{addBar-local-v0}) \quad \text{[PHI]}
\end{align*}
\]
Solution These constraints are translated to the language of the string solver, which solves for a satisfying assignment to all of the variables. Since this program is simple, we have an intuition that our solver should find that the variable \( \text{main-local-s}_2 \) will be “foobar”.
Consider the following solution. By inspection, it can be confirmed that this satisfies the constraints.
\[
\sigma = [\text{addBar-}f_0 \mapsto \text{“foo”} \\
\text{main-literal-}foo \mapsto \text{“foo”} \\
\text{addBar-local-arg} \mapsto \text{“foo”} \\
\text{addBar-literal-bar} \mapsto \text{“bar”} \\
\text{addBar-local-v}_0 \mapsto \text{“foobar”} \\
\text{addBar-return} \mapsto \text{“foobar”} \\
\text{main-local-s}_2 \mapsto \text{“foobar”}]
\]
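The solution can be checked by propagating the constraints by hand. The following sketch (our own illustration, not part of the tool) replays the CONSTANT, CALL, COPY, CONCAT, and PHI constraints from the example and confirms that they force `main-local-s2` to "foobar":

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical check that the solution sigma from the text satisfies the
// example's constraints (CONSTANT, CALL, COPY, CONCAT, PHI).
public class SolutionCheck {
    public static void main(String[] args) {
        Map<String, String> sigma = new HashMap<>();
        sigma.put("main-literal-foo", "foo");                         // CONSTANT
        sigma.put("addBar-f0", sigma.get("main-literal-foo"));        // CALL
        sigma.put("addBar-local-arg", sigma.get("addBar-f0"));        // COPY
        sigma.put("addBar-literal-bar", "bar");                       // CONSTANT
        sigma.put("addBar-local-v0",                                  // CONCAT
            sigma.get("addBar-local-arg") + sigma.get("addBar-literal-bar"));
        sigma.put("addBar-return", sigma.get("addBar-local-v0"));     // PHI (single input)
        sigma.put("main-local-s2", sigma.get("addBar-return"));       // COPY
        System.out.println(sigma.get("main-local-s2"));  // foobar
    }
}
```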
4.3 Limitations
When our tool analyzes a program for security vulnerabilities, it adds further constraints on the variable of interest. For example, in the case of SQL injection, this is the string that is sent to the database as a query. This is further elaborated in the chapter on evaluation.
The analysis is neither sound nor complete. If the string solver does not find a satisfying solution, this does not guarantee that the program is free of vulnerabilities. If the string solver finds a satisfying solution, this does not guarantee that the program contains a vulnerability.
Our analysis is useful as a vulnerability finding tool, which is demonstrated in its evaluation in the next section. If the string solver returns a satisfying solution, it provides a starting point for the programmer to find potential security flaws.
There are three notable limitations to our analysis.
First, the analysis only generates constraints for a few major string operations, such as concatenation and assignment. It excludes many elementary operations that manipulate strings – such as substring replacement, regular expression replacement, and character replacement – due to the complexity of these analyses. These operations matter for a precise analysis, since many security-related functions, such as sanitizers, employ such manipulations. For example, our analysis currently cannot analyze a SQL injection sanitizer that escapes single quotation marks, replacing ' with \'.
Second, the analysis does not support full-fledged web applications. Performing static analysis on the frameworks and XML configurations that most Java web frameworks use is complex, as illustrated by projects such as Framework 4 Frameworks [15]. Our evaluation is performed on toy Java applications that simulate features of web application frameworks, such as web requests and database connections.
Third, due to limitations in the capabilities of available string solvers, a satisfying solution binds each string variable to exactly one string. In contrast, there are alternative ways of expressing the set of strings that a string variable can take, such as associating each string variable with a context-free grammar approximating a set of strings. This leads to certain limitations in the analysis. For example, in a loop, where a variable takes on different values through iterations of the loop, our ability to represent such variables is severely limited.
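The loop limitation above can be seen in a minimal example (our own illustration): across iterations the variable ranges over a set of strings of the form a b*, which a one-string-per-variable solution cannot represent.

```java
// Illustration of the loop limitation: s takes the values "a", "ab", "abb", ...
// across iterations, a set the single-string binding of a solver model
// cannot capture; a context-free grammar (here, regular) could.
public class LoopExample {
    public static void main(String[] args) {
        String s = "a";
        for (int i = 0; i < 3; i++) {
            s = s + "b";
        }
        System.out.println(s);  // abbb -- only one of the many possible values
    }
}
```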
Chapter 5
Implementation
5.1 Constraint Analysis
The analysis was developed using Accrue Bytecode, an existing framework for Java bytecode analysis. Accrue Bytecode leverages the IBM T.J. Watson Libraries for Analysis (WALA) framework for representing Java bytecode.
The analysis and translation were written in Java.
5.2 SMT Solver
The CVC4 SMT solver was used to solve string constraints. CVC4 was selected over other existing string solvers, such as Kaluza [14], HAMPI [7], and Z3-str [16], for its expanded support of string formulas. The constraints were solved using the CVC4 Java API bindings.
5.3 Evaluation
The tool was run on example programs using a 2 GHz Intel Core i7 Macbook Pro with 8 GB of RAM.
Chapter 6
Evaluation
The constraints that we generate provide a foundation for approximating the string values that variables take on at given program points. In this section, we demonstrate how our analysis can be augmented to detect security vulnerabilities in Java programs by adding certain constraints.
To evaluate our end-to-end tool for generating and solving constraints for a given program, we tested it on two example programs. These example programs contain SQL injection or cross-site scripting vulnerabilities.
6.1 Example programs
6.1.1 SQL Injection
We evaluated the tool on a 43-line example Java program, SQLToyApp, that simulates the login page of a website. It accepts a username and password as input, sending the pair of strings to the database for validation. The full code is included in the appendix.
Given user inputs $\textit{username}$, $\textit{password}$, SQLToyApp sends the following query:
\begin{verbatim}
SELECT * FROM users WHERE username = 'username'
AND password = 'password'
\end{verbatim}
Our strategy for determining whether a SQL injection exists is as follows. Following the evaluation model of the HAMPI string solver [7], we test whether there is a satisfying assignment in which the query string contains the substring "1' or '1' = '1". To achieve this, we add the following assertion in the string solver, using the
string variable \( q \) for the query string sent to the database.
\[
q \text{ contains } "1' or '1' = '1"
\]
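One way to express this assertion is as an SMT-LIB fragment over the theory of strings, as accepted by CVC4. This is an illustrative sketch only: the variable name `q` and the exact encoding the tool emits are assumptions.

```smtlib
(set-logic QF_S)
(set-option :produce-models true)
; q is the query string sent to the database
(declare-fun q () String)
(assert (str.contains q "1' or '1' = '1"))
(check-sat)
(get-model)
```

If the solver answers `sat`, the model gives a concrete query string witnessing the injection.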
We fix the input for \textit{username} to be "admin", assuming that the attacker aims to log in as the administrative account. However, note that an attacker could also input \texttt{username} = "'1' OR '1' = '1'", which would return a result set containing all users – the first of which would be authenticated in the login.
This is motivated by the fact that the most common SQL injection attacks take advantage of unescaped quotation marks and tautologies (\texttt{or '1' = '1'}). More complex SQL injection vectors, such as those using stored procedures, would require a different approach.
**Results** Our tool ran the pointer analyses, generated 30 string constraints, and discovered a satisfying solution containing a vulnerability in 9.054 seconds. The satisfying solution contained 31 string variables. Of particular interest is the assignment for the query variable \( q \), for which the string solver solution finds that:
\[
q = \text{SELECT * FROM users WHERE username = 'admin' AND password = '1' or '1' = '1'}
\]
This corresponds to the scenario where a malicious user provides the following inputs: \textit{username} = admin and \textit{password} = 1' or '1' = '1.
This illustrates that the tool produces a string assignment that highlights a SQL injection vulnerability in SQLToyApp. Given a Java program as input and additional constraints designating the strings of interest, our tool outputs a satisfying solution that indicates a SQL injection vulnerability.
### 6.1.2 Cross-site scripting
We evaluated the tool on a 32-line example Java program, XSSToyApp, that simulates the canonical example of a blog post, rendering an HTML page that takes a user comment as input. The full code is included in the appendix.
Recall that if user comments in a blog are unescaped, then a malicious user can simply provide \texttt{<script>} tags containing arbitrary code that will be executed by any future clients viewing the comments. Our strategy for finding an XSS vulnerability is to add
the following assertion to the string solver, where html is the rendered HTML of the final webpage:
\[
\text{html contains "<script>alert(1);</script>"}
\]
where alert(1) serves as an arbitrary choice of JavaScript code. A malicious user would replace this with something more harmful.
**Results** Our tool ran the pointer analyses, generated 24 string constraints, and discovered a satisfying solution containing a vulnerability in 8.001 seconds. The satisfying solution for the tool assigns the following value to the string variable for the rendered HTML:
\[
\text{html} = \text{"<!DOCTYPE html><html><body><div><script>alert(1);</script></div></body></html>"}
\]
This corresponds to the scenario where a malicious user writes the following comment:
\[
\text{<script>alert(1);</script>}
\]
Similar to the SQL injection example, this applies the tool to a program representative of the canonical persistent XSS vulnerability.
Chapter 7
Conclusion
We designed a tool to detect security vulnerabilities in Java web applications, evaluating the tool on simulated web applications.
First, the tool generated string constraints from a Java program using interprocedural dataflow analysis. Second, it translated the string constraints to the language of the CVC4 string solver. Finally, the string solver generated a satisfying assignment of string variables from the given constraints.
We evaluated the tool by demonstrating that it could find satisfying assignments indicative of security vulnerabilities – SQL injection and cross-site scripting – in example Java programs.
However, the tool was subject to certain limitations. Although the analysis was neither sound nor complete, the tool can be used to guide programmers to potential security vulnerabilities.
This work provides the foundation for an end-to-end tool that generates string constraints and pipes them into an SMT solver. Future work in the area could expand the tool to support the infrastructure of Java web frameworks and further string library operations, which would allow analysis of sanitizing functions commonly used to secure applications.
Finally, the goal of leveraging an existing string solver highlights the limitations of existing string solvers, which often return a satisfying assignment that assigns each string variable to a single string value. String solvers with more expressiveness for solutions, such as associating each string variable with a context-free grammar for the set of possible strings, would allow for a more powerful analysis.
Appendices
Appendix A
Example programs
Below are the example programs, SQLToyApp and XSSToyApp, used in the evaluation.
A.1 SQL Injection
```java
package stringconstraint.tests;
import java.lang.StringBuilder;
/**
* An application that receives a username and a password from a form field
*/
public class SQLToyApp {
public static void main(String args[]) {
SQLToyApp app = new SQLToyApp();
app.login(args[0], args[1]);
}
public void login(String username, String password) {
String query = constructQuery(username, password);
DatabaseConnection dbc = new DatabaseConnection();
dbc.sendQuery(query);
}
private String constructQuery(String username, String password) {
        StringBuilder query = new StringBuilder("SELECT * FROM users WHERE ");
        query.append("username = '");
        query.append(username);
        query.append("'");
        query.append(" AND ");
        query.append("password = '");
        query.append(password);
        query.append("'");
        query.append(";");
        return query.toString();
    }

    /**
     * Mock database connection.
     */
    public class DatabaseConnection {
        public void sendQuery(String q) {
            // Do nothing.
        }
    }
}
```
Listing A.1: SQLToyApp, a program simulating a query for a user login
### A.2 XSS
```java
package stringconstraint.tests;

import java.lang.StringBuilder;

/**
 * An example application that renders an HTML page containing
 * an unescaped comment from a user.
 */
public class XSSToyApp {
    public static void main(String args[]) {
        // Suppose the comment, args[0], is retrieved from a database elsewhere
        XSSToyApp app = new XSSToyApp();
        app.renderPage(args[0]);
    }

    private String buildPage(String comment) {
        StringBuilder sb = new StringBuilder("<!DOCTYPE html>");
        sb.append("<html>");
        sb.append("<body>");
        sb.append("<div>");
        sb.append(comment);
        sb.append("</div>");
        sb.append("</body>");
        sb.append("</html>");
        return sb.toString();
    }

    public void renderPage(String comment) {
        String html = buildPage(comment);
        // Do something with the built page.
    }
}
```
Listing A.2: XSSToyApp, a program simulating the rendering of a webpage
References
<table>
<thead>
<tr>
<th>PART 1</th>
<th>INTRODUCING BPMN 2.0 AND ACTIVITI.................................1</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Introducing the Activiti framework 3</td>
</tr>
<tr>
<td>2</td>
<td>BPMN 2.0: what’s in it for developers? 19</td>
</tr>
<tr>
<td>3</td>
<td>Introducing the Activiti tool stack 32</td>
</tr>
<tr>
<td>4</td>
<td>Working with the Activiti process engine 49</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>PART 2</th>
<th>IMPLEMENTING BPMN 2.0 PROCESSES WITH ACTIVITI.............85</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>Implementing a BPMN 2.0 process 87</td>
</tr>
<tr>
<td>6</td>
<td>Applying advanced BPMN 2.0 and extensions 112</td>
</tr>
<tr>
<td>7</td>
<td>Dealing with error handling 146</td>
</tr>
<tr>
<td>8</td>
<td>Deploying and configuring the Activiti Engine 169</td>
</tr>
<tr>
<td>9</td>
<td>Exploring additional Activiti modules 193</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>PART 3</th>
<th>ENHANCING BPMN 2.0 PROCESSES ................................223</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>Implementing advanced workflow 225</td>
</tr>
<tr>
<td>11</td>
<td>Integrating services with a BPMN 2.0 process 260</td>
</tr>
<tr>
<td>12</td>
<td>Ruling the business rule engine 286</td>
</tr>
<tr>
<td>13</td>
<td>Document management using Alfresco 311</td>
</tr>
<tr>
<td>14</td>
<td>Business monitoring and Activiti 340</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>PART 4</th>
<th>MANAGING BPMN 2.0 PROCESSES ........................................367</th>
</tr>
</thead>
<tbody>
<tr>
<td>15</td>
<td>Managing the Activiti Engine 369</td>
</tr>
</tbody>
</table>
This first part of the book provides an introduction to the Activiti framework and the background about the BPMN 2.0 standard. In chapter 1, we’ll cover how to set up an Activiti environment, starting with the download of the Activiti framework. In chapter 2, you’ll be introduced to the main elements of the BPMN 2.0 standard in order to create process definitions. Chapter 3 offers an overview of the Activiti framework’s main components, including the Activiti Designer and Explorer. Finally, in chapter 4, we’ll discuss the Activiti API with several short code examples.
Every day, your actions are part of different processes. For example, when you order a book in an online bookstore, a process is executed to get the book paid for, packaged, and shipped to you. When you need to renew your driver’s license, the renewal process often requires a new photograph as input. Activiti provides an open source framework to design, implement, and run processes. Organizations can use Activiti to implement their business processes without the need for expensive software licenses.
This chapter will get you up and running with Activiti in 30 minutes. First, we’ll take a look at the different components of the Activiti tool stack, including a Modeler, Designer, and a REST web application. Then, we’ll discuss the history of the Activiti framework and compare its functionality with its main competitors, jBPM and BonitaSoft.
Before we dive into code examples in section 1.4, we’ll first make sure the Activiti framework is installed correctly. At the end of this chapter, you’ll have a running Activiti environment and a deployable example.
First, let’s look at Activiti’s tool stack and its different components, including the modeling environment, the engine, and the runtime explorer application.
### 1.1 The Activiti tool stack
The core component of the Activiti framework is the process engine. The process engine provides the core capabilities to execute Business Process Model and Notation (BPMN) 2.0 processes and create new workflow tasks, among other things. You can find the BPMN specification and lots of examples at [www.bpmn.org](http://www.bpmn.org), and we’ll go into more detail about BPMN in chapter 2. The Activiti project contains a couple of tools in addition to the Activiti Engine. Figure 1.1 shows an overview of the full Activiti tool stack.
Let’s quickly walk through the different components listed in figure 1.1. With the Activiti Modeler, business and information analysts are capable of modeling a BPMN 2.0-compliant business process in a web browser. This means that business processes can easily be shared—no client software is needed before you can start modeling. The Activiti Designer is an Eclipse-based plugin, which enables a developer to enhance the modeled business process into a BPMN 2.0 process that can be executed on the Activiti process engine. You can also run unit tests, add Java logic, and create deployment artifacts with the Activiti Designer.
In addition to the design tools, Activiti provides a number of supporting tools. With Activiti Explorer, you can get an overview of deployed processes and even dive into the database tables underneath the Activiti process engine. You can also use Activiti Explorer to interact with the deployed business processes. For example, you can get a list of tasks that are already assigned to you. You can also start a new process instance and look at the status of that newly created process instance in a graphical diagram.
 **Figure 1.1.** An overview of the Activiti tool stack: in the center, the Activiti process engine, and on the right and left sides, the accompanying modeling, design, and management tools. The grayed-out components are add-ons to the core Activiti framework.
Finally, there’s the Activiti REST component, which provides a web application that starts the Activiti process engine when the web application is started. In addition, it offers a REST API that enables you to communicate remotely with the Activiti Engine.
The different components are summarized in table 1.1.
<table>
<thead>
<tr>
<th>Component name</th>
<th>Short description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Activiti Engine</td>
<td>The core component of the Activiti tool stack that performs the process engine functions, such as executing BPMN 2.0 business processes and creating workflow tasks.</td>
</tr>
<tr>
<td>Activiti Modeler</td>
<td>A web-based modeling environment for creating BPMN 2.0-compliant business process diagrams. This component was donated by Signavio, which also provides a commercial modeling tool, named the Signavio Process Editor.</td>
</tr>
<tr>
<td>Activiti Designer</td>
<td>An Eclipse plugin that can be used to design BPMN 2.0-compliant business processes with the addition of Activiti extensions, such as a Java service task and execution listeners. You can also unit test processes, import BPMN 2.0 processes, and create deployment artifacts.</td>
</tr>
<tr>
<td>Activiti Explorer</td>
<td>A web application that can be used for a wide range of functions in conjunction with the Activiti Engine. You can, for example, start new process instances and get a list of tasks assigned to you. In addition, you can perform simple process management tasks, like deploying new processes and retrieving the process instance status.</td>
</tr>
<tr>
<td>Activiti REST</td>
<td>A web application that provides a REST interface on top of the Activiti Engine. In the default installation (see section 1.1.3), the Activiti REST application is the entry point to the Activiti Engine.</td>
</tr>
</tbody>
</table>
You can’t start developing without a clear understanding of the Activiti framework and its architecture, which is built around a state machine. Let’s take a closer look at the history of the Activiti framework and discuss the Activiti Engine in more detail.
### 1.2 Getting to know Activiti
When you start working with a new framework, it’s always good to know some project background and have an understanding of the main components. In this section, we’ll be looking at exactly that.
#### 1.2.1 A little bit of history
The Activiti project was started in 2010 by Tom Baeyens and Joram Barrez, the former founder and the core developer of jBPM (JBoss BPM), respectively. The goal of the Activiti project is to build a rock-solid open source BPMN 2.0 process engine. In the next chapter, we’ll talk in detail about the BPMN 2.0 specification, but in this chapter we’ll focus on the Activiti framework itself and getting it installed and up and running with simple examples.
Activiti is funded by Alfresco (known for its open source document management system of the same name; see www.alfresco.com and chapter 13 for more details), but Activiti acts as an independent, open source project. Alfresco uses a process engine to
support features such as a review and approval process for documents, which means that the document has to be approved by one user or a group of users. For this kind of functionality, Activiti is integrated into the Alfresco system to provide the necessary process and workflow engine capabilities.
NOTE jBPM was used in the past instead of Activiti to provide this process and workflow functionality. jBPM is still included in Alfresco, but it may be deprecated at some point in time.
Besides running the Activiti process engine in Alfresco, Activiti is built to run stand-alone or embedded in any other system. In this book, we’ll focus on running Activiti outside the Alfresco environment, but we’ll discuss the integration opportunities between Activiti and Alfresco in detail in chapter 13.
In 2010, the Activiti project started off quickly and succeeded in producing monthly (!) releases of the framework. In December 2010, the first stable and production-ready release (5.0) was made available. The Activiti developer community, including companies like SpringSource, FuseSource, and Mulesoft, has since been able to develop new functionality on a frequent basis. In this book, we’ll explore this contributed functionality, such as the Spring integration (chapter 4) and the Mule and Apache Camel integration (chapter 11).
But first things first. What can you do with a process engine? Why should you use the Activiti framework? Let’s discuss the core component, the Activiti Engine.
1.2.2 The basics of the Activiti Engine
Activiti is a BPMN 2.0 process-engine framework that implements the BPMN 2.0 specification. It’s able to deploy process definitions, start new process instances, execute user tasks, and perform other BPMN 2.0 functions, which we’ll discuss throughout this book.
But at its core, the Activiti Engine is a state machine. A BPMN 2.0 process definition consists of elements like events, tasks, and gateways that are wired together via sequence flows (think of arrows). When such a process definition is deployed on the process engine and a new process instance is started, the BPMN 2.0 elements are executed one by one. This process execution is similar to a state machine, where there’s an active state and, based on conditions, the state execution progresses to another state via transitions (think again of arrows). Let’s look at an abstract figure of a state machine and see how it’s implemented in the Activiti Engine (figure 1.2).
In the Activiti Engine, most BPMN 2.0 elements are implemented as states. They’re connected by leaving and arriving transitions, which are called sequence flows in BPMN 2.0. Every state (or corresponding BPMN 2.0 element) can have a piece of logic attached that is executed when the process instance enters the state. In figure 1.2, you can also look up the interface and implementing class that are used in the Activiti Engine. As you can see, the logic interface ActivityBehavior is implemented by many classes, because that’s where the logic of each BPMN 2.0 element lives.
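The state-machine idea can be sketched in a few lines of Java. This is an illustrative model, not Activiti's actual implementation; the `Behavior` interface stands in for the role that `ActivityBehavior` plays in the engine, and all names here are our own.

```java
// Minimal state-machine sketch: each state carries a piece of logic
// (cf. ActivityBehavior) and its outgoing transition plays the role
// of a BPMN 2.0 sequence flow.
public class StateMachineSketch {
    interface Behavior { void execute(StringBuilder log); }

    static class State {
        final String name;
        final Behavior behavior;
        State next;  // single outgoing transition, for simplicity
        State(String name, Behavior behavior) {
            this.name = name;
            this.behavior = behavior;
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        State start = new State("start", l -> l.append("start;"));
        State task  = new State("task",  l -> l.append("task;"));
        State end   = new State("end",   l -> l.append("end"));
        start.next = task;  // sequence flow: start -> task
        task.next  = end;   // sequence flow: task -> end
        // Execute the instance: enter each state, run its logic, follow the flow.
        for (State s = start; s != null; s = s.next) {
            s.behavior.execute(log);
        }
        System.out.println(log);  // start;task;end
    }
}
```

A real engine adds conditions on transitions, parallel branches, and persistence of the active state, but the execution loop is conceptually the same.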
Getting to know Activiti
When you see a complex BPMN 2.0 example later on in the book, remember that, in essence, it’s a rather simple state machine. Now let’s look at a couple of other open source process engines that offer functionality similar to Activiti, and also consider the differences.
1.2.3 Knowing the competitors
When you’re interested in an open source process engine like Activiti, it’s always good to know a little bit more about the competing open source frameworks. Because the main developers of Activiti were previously involved with the JBoss BPM or jBPM framework, there’s also some controversy surrounding this discussion. It’s obvious that jBPM and Activiti share a lot of the same architectural principles, but there are also many differences. We’ll only discuss the two main open source competitors of Activiti:
- **JBoss BPM or jBPM**—An open source process engine that first supported the custom jPDL process language but, as of version 5.0, supports BPMN 2.0. The jBPM project has merged with the JBoss Drools project (an open source business-rule management framework), and jBPM replaced Drools Flow as the rule flow language for the Drools framework.
- **BonitaSoft**—An open source process engine that provides support for the BPMN 2.0 process language. The main differentiators of BonitaSoft are the large set of supported elements and the integrated development environment.
Let’s discuss the similarities and differences between Activiti and its two competitors in a bit more detail.
**Activiti and jBPM**
Activiti and jBPM have a lot in common: they’re both developer-oriented process engine frameworks built around the concept of a state machine (see section 1.2.2).
Because jBPM 5 also implements the BPMN 2.0 specification, a lot of similar functionality can be found. But there are a number of differences that are important to mention; see table 1.2.
Table 1.2 Main differences between Activiti and jBPM
<table>
<thead>
<tr>
<th>Description</th>
<th>Activiti</th>
<th>jBPM</th>
</tr>
</thead>
<tbody>
<tr>
<td>Community members</td>
<td>Activiti has a base team consisting of Alfresco employees. In addition, companies like SpringSource, FuseSource, and MuleSoft provide resources on specific components. There are also individual open source developers committing to the Activiti project.</td>
<td>jBPM has a base team of JBoss employees. In addition, there are individual committers.</td>
</tr>
<tr>
<td>Spring support</td>
<td>Activiti has native Spring support, which makes it easy to use Spring beans in your processes and to use Spring for JPA and transaction management.</td>
<td>jBPM has no native Spring support, but you can use Spring with additional development effort.</td>
</tr>
<tr>
<td>Business rules support</td>
<td>Activiti provides a basic integration with the Drools rule engine to support the BPMN 2.0 business rule task.</td>
<td>jBPM and Drools are integrated on a project level, so there’s native integration with Drools on various levels.</td>
</tr>
<tr>
<td>Additional tools</td>
<td>Activiti provides modeler (Oryx) and designer (Eclipse) tools to model new process definitions. The main differentiator is the Activiti Explorer, which provides an easy-to-use web interface to start new processes, work with tasks and forms, and manage running processes. In addition, it provides ad hoc task support and collaboration functionality.</td>
<td>jBPM also provides a modeler based on the Oryx project and an Eclipse designer. With a web application, you can start new process instances and work with tasks. The form support is limited.</td>
</tr>
<tr>
<td>Project</td>
<td>Activiti has a strong developer and user community with a solid release schedule of two months. Its main components are the Engine, Designer, Explorer, and REST application.</td>
<td>jBPM has a strong developer and user community. The release schedule isn’t crystal clear, and some releases have been postponed a couple of times. The Designer application is (at the moment of writing) still based on Drools Flow, and the promised new Eclipse plugin keeps getting postponed.</td>
</tr>
</tbody>
</table>
It’s always difficult to compare two open source frameworks objectively, and this book is about Activiti. This book by no means presents the only perspective on the differences between the frameworks, but it identifies a number of differences that you can consider when making a choice between them.
Next up is the comparison between Activiti and BonitaSoft.
**ACTIVITI AND BONITASOFT**
BonitaSoft is the company behind Bonita Open Solution, an open source BPM product. There are a number of differences between Activiti and BonitaSoft:
- Activiti is developer-focused and provides an easy-to-use Java API to communicate with the Activiti Engine. BonitaSoft provides a tool-based solution where you can click and drag your process definition and forms.
- With Activiti, you’re in control of every bit of the code you write. With BonitaSoft, the code is often generated from the developer tool.
- BonitaSoft provides a large set of connectivity options to a wide range of third-party products. This means it’s easy to configure a task in the developer tool to connect to SAP or query a particular database table. With Activiti, the connectivity options are also very broad (due to the integration with Mule and Camel), but they’re more developer focused.
Although both frameworks focus on supporting the BPMN 2.0 specification and offering a process engine, they take different implementation angles. BonitaSoft provides a development tool where you can draw your processes and configure and deploy them without needing to write one line of code. This means that you aren’t in control of the process solution you’re developing. Activiti provides an easy-to-use Java API that will need some coding, but, in the end, you can easily embed it into an application or run it on every platform you’d like.
As you can see, Activiti is not the only open source process engine capable of running BPMN 2.0 process models, but it’s definitely a flexible and powerful option, and one that we’ll discuss in detail in this book. Now that you know the different components of Activiti, let’s get the framework installed on your development machine.
1.3 Installing the Activiti framework
The first thing you have to do is point your web browser to the Activiti website at www.activiti.org. You’ll be guided to the latest release of Activiti via the download button. Download the latest version and unpack the distribution to a logical folder, such as
C:\activiti (Windows)
/usr/local/activiti (Linux or Mac OS)
This isn’t the beginning of a long and complex installation procedure—with Activiti, there’s a setup directory that contains an Ant build file that installs the Activiti framework. The directory structure of the distribution is shown in figure 1.3.
Before you go further with the installation procedure, make sure that you’ve installed a Java 5 SDK or higher, pointed the JAVA_HOME environment variable to the Java installation directory, and installed a current version (1.8.x or higher) of Ant (http://ant.apache.org). Shortcuts to the Java SDK and the Ant framework are also provided on the Activiti download page.
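You can sanity-check the Java prerequisite programmatically. The helper below is hypothetical (it’s not part of Activiti or its setup scripts); it parses both the old `1.x` and the newer single-number Java version schemes:

```java
// Hypothetical preflight helper (not part of Activiti): checks that the
// running JVM meets Activiti's Java 5 minimum and that JAVA_HOME is set.
public class PreflightCheck {
    // "1.5", "1.8" -> 5, 8; "9", "17" -> 9, 17 (newer JVM version scheme)
    static int majorVersion(String spec) {
        String[] parts = spec.split("\\.");
        return parts[0].equals("1") ? Integer.parseInt(parts[1])
                                    : Integer.parseInt(parts[0]);
    }

    static boolean meetsMinimum(String spec, int min) {
        return majorVersion(spec) >= min;
    }

    public static void main(String[] args) {
        String spec = System.getProperty("java.specification.version");
        String javaHome = System.getenv("JAVA_HOME");
        System.out.println("Java " + spec + " -> "
                + (meetsMinimum(spec, 5) ? "OK" : "too old"));
        System.out.println("JAVA_HOME "
                + (javaHome != null ? "set to " + javaHome : "is not set"));
    }
}
```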
The last thing to confirm is that you have an internet connection available without a proxy, because the Ant build file will download additional packages. If you’re behind a proxy, make sure you’ve configured the Ant build to use that proxy (more info can be found at http://ant.apache.org/manual/proxy.html).
When you open a terminal or command prompt and go to the setup directory shown in figure 1.3, you only have to run the ant command (or ant demo.start). This will kick off the Activiti installation process, which will look for a build.xml file in the setup directory. The installation performs the following steps:
1. An H2 database is installed to /apps/h2, and the H2 database is started on port 9092.
2. The Activiti database is created in the running H2 database.
3. Apache Tomcat 6.0.x is downloaded and installed to /apps/apache-tomcat-6.0.x, where x stands for the latest version.
4. Demo data, including users, groups, and business processes, are installed to the H2 database.
5. The Activiti REST and Activiti Explorer WARs are copied to the webapps directory of Tomcat.
6. Tomcat is started, which means that the Activiti Explorer and REST applications are running.
7. Depending on your OS, a web browser is started by the installation script with the Activiti Explorer URL. On Windows 7, no web browser is started; in other versions of Windows, the web browser is only started if you have Firefox installed.
When the Ant script has finished, you have the Activiti tool stack installed and running. That’s not bad for about a minute of installation time. The Ant build file isn’t only handy for installing Activiti but also for doing common tasks, like stopping and starting the H2 database (ant h2.stop, ant h2.start) and the Tomcat server (ant tomcat.stop, ant tomcat.start) and for re-creating a vanilla database schema (ant internal.db.drop, ant internal.db.create). It’s worth the time to look at the Ant targets in the Ant build file.
The installation of Activiti consists foremost of two web applications being deployed to a Tomcat server and a ready-to-use H2 database being created with example processes, groups, and users already loaded. Figure 1.4 shows the installation result in a schematic overview.
Notice that we haven’t yet installed the Activiti Modeler and Designer applications. These components aren’t part of the installation script and have to be installed separately. We’ll discuss how to do this in chapter 3.
To verify whether the installation has succeeded, the Activiti Explorer, listed in table 1.3, should be available via your favorite web browser. You can use the user kermit with password kermit to log in. To work with the Activiti REST application, you can use a REST client, such as the REST client Firefox plugin. You can read more about the Activiti REST API in chapter 8.
Let’s try to implement a simplified version of a book order process. We could use the Activiti Modeler to first model the process, and the Activiti Designer to implement and deploy the process, but it’s better to start off with a BPMN 2.0 XML document for learning purposes. There won’t be any drag-and-drop development, but get ready for some XML hacking.
Table 1.3 The URI of the Activiti Explorer and REST web applications available for you after the installation of Activiti
<table>
<thead>
<tr>
<th>Application name</th>
<th>URI</th>
<th>Short description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Activiti Explorer</td>
<td><a href="http://localhost:8080/activiti-explorer">http://localhost:8080/activiti-explorer</a></td>
<td>The Explorer application can be used to work with the deployed processes. This is a good starting point from which to try the example processes.</td>
</tr>
<tr>
<td>Activiti REST</td>
<td><a href="http://localhost:8080/activiti-rest/service">http://localhost:8080/activiti-rest/service</a></td>
<td>The REST application can be used to gain remote access to the Activiti Engine via a REST interface. For all available REST services, you can look in the Activiti user guide that can be found on the Activiti website.</td>
</tr>
</tbody>
</table>
By trying the Activiti Explorer application, you can verify whether the installation was successful. After logging in and clicking on the Process tab, you should get a list of the example processes that are deployed on the Activiti Engine.
Working with demo processes is fun, but it’s even better to try out your own developed business process.
1.4 Implementing your first process in Activiti
Figure 1.4 An overview of the installation result of the Activiti tool stack, including a running Tomcat server and H2 database with the two Activiti web applications already deployed.
1.4.1 Say hello to Activiti
We’ll keep things simple for now; if you don’t understand every construct already, don’t be worried—we’ll discuss the BPMN 2.0 elements in more detail in chapter 2.
In the following listing, a starter for the BPMN 2.0 XML definition of the book order process is shown with only a start event, an end event, and a sequence flow to connect the two.
Listing 1.1 bookorder.simple.bpmn20.xml document with only a start and end event
```xml
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
targetNamespace="http://www.bpmnwithactiviti.org">
<process id="simplebookorder" name="Order book">
<startEvent id="startevent1" name="Start"/>
<sequenceFlow id="sequenceflow1" sourceRef="startevent1" targetRef="endevent1"/>
<endEvent id="endevent1" name="End"/>
</process>
</definitions>
```
A BPMN 2.0 XML definition always starts with a `definitions` element that is identified with a namespace from the OMG BPMN specification. Each process definition must also define a namespace; here, you define a `targetNamespace` with the book’s website as its attribute value. Activiti also provides a namespace, which enables you to use Activiti extensions to the BPMN 2.0 specification, as you’ll see in chapter 4. You can now run this simple process to test whether you’ve defined the process definition correctly and set up the environment properly.
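As a quick structural sanity check, independent of Activiti, the JDK’s built-in DOM parser can verify that the sequence flow in listing 1.1 wires the start event to the end event. The class name below is made up for illustration:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Hypothetical helper (not part of Activiti): checks the wiring of the
// simple book order process from listing 1.1 using the JDK DOM parser.
public class BpmnStructureCheck {
    static final String BPMN =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
      + "<definitions xmlns=\"http://www.omg.org/spec/BPMN/20100524/MODEL\""
      + " targetNamespace=\"http://www.bpmnwithactiviti.org\">"
      + "<process id=\"simplebookorder\" name=\"Order book\">"
      + "<startEvent id=\"startevent1\" name=\"Start\"/>"
      + "<sequenceFlow id=\"sequenceflow1\" sourceRef=\"startevent1\""
      + " targetRef=\"endevent1\"/>"
      + "<endEvent id=\"endevent1\" name=\"End\"/>"
      + "</process></definitions>";

    // True when the sequence flow connects the start event to the end event.
    static boolean flowConnectsStartToEnd() {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(BPMN.getBytes("UTF-8")));
            Element flow = (Element) doc.getElementsByTagName("sequenceFlow").item(0);
            Element start = (Element) doc.getElementsByTagName("startEvent").item(0);
            Element end = (Element) doc.getElementsByTagName("endEvent").item(0);
            return flow.getAttribute("sourceRef").equals(start.getAttribute("id"))
                && flow.getAttribute("targetRef").equals(end.getAttribute("id"));
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("flow wired correctly: " + flowConnectsStartToEnd());
    }
}
```

The Activiti engine performs a much richer validation at deployment time; this snippet only illustrates the `sourceRef`/`targetRef` wiring that the engine relies on.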
To test this process, you have to create a Java project in your favorite editor. In this book, we’ll use Eclipse for the example description, because the Eclipse Designer is only available as an Eclipse plugin. But it’s easier to download the source code from the book’s website at Manning (or you can go directly to the Google code repository at [http://code.google.com/p/activitiinaction](http://code.google.com/p/activitiinaction)) and import the examples from there.
When you import the `bpmn-examples` project (used in this chapter), the Activiti libraries have to be added to the Java build path. The book’s source code uses Maven to retrieve all the necessary dependencies. The sample project’s code structure is explained in detail in chapter 4 and appendix A. But, starting from Eclipse Indigo (version 3.7.x), there’s good built-in Maven support, so it’s easy to get it working. Activate the Maven project capabilities by choosing the Configure–Convert to Maven Project option in the project menu when you right-click on the `bpmn-examples` project in Eclipse. Eclipse will download all the necessary dependencies and configure the classpath for you.
With the dependencies in place, you can look for the `SimpleProcessTest` unit test in the `org.bpmnwithactiviti.chapter1` package of the `bpmn-examples` project. The `SimpleProcessTest` class contains one test method, shown in the following listing.
Listing 1.2 First example of a JUnit test for an Activiti process deployment
```java
public class SimpleProcessTest {
@Test
public void startBookOrder() {
ProcessEngine engine = ProcessEngineConfiguration
.createStandaloneInMemProcessEngineConfiguration()
.buildProcessEngine();
RuntimeService runtimeService = engine.getRuntimeService();
RepositoryService repositoryService = engine.getRepositoryService();
repositoryService.createDeployment()
.addClasspathResource("bookorder.simple.bpmn20.xml")
.deploy();
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("simplebookorder");
assertNotNull(processInstance.getId());
System.out.println("id " + processInstance.getId() + " = " + processInstance.getProcessDefinitionId());
}
}
```
In just a few lines of code, you’re able to start up the Activiti process engine, deploy the book order process XML file from listing 1.1 to it, and start a process instance for the deployed process definition.
The process engine can be created with the `ProcessEngineConfiguration`, which can be used to start the Activiti engine and the H2 database. In this case, the process engine is started with an in-memory H2 database. There are different ways to start up an Activiti engine, and we’ll look at the options in detail in chapter 4.
**NOTE** Activiti can also run on database platforms other than H2, such as Oracle or PostgreSQL.
The next important step in listing 1.2 is the deployment of the bookorder.simple.bpmn20.xml file from listing 1.1. To deploy a process from Java code, you need to access the `RepositoryService` from the `ProcessEngine` instance. Via the `RepositoryService` instance, you can add the book order XML file to the list of classpath resources to deploy it to the process engine. The process engine will validate the book order process file and create a new process definition in the H2 database.
It’s easy to start a process instance based on the newly deployed process definition by invoking the `startProcessInstanceByKey` method on the `RuntimeService` instance, which is also retrieved from the `ProcessEngine` instance. The key `simplebookorder`, which is passed as the process key parameter, must be equal to the process id attribute from the book order process of listing 1.1. A process instance is stored in the H2 database, and a process instance ID is created that can be used as a reference to this specific process instance. This identifier is very important.
You can now run the unit test and the result should be green. In the console, you should see a message like this:
```
id 4 = simplebookorder:1:3
```
This message means that the process instance ID is 4 and that the instance was created from the `simplebookorder` process definition, version 1, whose process definition database ID is 3.
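The process definition identifier follows a `key:version:databaseId` pattern. A tiny helper (hypothetical, not part of the Activiti API) makes the three parts explicit:

```java
// Hypothetical helper: splits an Activiti process definition ID of the
// form "key:version:dbId" into its parts.
public class ProcessDefinitionId {
    final String key;
    final int version;
    final String dbId;

    ProcessDefinitionId(String raw) {
        String[] parts = raw.split(":");
        this.key = parts[0];
        this.version = Integer.parseInt(parts[1]);
        this.dbId = parts[2];
    }

    public static void main(String[] args) {
        ProcessDefinitionId id = new ProcessDefinitionId("simplebookorder:1:3");
        System.out.println("key=" + id.key
                + " version=" + id.version + " dbId=" + id.dbId);
        // → key=simplebookorder version=1 dbId=3
    }
}
```

The version component is what lets Activiti keep several revisions of the same process definition deployed side by side.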
Now that we’ve covered the basics, let’s implement a bit more of the book order process; then you can use the Activiti Explorer to claim and finish a user task for your process.
1.4.2 Implementing a simple book order process
It would be a shame to finish chapter 1 with an example that only contains a start and an end event. Let’s enhance your simple book order process with a script task and a user task so you can see a bit of action on the Activiti engine. First, the script task will print an ISBN number that will be provided as input to the book order process when it’s started in a unit test (like this example) or in the Activiti Explorer. Then, a user task will be used to manually handle the book ordering.
Activiti allows you to use the scripting language you want, but Groovy is supported by default. We’ll use a line of Groovy to print the ISBN process variable. The following listing shows a revised version of the book order process.
Listing 1.3 A book order process with a script and user task
```xml
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
targetNamespace="http://www.bpmnwithactiviti.org">
<process id="bookorder" name="Order book">
<startEvent id="startevent1" name="Start"/>
<sequenceFlow id="sequenceflow1" name="Validate order"
sourceRef="startevent1" targetRef="scripttask1"/>
<scriptTask id="scripttask1"
name="Validate order"
scriptFormat="groovy">
<script>
out.println "validating order for isbn " + isbn;
</script>
</scriptTask>
<sequenceFlow id="sequenceflow2" name="Sending to sales"
sourceRef="scripttask1" targetRef="usertask1"/>
<userTask id="usertask1" name="Work on order">
<documentation>book order user task</documentation>
<potentialOwner>
<resourceAssignmentExpression>
<formalExpression>sales</formalExpression>
</resourceAssignmentExpression>
</potentialOwner>
</userTask>
</process>
</definitions>
```
With the two additional tasks added to the process definition, the number of lines in the XML file grows quite a bit. In chapter 3, we’ll look at the Activiti Designer, which does the BPMN 2.0 XML generation for you and provides a drag-and-drop type of process development.
The script task uses `out.println`, where `out` is available within the Activiti script task for printing text to the system console. Also notice that the `isbn` variable can be used directly in the script code without any additional programming.
The user task contains a potential owner definition, which means that the task can be claimed and completed by users that are part of the group sales. When you run this process in a minute, you’ll see in the Activiti Explorer that this user task is available in the task list for the user kermit, who is part of the sales group.
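The claim semantics can be mimicked in a few lines of plain Java. This is a toy model, not Activiti’s task service, and the group data is invented for illustration: a user may claim a task when one of the user’s groups matches the task’s potential-owner group.

```java
import java.util.List;
import java.util.Map;

// Toy model of Activiti's candidate-group task assignment (illustrative
// data only; real membership lives in the Activiti identity tables).
public class TaskClaimModel {
    // user -> groups the user belongs to
    static final Map<String, List<String>> GROUPS = Map.of(
        "kermit", List.of("sales"),
        "gonzo", List.of("engineering"));

    // A user can claim a task if they belong to its candidate group.
    static boolean canClaim(String user, String candidateGroup) {
        return GROUPS.getOrDefault(user, List.of()).contains(candidateGroup);
    }

    public static void main(String[] args) {
        System.out.println("kermit can claim: " + canClaim("kermit", "sales"));
        System.out.println("gonzo can claim: " + canClaim("gonzo", "sales"));
    }
}
```

In Activiti, claiming additionally sets the task’s assignee so no other candidate can work on it concurrently.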
Now that you’ve added more logic to the process, you also need to change your unit test. One thing you need to add is an `isbn` process variable when starting the process. To test whether the user task is created, you also need to query the Activiti engine database for user tasks that can be claimed by the user kermit.
Take a look at the changed unit test in the next code listing. You can again find this unit test class in the `bpmn-examples` project in the `org.bpmnwithactiviti.chapter1` package.
Listing 1.4 A unit test with a process variable and user task query

```java
public class BookOrderTest {
  @Test
  public void startBookOrder() {
    ProcessEngine processEngine = ProcessEngineConfiguration
        .createStandaloneProcessEngineConfiguration()
        .buildProcessEngine();
    RepositoryService repositoryService = processEngine.getRepositoryService();
    RuntimeService runtimeService = processEngine.getRuntimeService();
    IdentityService identityService = processEngine.getIdentityService();
    TaskService taskService = processEngine.getTaskService();
    repositoryService.createDeployment()
        .addClasspathResource("bookorder.bpmn20.xml")
        .deploy();
    Map<String, Object> variableMap = new HashMap<String, Object>();
    variableMap.put("isbn", "123456");
    identityService.setAuthenticatedUserId("kermit");
    ProcessInstance processInstance = runtimeService
        .startProcessInstanceByKey("bookorder", variableMap);
    assertNotNull(processInstance.getId());
    List<Task> taskList = taskService.createTaskQuery()
        .taskCandidateUser("kermit")
        .list();
    assertEquals(1, taskList.size());
    System.out.println("found task " + taskList.get(0).getName());
    taskService.complete(taskList.get(0).getId());
  }
}
```
The BookOrderTest unit test starts a process instance with a Map of variables that contains one variable with a name of isbn and a value of 123456. In addition, when the process instance has been started, a TaskService instance is used to retrieve the tasks available to be claimed by the user kermit. Because there’s only one process instance running with one user task, you test that the number of tasks retrieved is 1.
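The task query in listing 1.4 uses a fluent builder style: each call returns the query object until `list()` executes it. A toy model of that shape (not Activiti’s `TaskService`; task data and names are invented) looks like this:

```java
import java.util.ArrayList;
import java.util.List;

// Toy fluent task query, mimicking the shape of Activiti's
// taskService.createTaskQuery().taskCandidateUser(...).list().
public class ToyTaskQuery {
    static class Task {
        final String name;
        final String candidateGroup;
        Task(String name, String candidateGroup) {
            this.name = name;
            this.candidateGroup = candidateGroup;
        }
    }

    static final List<Task> TASKS = List.of(
        new Task("Work on order", "sales"),
        new Task("Review budget", "management"));

    private List<String> userGroups = List.of();

    // Fluent style: each criterion returns the query itself.
    ToyTaskQuery taskCandidateGroups(List<String> groups) {
        this.userGroups = groups;
        return this;
    }

    // Terminal operation: runs the query against the task "table".
    List<Task> list() {
        List<Task> result = new ArrayList<>();
        for (Task t : TASKS)
            if (userGroups.contains(t.candidateGroup)) result.add(t);
        return result;
    }

    public static void main(String[] args) {
        List<Task> tasks = new ToyTaskQuery()
            .taskCandidateGroups(List.of("sales"))
            .list();
        System.out.println("found task " + tasks.get(0).name);
    }
}
```

Activiti’s real query translates the accumulated criteria into SQL against the engine’s database tables; the builder shape is the same.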
Also note that you’re no longer using the in-memory database; you’ve switched (createStandaloneProcessEngineConfiguration) to the default stand-alone H2 database that’s installed as part of the Activiti installation procedure. This means that, before running the unit test, the H2 database should be running (ant h2.start or ant demo.start). Now you can run the unit test to see if your changes work. In the console, you should see output similar to this:
validating order for isbn 123456
found task Work on order
The first line is printed by the Groovy script task in the running process instance. The last line confirms that one user task is available for claim for the user kermit. Because a user task is created, you should be able to see this task in the Activiti Explorer. Confirm that Tomcat has been started (ant tomcat.start or ant demo.start).
Now, point your browser to http://localhost:8080/activiti-explorer and log in with the user kermit and the same password. When you click the Queued link, you should see one task in the group Sales. When you click this Sales group, you should see a screen with one user task named Work on Order, like the screenshot shown in figure 1.5.
For the sake of completeness, you can claim the user task and see that it becomes available in the Inbox page. There you can complete the task, which triggers the process instance to complete to the end state. But, before you do that, you can click on the process link, Part of process: ‘Order Book’, to see details about the running process instance, as shown in figure 1.6.
In the process instance overview, you can get the details about the user tasks that aren’t yet completed and the process variables of the running instance.

Figure 1.5 A screenshot of the Activiti Explorer showing the user task of the book order process.

Figure 1.6 A screenshot of the Activiti Explorer application showing the details of a running process instance with open user tasks and the process instance variables.

The Activiti Explorer contains a lot more functionality, which we’ll discuss throughout the book, starting in chapter 3.
This completes our first journey in the Activiti framework. In the coming chapters, we’ll take a more detailed look at the Activiti tool stack and explore how to use Activiti’s Java API to, for example, create processes or retrieve management information. But, first, we’ll look more closely at BPMN 2.0.
1.5 **Summary**
In this chapter, we started with an introduction to Activiti, including its history and its competitors. We also got acquainted with the Activiti tool stack, and you implemented a simple book order process using a script and a user task. You also started the Activiti process engine, deployed a book order process, started a process instance, and did some unit testing on it with a couple of lines of Java code.
It’s obvious that Activiti provides you with a powerful API and tool stack to run your processes. But how can you model and implement these processes? The BPMN 2.0 specification is the foundation for the Activiti Engine, and, to prepare for the examples in the rest of the book, we’ll discuss the details of BPMN 2.0 in the next chapter.
Activiti streamlines the implementation of your business processes: with Activiti Designer you draw your business process using BPMN. Its XML output goes to the Activiti Engine which then creates the web forms and performs the communications that implement your process. It’s as simple as that. Activiti is lightweight, integrates seamlessly with standard frameworks, and includes easy-to-use design and management tools.
Activiti in Action introduces developers to business process modeling with Activiti. You’ll start by exploring BPMN 2.0 from a developer’s perspective. Then, you’ll quickly move to examples that show you how to implement processes with Activiti. You’ll dive into key areas of process modeling, including workflow, ESB usage, process monitoring, event handling, business rule engines, and document management integration.
What’s Inside
- Activiti from the ground up
- Dozens of real-world examples
- Integrate with standard Java tooling
Written for business application developers. Familiarity with Java and BPMN is helpful but not required.
Tijs Rademakers is a senior software engineer specializing in open source BPM, lead developer of Activiti Designer, and member of the core Activiti development team. He’s the coauthor of Manning’s Open Source ESBs in Action.
To download their free eBook in PDF, ePub and Kindle formats, owners of this book should visit manning.com/ActivitiinAction
“A comprehensive overview of the Activiti framework, the Activiti Engine, and BPMN.”
—From the Foreword by Tom Baeyens, Founder of jBPM
—From the Foreword by Joram Barrez, Cofounder of Activiti
“The very first book on Activiti ... immediately sets the bar high.”
—Roy Prins, CIBER Netherlands
“Just enough theory to let you get right down to coding.”
—Gil Goldman
Dalet Digital Media Systems
Proceedings of the Fifth International Conference on Graph Transformation - Doctoral Symposium (ICGT-DS 2010)
Realizing Impure Functions in Interaction Nets
Eugen Jiresch
17 pages
Institute for Computer Languages
Vienna University of Technology
Abstract: We propose and illustrate first steps towards an extension of interaction nets based on monads to handle functions with side effects (e.g., I/O, exceptions). We define three monads for common types of side effects and show their correctness by proving the monad laws.
Keywords: interaction nets, side effects, monads
1 Introduction and Overview
Programming languages are the key to using computational resources efficiently. To ensure the correctness and productivity of programs, formal verification of software has become increasingly important in recent years. Hence, programming languages based on rigorous formal models are indispensable. A prominent example is Haskell, a functional programming language based on the λ-calculus.
Interaction nets (INs for short) are a programming paradigm based on graph rewriting. The main idea is to represent programs as graphs (nets). Their execution is modeled by rewriting the graph based on specific node (agent) replacement rules. This simple system is able to model both high- and low-level aspects of computation. The theory behind interaction nets is well developed. They enjoy several useful properties such as strong confluence and locality of reduction. These ensure that single computational steps in a net do not interfere with each other and may thus be executed in parallel. Another important aspect is that interaction nets share computation: reducible expressions cannot be duplicated, which is beneficial for efficiency.
While the properties above demonstrate great potential for interaction nets, the existing prototype languages based on interaction nets lack important features for practical use, such as input/output functionality, state manipulation or exception handling. Such features of interaction with the real world are often considered impure, as opposed to pure (mathematical) functions which do not incorporate side effects. Interaction net programs (or systems) can be viewed as a pure language: the reduction of a net is not influenced by anything but its initial state.
Nowadays, software frequently interacts with the real world. Impure functions are considered a vital part of programs, hence pure languages need to incorporate them. Thus, they require an appropriate interface to deal with impure effects.
To combine pure functions and the outside world in this fashion, we adapt a monad-based model for interaction nets. Monads are a framework to structure computation that has been used, e.g., in Haskell with great success. However, there are two main obstacles when adapting monads to interaction nets: First, interaction nets need to support abstract datatypes in order to implement monads in a general way. Existing type systems ([2, 3]) for interaction nets are lacking this feature. Second, monadic functions have a distinct higher-order character. Even though interaction nets seem well-suited to incorporate higher-order functions (both data and computation are represented as nets and treated equally), as far as we know no model for higher-order interaction nets exists to date.
* The author was supported by the Austrian Academy of Sciences (ÖAW) under grant no. 22932 and by the Vienna PhD School of Informatics.
In this paper, we present three ad-hoc solutions to typical side effect computations in interaction nets based on monads. We show that these examples satisfy the monad laws. We then discuss the obstacles towards a more general side effect model based on the monad framework and present ideas on how to overcome them, which is currently work-in-progress. To summarize, the main contributions of the paper are:
- The definition of three monads, Maybe, Writer and List in interaction nets.
- A proof that the monad laws are satisfied by our implementation.
- An assessment of the remaining challenges towards a general solution.
This paper is organised as follows: The remainder of this section gives a short introduction to interaction nets. In Section 2, we discuss computational side effects and the challenges they represent for pure languages. In Section 3, we discuss our approach to handle side effects in interaction nets. We develop three examples of monadic side effect handling and show their correctness. In Section 4, we discuss open problems - appropriately extending existing interaction nets type systems and handling rules with arbitrary/variable agents. Finally, we present a conclusion and give an outlook on further research.
1.1 Interaction nets
Interaction nets have been introduced in [9]. A net is a graph consisting of agents (nodes) whose ports are connected by edges. Computation is modeled by rewriting the graph, which is based on interaction rules. These rules apply to two agents which are connected by their principal ports (denoted by an arrow in figures). We refer to such a pair of agents as an active pair or redex. Interaction rules preserve the interface of the net: no auxiliary ports are added or removed.
For example, the following rules model the addition of natural numbers (encoded by 0 and a successor function S):
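The addition rules (shown as a figure in the original) admit a common equational reading: `add(0, y) = y` and `add(S(x), y) = S(add(x, y))`. A minimal Python sketch of this rewrite system; the encoding of agents as nested tuples is ours, not part of any interaction net implementation:

```python
# Symbolic naturals as nested tuples: ("0",) is zero, ("S", n) is the successor of n.
def add(m, n):
    """Apply the two addition rules until a normal form is reached."""
    if m == ("0",):
        # Rule 1: Add interacting with 0 simply yields the second argument.
        return n
    _, pred = m
    # Rule 2: Add interacting with S recurses on the predecessor.
    return ("S", add(pred, n))

two = ("S", ("S", ("0",)))
three = ("S", two)
print(add(two, three))  # symbolic representation of five
```

Each recursive call corresponds to one rule application; since every step involves a distinct redex, the steps could in principle be performed in parallel, as the text explains next.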
This simple system allows for parallel evaluation of programs: If more than one interaction rule is applicable at the same time, they can be applied in parallel without interfering with each other. In addition, active pairs in a net cannot be duplicated: They are evaluated only once, which allows for sharing of computation. Only parts of a net without active pairs (i.e., in normal form) can be duplicated.
However, functions that incorporate side effects such as I/O or exception handling generally destroy the above properties. Yet, these impure functions are a vital part of any programming language. Our goal is to extend interaction nets in order to support computations with side effects.
2 Computational Side Effects and Impure Functions
A pure (mathematical) function’s result is determined only by its input parameters. While this definition may seem trivial at first glance, many functions and procedures do not qualify as pure functions. The state of the machine executing the program and interaction with the user or other devices can have considerable impact on a function’s result. Impure functions can be influenced by computational side effects (such as a change in machine state) or cause side effects themselves in addition to their return value.
To elaborate on this difference, consider for example the following function:
```python
def square(x):
    global y
    y = y + 1
    return x * x
```
The function `square` computes the square of its input parameter and returns it as a result. In addition to this, it accesses a global variable `y` (being a part of the state of the program) and increments it. This is a typical example of a computational side effect that occurs next to the "actual" result of the function.
However, we can define a pure function that has almost the same behaviour as `square` by adding additional parameters:
```python
def square_inc(x, y):
    return (x * x, y + 1)
```
The second argument of `square_inc` replaces the global variable `y`. Furthermore, the function returns a pair consisting of the square of `x` and the incremented `y`. This way, its behaviour (i.e., its result) is determined only by its input parameters.
Programs that consist only of pure functions have a clear advantage: They allow for equational reasoning, also referred to as "substituting equals for equals", which is a fundamental principle of mathematics. It makes it much easier to state and prove properties of programs, such as their correctness, than in the presence of side effects. Therefore, it is advantageous to encode side effects with additional parameters. However, doing so essentially transfers the problem to the programmer: handling additional arguments and "plumbing" them through the flow of the program is tedious and error-prone. A general and abstract solution to handle computational side effects in a pure environment is therefore needed.
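To make the "plumbing" burden concrete, here is a sketch (the function names are ours) of composing two state-passing functions in the style of `square_inc`: every intermediate state must be named and threaded through by hand:

```python
def square_inc(x, y):
    return (x * x, y + 1)

def double_inc(x, y):
    return (2 * x, y + 1)

# Manual plumbing: each intermediate state variable must be
# named explicitly and passed on to the next function.
def pipeline(x, y):
    a, y1 = square_inc(x, y)
    b, y2 = double_inc(a, y1)
    return (b, y2)

print(pipeline(3, 0))  # (18, 2)
```

Adding a third stage means inventing yet another state variable; the monad framework introduced next abstracts exactly this bookkeeping.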
3 A Monad Approach to Side Effects
We base our solution to impure functions in interaction nets on monads [11]. Monads are a model to structure computation. They have been used with great success in functional programming: monadic functions can encode side effects in a pure language and determine the order in which they occur [13].
Formally, a monad consists of an abstract datatype $M a$ and two (higher-order) functions operating on this type:
\[
\begin{align*}
\text{data } & M\ a \\
\text{return} &: a \rightarrow M\ a \\
\text{bind} &: M\ a \rightarrow (a \rightarrow M\ b) \rightarrow M\ b
\end{align*}
\]
Monads are used to augment values of some type with computations that contain potential side effects. Intuitively, a monad does the following:
- $M$ adds a sort of wrapper to some value $x$ of type $a$, potentially containing additional data or functions.
- $\text{return } x$ wraps a value $x$ without any additional computations.
- $\text{bind}$ handles sequentialization or ordering of function applications and their potential side effects.
A monad needs to satisfy the following laws:
1. \( \text{return } a >>= f = f\ a \)
2. \( m >>= \text{return} = m \)
3. \( (m >>= f) >>= g = m >>= (\lambda x \rightarrow f\ x >>= g) \)
Intuitively, $\text{return}$ performs no side effects. Law (1) states that if $\text{return } a$ is the first argument of $\text{bind}$, its result should be equal to the application of its second argument to $a$ (without any side effects). According to law (2), $\text{return}$ also acts as a right neutral element for $\text{bind}$. In turn, $\text{bind}$ has a property that is similar to associativity. This is expressed by law (3).
Interaction nets as such do not support abstract datatypes or higher-order functions, which are essential ingredients of monads. This, together with the restricted shape of rules, makes an adaptation non-trivial.
---
1 The monad functions and laws are expressed in a syntax similar to Haskell.
However, we can define a monad using just the basic features of interaction nets: agents and rules. We will use the following agents to model \textit{return} and \textit{bind}:
(Figure: the agents \textit{ret} and \textit{>>=} with their principal and auxiliary ports.)
The argument of \textit{return} (a base value) is connected to \textit{ret}'s principal port. The principal port of \textit{>>=} is connected to a wrapped value. Its auxiliary port is connected to an agent representing \textit{bind}'s second argument, a function.
While \textit{bind} may be modeled differently (e.g., more auxiliary ports), this approach captures the essence of monadic side effect handling in a natural and intuitive way. Moreover, it allows us to conveniently show one of the monad laws in the next subsection.
### 3.1 The Maybe Monad
We illustrate the basic idea and approach by a typical example, namely the monad \textit{Maybe} which is used for exception handling:
\[
\begin{align*}
\text{data Maybe } a &= \text{Just } a \mid \text{Nothing} \\
(1)\quad \text{return } x &= \text{Just } x \\
(2)\quad (\text{Just } x) >>= f &= f\ x \\
(3)\quad \text{Nothing} >>= f &= \text{Nothing}
\end{align*}
\]
The idea behind the maybe monad is simple: Any function capable of exception handling either returns a regular value (\text{Just} a) or a special value denoting an exception (Nothing). \textit{bind} is used to concatenate two functions of this form: If the result of the first function is a regular value, it is simply passed to the second function. If the result of the first function is Nothing, \textit{bind} also returns Nothing, while effectively discarding its second argument. The \textit{Maybe} monad satisfies the monad laws. This can easily be verified by substituting the variables of the equations with Nothing or Just values.
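The substitution argument can be replayed executably. A small Python transcription of the definition above (using `None` for Nothing and a one-element tuple for Just; this encoding is ours, not the paper's):

```python
def ret(x):
    # return x = Just x
    return (x,)

def bind(m, f):
    # (Just x) >>= f = f x ;  Nothing >>= f = Nothing
    return f(m[0]) if m is not None else None

# A partial function: halving succeeds only on even numbers.
half = lambda x: (x // 2,) if x % 2 == 0 else None
g = lambda x: (x + 1,)

# Law (1): return a >>= f  ==  f a
assert bind(ret(4), half) == half(4)
# Law (2): m >>= return  ==  m
assert bind((4,), ret) == (4,) and bind(None, ret) is None
# Law (3): (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
m = (8,)
assert bind(bind(m, half), g) == bind(m, lambda x: bind(half(x), g))
```

The assertions mirror the three monad laws; substituting `None` for `m` in law (3) also succeeds, since both sides collapse to `None`.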
The following rules model the Maybe monad for functions \textit{f} on natural numbers:
(Figure: interaction rules for the Maybe monad. Rules (1a) and (1b): \textit{ret} interacting with 0 or \textit{S} yields \textit{Jst} wrapping the value; rule (2): \textit{Jst} interacting with \textit{>>=} passes the wrapped value to the function; rules (3a) and (3b): \textit{No} interacting with \textit{>>=} yields an auxiliary agent \textit{aux}, which then discards the function and yields \textit{No}.)
The correspondence of interaction rules and the original definition of the Maybe monad is indicated by the rule labels. Note that rule (3) of the Maybe monad is split into two interaction rules, introducing an auxiliary agent. The reason for this is that the left-hand side (LHS) of an interaction rule may only consist of two agents. However, the LHS of rule (3) consists of three non-variable symbols (Nothing, f, >>=). This requires the rule to be split into two. The restriction of two agents per rule LHS can be relaxed, as we will see in the Writer and List monad examples. However, for the sake of clarity, we define the Maybe monad with ordinary interaction rules only (i.e., two agents per LHS).
### 3.1.1 Correctness of the Maybe Monad
The interaction rules above satisfy the monad laws, which can be shown by reduction sequences of nets. We write $\Rightarrow_r$ for a reduction step applying rule $r$. The following sequence proves law (1), $\text{return } a >>= f = f\ a$:
\[
\begin{array}{l}
0 \rightsquigarrow \textit{ret} \rightsquigarrow \textit{>>=} \rightsquigarrow f \\
\quad \Rightarrow_{(1a)} \quad 0 \rightsquigarrow \textit{Jst} \rightsquigarrow \textit{>>=} \rightsquigarrow f \\
\quad \Rightarrow_{(2)} \quad 0 \rightsquigarrow f
\end{array}
\]
We only show the case $a = 0$. For $a > 0$, law (1) can be shown in the same way. Law (2) is proved by the following sequence: (for $m = \text{Just } 0$)
\[
\begin{array}{l}
0 \rightsquigarrow \textit{Jst} \rightsquigarrow \textit{>>=} \rightsquigarrow \textit{ret} \\
\quad \Rightarrow_{(2)} \quad 0 \rightsquigarrow \textit{ret} \\
\quad \Rightarrow_{(1a)} \quad 0 \rightsquigarrow \textit{Jst}
\end{array}
\]
Again, the case $m = \text{Just } (S x)$ is proved similarly. Note that the case $m = \text{Nothing}$ follows trivially from the definition of rules (3a) and (3b).
We can show the monad law (3) in a more general way. In fact, the following proof will hold for all three monads defined in this paper. Our main argument is that if an agent with one auxiliary port is used to model $\text{bind}$, law (3) is always true.
**Proposition 3.1.2** If an agent with only one auxiliary port is used to encode $>>=$ in interaction nets, then monad law (3), $(m >>= f) >>= g = m >>= (\lambda x \rightarrow f\ x >>= g)$, holds.
**Proof.** We use an agent with one auxiliary port to model $>>= $. This way, the interaction net representation of each side of the above equation is the same, namely (the arbitrary net $M$ is denoted by a dashed square):
(Figure: the net $M$ connected to a chain of two \textit{>>=} agents whose auxiliary ports carry $f$ and $g$.)
Therefore, it is trivially true.
So why do the interaction rules above constitute a monad? Two reasons can be given: First, we have defined agents and rules that have the same functionality as the Maybe monad: The agents Jst and No model the abstract datatype Maybe a. The agents ret and >>= have a behaviour equivalent to the original monadic operators. Therefore, these rules can effectively be used to model exception handling in interaction nets. Second, as we have shown above, the monad laws hold.
3.2 The Writer Monad
The Writer monad adds the possibility for functions to have a secondary, optional output. This output is accumulated through the evaluation of the program, combining the result of individual functions. As the name suggests, the Writer monad generalizes logging or debugging output. In addition to a program’s result or actions, it provides information on how the result was achieved, whether any errors occurred or how much time the computation took.
The definition of the Writer monad builds on a type $S$ used for the secondary output and a function combining values of this type, which we denote by $\ast$. In order to satisfy the monad laws, $S$ and $\ast$ are required to form a monoid. Most commonly, the type of strings (with the empty string as the identity element) and string concatenation are used, which clearly satisfy the monoid laws.
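For instance, lists under concatenation with the empty list as identity form such a monoid; this is essentially the instance (lists of booleans) used in the interaction-net encoding of Section 3.2.1. A quick Python check of the monoid laws:

```python
e = []                           # identity element of the monoid
combine = lambda s, t: s + t     # list concatenation as the operation *

s, t, u = [True], [False], [True, True]
# Identity laws: e * s == s and s * e == s.
assert combine(e, s) == s and combine(s, e) == s
# Associativity: (s * t) * u == s * (t * u).
assert combine(combine(s, t), u) == combine(s, combine(t, u))
```

These two properties are exactly what the correctness proofs below rely on, e.g. that $s \mathbin{++} [\,]$ reduces to $s$.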
Formally, we define the Writer monad as follows, where $e$ denotes the identity element of $S$:
\[
\begin{align*}
\text{data Log } a &= (a, S) \\
(1)\quad \text{return } x &= (x, e) \\
(2)\quad (x, s) >>= f &= (y, s \ast s') \quad \text{where } (y, s') = f\ x
\end{align*}
\]
The monadic type $Log$ is simply a pair of the base type $a$ and the output type $S$. Here, $\text{return}$ simply yields a pair of its argument and the identity element of $S$, $\text{bind}$ applies $f$ to $x$ and returns a pair of the primary result of $f$ and a combination of $f$’s secondary and previous outputs.
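The definition can be sketched in Python with pairs `(value, log)` and list concatenation as the combining function (the helper names are ours):

```python
def w_ret(x):
    # (1): return x = (x, e), with the empty list as identity element.
    return (x, [])

def w_bind(m, f):
    # (2): (x, s) >>= f = (y, s * s')  where  (y, s') = f x
    x, s = m
    y, s2 = f(x)
    return (y, s + s2)

# Two functions with a secondary logging output.
logged_square = lambda x: (x * x, ["squared"])
logged_inc    = lambda x: (x + 1, ["incremented"])

result = w_bind(w_bind(w_ret(3), logged_square), logged_inc)
print(result)  # (10, ['squared', 'incremented'])
```

Note how `w_bind` performs exactly the plumbing that had to be written by hand in the `square_inc` example of Section 2.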
3.2.1 Adaptation to Interaction Nets using Nested Patterns
As with the Maybe monad, we restrict ourselves to the definition of monadic rules for concrete types. As primary return type $a$, we again use symbolic natural numbers. For the sake of simplicity, we use a list of booleans as secondary output type and regular list concatenation as combining function (denoted by $\ast$). The agent $log$ models the pair of return type (connected to the left auxiliary port) and secondary output (connected to the right auxiliary port). The agents $Cons$ and $Nil$ are used to express lists.
The interaction rules for the Writer monad are similar to the definition above. In the rule for $\text{bind}$, we use the agent $\text{ext}$ to “extract” both values of the $log$ pair that is returned by the generic agent $\alpha$.
(Figure: interaction rules for the Writer monad. Rules (1) and (2): \textit{ret} interacting with 0 or \textit{S} yields a \textit{log} pair of the value and an empty list; rule (3): \textit{log} interacting with \textit{>>=} applies the function to the first component and concatenates the secondary outputs using \textit{ext} and \textit{++}.)
Note that rule (3) (modelling bind) has three agents in its LHS. This clearly violates the restriction of two agents per LHS. However, this restriction can be overcome in the form of nested patterns. Nested patterns first appeared in [5]. We now recall the main definition of nested patterns and the conditions for preserving strong confluence.
**Definition 3.2.1 (Nested active pair [5])** A nested active pair is defined as follows:
- a regular active pair is a nested active pair, represented textually as \(\langle \alpha(x_1, \ldots, x_n) \triangleright\triangleright \beta(y_1, \ldots, y_m) \rangle\) (where \(x_i\) and \(y_j\) represent auxiliary ports).
- if \(P\) is a nested active pair, connecting the principal port of some other agent to some auxiliary port \(y_j\) of \(P\) also yields a nested active pair. We represent this textually as \(\langle P, y_j - \gamma(z_1, \ldots, z_n) \rangle\).
The framework of interaction rules with nested active pairs is called INP (Interaction nets with Nested Patterns). Since rules in INP may overlap due to the more complex LHS patterns, a condition to preserve strong confluence is introduced.
**Definition 3.2.2 (Sequential set property [5])** A set of nested active pairs \(N\) is sequential if and only if, when \(\langle P, y_j - \gamma(z_1, \ldots, z_n) \rangle \in N\), then
- the nested active pair \(P\) is itself in \(N\): \(P \in N\)
- for all free ports \(y\) in \(P\) except \(y_j\) and for all agents \(\alpha\), \(\langle P, y - \alpha(z_1, \ldots, z_n) \rangle \notin N\)
Intuitively, nested pairs in a sequential set do not overlap unless one is a subnet of another.
**Definition 3.2.3 ([5])** A set of rules \(R\) is well-formed if and only if
- there is a sequential set which contains every nested active pair of all LHSs in \(R\)
- for every rule \(P \rightarrow N \in R\), there is no rule \(P' \rightarrow N' \in R\) such that \(P'\) is a subnet of \(P\).
Well-formedness of a set of rules is sufficient for strong confluence:\(^2\)
---
\(^2\) More precisely, by strong confluence we here mean the property: whenever \(M \Leftarrow N \Rightarrow P\) with \(M \neq P\), then there exists a net \(Q\) s.t. \(M \Rightarrow Q \Leftarrow P\)
Proposition 3.2.4 ([5]) If a set of rules \( R \) in INP is well-formed, then the reduction relation induced by \( R \) is strongly confluent.
Using Proposition 3.2.4, we show that the rules for the Writer monad are strongly confluent.
Proposition 3.2.5 The set of rules of the Writer monad is strongly confluent.
Proof. The set of rules is well-formed: The active pairs of all LHSs are distinct. Therefore, no LHS is a subnet of another, and all LHSs can be added to a sequential set. Hence, by Proposition 3.2.4, the reduction using the rules of the Writer monad is strongly confluent.
To handle the access to the elements of the log pair and concatenation of lists, we define the following auxiliary rules.
(Figure: the rule (\textit{ext}) extracts the two components of a \textit{log} pair and returns them via the ports \(r_1\) and \(r_2\); the rules (\textit{++}) concatenate two lists built from \textit{cons} and \textit{nil}.)
The \textit{ext} agent extracts both elements of the \textit{log} pair and returns them via its two ports \(r_1\) and \(r_2\). The \textit{++} rules perform the basic concatenation of two lists.
3.2.2 Correctness of the Writer Monad
Similar to the Maybe monad, we can show the correctness of the Writer monad by reduction sequences of the terms on each side of the respective monad law. We begin with law (1), $\text{return } a >>= f = f\ a$. Let $a, b$ be arbitrary but fixed natural numbers such that $f\ a = (b, s)$ (where $s$ is the log information of $f$). We can then show that the interaction net encoding of $\text{return } a >>= f$ reduces to the one of $(b, s)$, namely
The arbitrary nets $a$, $b$ and $s$ are represented by dashed squares:
(Figure: reduction of the net encoding $\text{return } a >>= f$ to the net encoding $(b, s)$.)
The first step uses rule (1) or (2) depending on the concrete instance of $a$ (0 or $S(x)$).
The following reduction sequence shows law (2), $(m >>= \text{return}) = m$ (where $m = (a,s)$):
(Figure: reduction of the net encoding $m >>= \text{return}$ to the net encoding $(a, s)$.)
In the last step, the concatenation rule is applied multiple times (denoted by $\Rightarrow^*$) until a normal form is reached. It follows from the definition of the (\textit{++}) rules that $s \mathbin{++} [\,]$ reduces to $s$.
As we use the same modeling approach as with the Maybe monad, the third monad law holds due to Proposition 3.1.2.
3.3 The List Monad
The List monad is used to sequentialize functions that take a single value as input and return an ordered sequence of values as output. For example, consider two functions \( f :: a \rightarrow [b] \) and \( g :: b \rightarrow [c] \), where \([t]\) denotes a list of type \(t\). As each function takes just a single argument but returns a list, the operators of the List monad are used to compose them. In essence, the function \( g \) is simply applied to all results of \( f \), combining all results of \( g \)'s applications in a single list.
In Haskell, the List monad is defined as follows:
\[
\begin{align*}
\text{data List } a &= [a] \\
(1)\quad \text{return } x &= [x] \\
(2)\quad xs >>= f &= \text{concat}\ (\text{map}\ f\ xs)
\end{align*}
\]
Here, \textit{return} does nothing but wrapping a value in a singleton list. Furthermore, \( >>= \) applies the function \( f \) to every element of the input list and concatenates all resulting lists into a single list. \textit{concat} and \textit{map} are two well-known operators in functional programming: \textit{concat} simply merges a list of lists into a single list; \textit{map} has two arguments, a function and a list; \textit{map} applies the function to every element of the list and returns the list of results.
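As a concrete illustration, here is a Python sketch of this definition with `bind` implemented literally as `concat (map f xs)` (the helper names are ours):

```python
from itertools import chain

def l_ret(x):
    # (1): return x = [x], a singleton list.
    return [x]

def l_bind(xs, f):
    # (2): xs >>= f = concat (map f xs)
    return list(chain.from_iterable(map(f, xs)))

# A function of type a -> [b]: a value together with its negation.
f = lambda x: [x, -x]
print(l_bind([1, 2], f))  # [1, -1, 2, -2]
```

`chain.from_iterable` plays the role of `concat`; applying `l_bind` to the empty list yields the empty list, matching rule (3) of the interaction-net encoding below.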
### 3.3.1 Adaptation to Interaction Nets using Nested Patterns
Instead of a generic type \( a \), we allow only symbolic natural numbers and lists in the interaction net setting. We use the same agents to represent lists as we did for the Writer monad.
(Figure: interaction rules for the List monad. Rules (1a)–(1d): \textit{ret} interacting with 0, \textit{S}, \textit{nil} or \textit{cons} yields a singleton list containing the value; rule (2): \textit{cons} interacting with \textit{>>=} applies the function to the elements via \textit{map} and flattens the results via \textit{cat}; rule (3): \textit{nil} interacting with \textit{>>=} propagates the empty list and discards the function.)
\(11 / 17\) Volume 38 (2010)
The rules for return are obviously very similar to the corresponding rules of the Writer monad. Instead of a pair of a value and an empty list, a singleton list is returned. The interaction rule for \( >>= \) behaves analogously to the function definition in Haskell. Rule (3) models the behavior of bind when connected to an empty list. In this case, the empty list is simply propagated and the agent \( \alpha \) is discarded.
In rule (2), the auxiliary agents map and cat are used. Their corresponding interaction rules are defined as follows:
(Figure: interaction rules for \textit{map} and \textit{cat}. \textit{map} interacting with \textit{cons} applies \(\alpha\) to the head and propagates itself to the tail; \textit{map} interacting with \textit{nil} yields \textit{nil}; \textit{cat} interacting with \textit{cons} appends the elements of the head list and continues with the tail; \textit{cat} interacting with \textit{nil} yields \textit{nil}.)
The interaction rules for map have a shape very similar to the ones for bind. The second argument of map (an arbitrary function represented by the agent \( \alpha \)) is connected to its auxiliary port. \( \alpha \) is applied to the first element of the list, and map is connected to the remainder of the list. The concatenation function, represented by cat, flattens a list of lists. The elements of each list are appended to a single list one by one.
**Proposition 3.3.1** Let \( R \) be the set of interaction rules consisting of the List monad, map and cat rules. The reduction relation induced by \( R \) is strongly confluent.
**Proof.** \( R \) is well-formed: First, no LHS is a subnet of another. Second, all LHSs can be added to a sequential set. Note that only two rules share the same active pair (the second and third rule of (cat)). However, these LHSs can be added to a sequential set as the respective nested agent (nil and cons) is connected to the same port. This satisfies the sequential set condition. Hence, by
Proposition 3.2.4, the reduction relation induced by \( R \) is strongly confluent.
In addition, we show that all three monads can be added to a single ruleset without losing strong confluence. However, it is important to rename the monadic operators (and auxiliary functions) of each monad: without a suitable type system (see Section 4.2), we are unable to distinguish between the monadic operators of different monads.
**Proposition 3.3.2** Let \( R \) be the set of interaction rules consisting of the Maybe, Writer and List monad including auxiliary agents, where the agents \( \text{ret} \) and \( >>= \) are renamed for each monad. The reduction relation induced by \( R \) is strongly confluent.
**Proof.** \( R \) is well-formed: No LHS is a subnet of another. In addition, all LHSs can be added to a sequential set: As shown before, the rules of the Writer and List monad can be added to a sequential set individually. The Maybe monad satisfies strong confluence as it consists of ordinary rules only. Furthermore, no two rules of different monads share the same active pair (as \( \text{ret} \) and \( >>= \) have been renamed). Therefore, \( R \) satisfies the sequential set condition. By Proposition 3.2.4, the reduction relation induced by \( R \) is strongly confluent.
Note that if the system consisting of these monads is added to other interaction rule sets, it is again required to verify well-formedness w.r.t. nested patterns for the resulting set of rules.
### 3.3.2 Correctness of the List Monad
We show the correctness of the List monad laws analogously to the previous examples. As for the Writer monad, we use arbitrary but fixed nets (denoted by dashed squares) in the reduction sequences showing the monad laws. Let \( a,b,bs \) be arbitrary nets such that \( f a = \text{Cons} \ b \ bs \). The following reduction shows law (1), return \( a >>= f = f a \):
(Figure: reduction sequence for law (1) of the List monad, ending in the net encoding \(\text{Cons}\ b\ bs\).)
The last step is achieved by multiple applications of the (cat) rules (one for each element in \( bs \)). Law (2), \((\text{Cons}\ a\ as >>= \text{return}) = \text{Cons}\ a\ as\), can be proved by the following reduction:
(Figure: reduction sequence for law (2) of the List monad.)
Again, the final step consists of several reductions. It follows from the definition of map, cat and return that cat (map return as) = as. In essence, every element in as is put in a singleton list. The resulting nested list is flattened again by cat.
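The equality cat (map return as) = as used in this step can be checked concretely with a small, self-contained Python sketch (our encoding; `cat` flattens one level of nesting):

```python
def ret(x):
    # return wraps a value in a singleton list.
    return [x]

def cat(xss):
    # Flatten a list of lists by one level.
    return [x for xs in xss for x in xs]

as_ = [0, 1, 2]
# Every element is boxed into a singleton list, then unboxed again.
assert cat(list(map(ret, as_))) == as_
```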
As with the previous monads, law (3) holds due to Proposition 3.1.2.
4 Challenges
In this section, we discuss the main obstacles towards realizing a general monad framework for interaction nets.
4.1 Interaction Rules with Generic Agents
Monads have a distinct higher-order character. The second argument of the bind operator is a function itself. Furthermore, the monad laws contain variables that represent arbitrary functions. It is clear that interaction nets need to support these features.
In general, interaction nets seem well-suited for higher-order computations: data and functions are treated alike, as both are represented by agents. However, patterns of interaction rules consist only of specific agents. There are no variable or generic agents that would allow a specific agent
to interact with any agent in the same way. The exceptions to this rule are the agents $\epsilon$ and $\delta$. These agents are frequently used in the literature (e.g., [10]) to model explicit deletion and duplication of agents. In fact, $\epsilon$ and $\delta$ interact in the same way with any agent: they delete (respectively, duplicate) the agent and propagate themselves to every auxiliary port:
(Figure: the generic rules for \(\epsilon\) and \(\delta\). \(\epsilon\) interacting with an arbitrary agent \(\alpha\) deletes \(\alpha\), propagating a copy of \(\epsilon\) to every auxiliary port; \(\delta\) interacting with \(\alpha\) duplicates \(\alpha\), propagating a copy of \(\delta\) to every auxiliary port.)
In the same sense, we used \(\alpha\) to represent an arbitrary agent in some of the monadic rules in this paper. While \(\epsilon\) and \(\delta\) are frequently used, there is currently little theory on rules with variable or generic agents; such rules are usually just stated. Furthermore, no existing interaction nets implementation supports such rules in a general way.
We argue that this deficiency needs to be addressed in order to achieve a general and abstract framework of monads in interaction nets. For example, introducing generic rules yields several new possibilities of rule overlaps, i.e., more than one interaction rule matching a given active pair. Rule overlaps are not allowed in interaction nets by definition, and are very likely to destroy the confluence property. Therefore, a precise semantics of generic interaction rules needs to be defined, including constraints on their usage.
### 4.2 Type Systems
The second obstacle towards a general monad framework concerns the typing of interaction nets. Most of the monadic interaction rules in this paper are only defined for symbolic natural numbers, i.e., when interacting with the agents $S$ or $0$. Clearly, these rules should be defined for a range of types to represent a general solution. In particular, monads are usually defined for abstract data types with type variables, such as $\text{Maybe } a$ or $\text{List } a$. Therefore, a suitable type system needs to support polymorphic types.
Several different type systems for interaction nets exist. For example, a basic system was defined by Lafont in the first paper on interaction nets [9]. The general idea behind it is to assign a type and a polarity (i.e., input or output) to every port of an agent. Fernandez extended Lafont’s approach by defining an intersection type system for interaction nets [2]. It features type variables (supporting a form of polymorphism) and means to construct more complicated types (intersection types, arrow types). This system offers two features that may be useful to type monadic interaction rules: type variables and arrow types to denote functional types. However, it lacks abstract types with type variables, which are needed to specify the example data structures above. We are currently investigating how to extend Fernandez’ type system with this feature.
The problem of type systems is connected to the problem of generic rules: Both deal with the
specification of interaction rules for a general range of agents. Obviously, type assignment to agents could prevent several cases of overlaps that may arise from the use of rules with generic agents. For example, just an assignment of polarity (as mentioned above) to the ports of every agent can eliminate overlaps between multiple generic rules. Further research is needed for a concrete model to support generic rules with an appropriate type system.
With regard to existing implementations, to the best of our knowledge no system based on interaction nets features the typing of agents and ports. Future work may include implementing such a type system in the programming language inets [6]. Currently, inets only supports external datatypes. These are values of a basic type (int, char, float, . . . ) that can be attached to agents (for details, see [3]). However, the agents themselves (and their ports) do not have a type.
## 5 Conclusion
In this paper, we presented the first steps of a new approach to handling side effects in interaction nets. This approach is based on monads, particularly on their usage in the functional language Haskell. We presented three different ad-hoc solutions for impure functions in interaction nets and showed that the monad laws hold for each of them. This is an important step towards realizing impure functions; the presented monads act as a proof of concept. Furthermore, we have shown a more general result regarding the monad laws in interaction nets in Proposition 3.1.2. This result is tied to our modelling approach of the return and bind operators, which captures the essence of monads in a natural and intuitive way. Other modelling approaches using different agents/rules are currently under investigation.
Monads were originally defined in category theory [11]. In this paper, we only considered the notion of a monad in the setting of functional programming. An approach that focuses more on category theory may offer a better level of abstraction for defining a monad framework in interaction nets, and will be investigated as part of future work.
The three monads are realized in a strikingly similar fashion: the same agents are used, and the rules work analogously. Our goal is to generalize these solutions to an abstract framework for handling side effects in interaction nets. In this paper, we discussed the two main obstacles towards such a framework: interaction rules with generic agents and a type system extension to appropriately specify monadic data structures and operators. Both theoretical results on and an implementation of these features are the subject of future work. Currently, we are working on a prototype implementation of generic interaction rules in the programming language inets [6] (including an underlying mechanism to perform I/O, such as the ccall interface in [7]) and an extension of Fernandez’ intersection type system for interaction nets [2].
Acknowledgements: We are grateful to the anonymous reviewers for their useful hints and suggestions.
### 5.1 Related Work
Extensions to interaction nets have been proposed in many directions. For example, nested pattern matching, an approach that allows more complex rule patterns, is developed in [4, 5]. Nested patterns relax the restriction of having only two agents in the left-hand side of a rule
while preserving the beneficial properties of interaction nets. Several monad rules in this paper make use of nested patterns. For example, rule (3) of the writer monad has a nested pattern.
Monads have been applied to various formalisms. A recent application outside of functional programming can be found in [8], where monads are used to structure mechanisms in interactive theorem provers. Several evaluators for interaction nets have been implemented, cf. for example [1, 12]. To the best of our knowledge, no system deals with impure functions in an appropriate way.
Bibliography
No Broadcast Abstraction Characterizes k-Set-Agreement in Message-Passing Systems (Extended Version)
Sylvain Gay, Achour Mostefaoui, Matthieu Perrin
To cite this version:
Sylvain Gay, Achour Mostefaoui, Matthieu Perrin. No Broadcast Abstraction Characterizes k-Set-Agreement in Message-Passing Systems (Extended Version). 2024. hal-04571653
HAL Id: hal-04571653
https://hal.science/hal-04571653
Preprint submitted on 8 May 2024
No Broadcast Abstraction Characterizes $k$-Set-Agreement in Message-Passing Systems (Extended Version)
Sylvain Gay*
École Normale Supérieure
sylvain.gay@ens.psl.eu
Achour Mostéfaoui
LS2N, Nantes Université
achour.mostefaoui@univ-nantes.fr
Matthieu Perrin
LS2N, Nantes Université
matthieu.perrin@univ-nantes.fr
Abstract
This paper explores the relationship between broadcast abstractions and the $k$-set agreement ($k$-SA) problem in crash-prone asynchronous distributed systems. It specifically investigates whether any broadcast abstraction is computationally equivalent to $k$-SA in message-passing systems.
A key contribution of the paper is the introduction of two new symmetry properties: compositionality and content-neutrality, inspired by the principle of network neutrality. Such clarity in definition is essential for this paper’s scope, as it aims not to characterize the computing power of a specific broadcast abstraction, but rather to demonstrate the nonexistence of a broadcast abstraction with certain characteristics. Hence, delineating the realm of “meaningful” broadcast abstractions becomes essential. The paper’s main contribution is the proof that no broadcast abstraction, which is both content-neutral and compositional, is computationally equivalent to $k$-set agreement when $1 < k < n$, in the crash-prone asynchronous message-passing model. To the best of our knowledge, this result represents the first instance of showing that a coordination problem cannot be expressed by an equivalent broadcast abstraction. It does not establish the absence of an implementation, but rather the absence of a specification that possesses certain properties.
Key-words: Agreement problem, Asynchronous system, Broadcast abstraction, Communication abstraction, Compositionality, Message-passing system, Network neutrality, Process crash, $k$-Set agreement, Wait-free model, Total order broadcast.
*This author was at LS2N, Nantes Université when this research was conducted.
## 1 Introduction

### 1.1 From Send/Receive to Communication Abstractions
This paper considers distributed systems consisting of a set of asynchronous processes prone to crash failures. These processes communicate by sending and receiving messages across an asynchronous network and must cooperate to achieve a common goal. What makes distributed computing challenging is that the dynamics of the underlying network on which the distributed application operates are beyond the programmer’s direct control. This necessitates treating the environment as a “hidden input” [23] and to “manage uncertainty” at runtime. To facilitate the design of advanced algorithms in this unpredictable setting, it is usual to define appropriate communication abstractions, that allow modularity and help mitigate uncertainty by restricting communication patterns that may occur at a higher abstraction level.
In crash-prone asynchronous distributed systems, a significant source of uncertainty stems from the divergent perceptions of the event set (i.e., message emissions and receptions) among different processes. Broadcast abstractions, which enable processes to transmit a message to all participants within the same operation, alleviate this issue by ensuring consistent and reliable communication across different nodes, thereby reducing the complexity of managing individual send/receive operations and enhancing fault tolerance by limiting the impact of node failures. Hence, message broadcasts (at least by correct processes) constitute a set of global events for which all correct processes eventually agree that they took place, thereby underlining their significance in the architecture of reliable distributed computing systems.
Another source of uncertainty arises from the disparate order in which different participants may receive messages, leading to varied perceptions of the global state of the system. Several communication abstractions have been defined by enforcing properties on the message delivery order. FIFO and Causal Ordering are examples of such properties at the heart of FIFO-broadcast and Causal-broadcast [3, 24]. These abstractions facilitate the construction of distributed objects, like causal memory in asynchronous message-passing systems [2].
A remark on vocabulary Throughout this paper, to avoid confusion, we distinguish between the terms “send” and “receive”, which denote low-level point-to-point communication primitives applied to individual messages, and “broadcast” and “deliver”, which describe the higher-level operations of broadcast abstractions (one-to-all). Consequently, in the context of this paper, the terms “receive” and “deliver” are not used interchangeably or as quasi-synonyms.
### 1.2 Capturing Coordination Problems with Broadcasts
This paper follows the quest of identifying broadcast abstractions that characterize the major fundamental problems in distributed computing. Specifically, we aim to determine broadcast abstractions that are computationally equivalent to particular synchronization problems in a crash-prone asynchronous message-passing system. This equivalence means that the broadcast abstraction can resolve the synchronization problem regardless of the
number of crash failures, and vice versa.
A well-known such characterization is the equivalence between *Total Order Broadcast* and the *consensus* problem. Consensus is a fundamental problem of distributed computing, that allows each process to propose a value, and ensures all correct processes decide on a common value. The defining properties of this problem are as follows: if a process invokes \(propose(v)\) and does not crash, it will decide on a value (termination); no two processes will decide on different values (agreement); and the decided value must have been proposed by a process (validity). One of the primary practical applications of consensus is to maintain consistency across replicated machines in a message-passing system. However, State Machine Replication (SMR) [26] typically builds on an intermediate communication abstraction, the well-known and powerful Total Order Broadcast abstraction [21]. This abstraction ensures that the order of message delivery is consistent across all processes.
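The safety part of this specification can be stated operationally as a predicate over a finished run. A minimal Python sketch (ours, not from the paper; the data layout is an illustrative assumption):

```python
# Sketch (ours): check the safety properties of consensus on a finished run.
# 'proposals' maps each process to its proposed value; 'decisions' maps each
# deciding process to its decided value.

def check_consensus(proposals, decisions):
    decided = set(decisions.values())
    agreement = len(decided) <= 1                    # no two different decisions
    validity = decided <= set(proposals.values())    # decided values were proposed
    return agreement and validity

assert check_consensus({1: "a", 2: "b", 3: "a"}, {1: "a", 3: "a"})
assert not check_consensus({1: "a", 2: "b"}, {1: "a", 2: "b"})   # violates agreement
assert not check_consensus({1: "a"}, {1: "c"})                   # violates validity
```

Termination, by contrast, is a liveness property and cannot be refuted on a finite trace.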
The consensus problem is famously unsolvable in an asynchronous distributed system, even under the assumption that at most one process may crash [11]. The same holds for Total Order Broadcast. Indeed, both abstractions are computationally equivalent [7]. In a sense, Total Order Broadcast precisely “characterizes” the essence of the consensus problem. In a similar vein, *Mutual Broadcast* was recently proposed as a broadcast abstraction equivalent to read/write atomic registers [9]. Moreover, *Pair Broadcast* characterizes the computational power of both test-and-set and consensus between two processes [10]. Such capturing broadcast abstractions are instrumental for understanding the fundamentals of distributed computing problems by reducing their complexity into a logical property about the order in which different processes perceive events occurring in the system.
### 1.3 On the \(k\)-set Agreement Side
Specifically, this paper delves into characterizing the \(k\)-set agreement problem (\(k\)-SA), a generalization of consensus introduced by S. Chaudhuri in [8]. In \(k\)-SA, the agreement property is weakened as follows: processes are allowed to collectively decide up to \(k\) different values. Here, \(k\) represents the maximum disagreement in the number of different values that can be decided. The smallest value \(k = 1\) corresponds to consensus. As \(k\) increases, the problem becomes less constrained and may become easier to solve. However, it still embodies numerous complexities and challenges of distributed systems. It remains insoluble in a crash-prone asynchronous system when \(k < t\), where \(t\) is the maximum number of processes in the system that may crash [5, 14, 25].
The exploration of a broadcast abstraction that characterizes \(k\)-SA was initiated in a work dedicated to the shared-memory model, which proposed \(k\)-*Bounded Order Broadcast* (\(k\)-BO Broadcast in short) [15]. The \(k\)-BO Broadcast abstraction limits the disagreement on the message reception order among processes. Specifically, its ordering property asserts that every set of \(k + 1\) messages contains two messages delivered in the same order by all processes. In the special case where \(k = 1\), it boils down to Total Order Broadcast.
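The $k$-BO ordering property can be checked mechanically on the per-process delivery orders of a finished run. The following Python sketch (ours, purely illustrative) assumes for simplicity that every process has delivered the same message set:

```python
# Sketch (ours): check the k-BO ordering property. 'orders' is a list of
# per-process delivery sequences over the same message set. The property:
# every set of k+1 messages contains two messages that all processes deliver
# in the same relative order. k = 1 boils down to Total Order Broadcast.

from itertools import combinations

def same_order_everywhere(m1, m2, orders):
    signs = {seq.index(m1) < seq.index(m2) for seq in orders}
    return len(signs) == 1          # all processes agree on m1 vs m2

def k_bo_holds(orders, k):
    msgs = orders[0]
    return all(
        any(same_order_everywhere(m1, m2, orders)
            for m1, m2 in combinations(group, 2))
        for group in combinations(msgs, k + 1)
    )

# Two processes disagree on (a, b) but agree on every pair involving c:
orders = [["a", "b", "c"], ["b", "a", "c"]]
assert k_bo_holds(orders, k=2)      # every 3 messages contain an agreed pair
assert not k_bo_holds(orders, k=1)  # total order fails on the pair {a, b}
```

This brute-force check is exponential in $k$ and only meant to make the ordering property precise, not to be efficient.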
In crash-prone asynchronous systems where processes additionally have access to a shared memory composed of atomic read/write registers, \(k\)-BO Broadcast is computationally equivalent to \(k\)-set agreement. However, this equivalence in shared memory does not inherently translate to message-passing systems. Indeed, although \(k\)-BO broadcast
can be used to solve $k$-set agreement on its own, it remains unproven whether it can be implemented using solely $k$-set agreement objects and send/receive operations. While consensus is strong enough to emulate atomic registers, $k$-set agreement, for $k > 1$, is unable to emulate shared memory. Indeed, it has been proved that on one hand, $k$-SA and a problem called the $k$-simultaneous-agreement are equivalent in shared memory systems [1], and on the other hand, the $k$-simultaneous-agreement problem is harder than $k$-SA in message-passing systems where a shared memory emulation is not possible [6]. A corollary of this paper is that the implementation of $k$-BO broadcast on top of $k$-SA is not feasible in message-passing systems.
**Problem Statement** This paper investigates the following question: *Does there exist a broadcast abstraction computationally equivalent to $k$-SA in crash-prone asynchronous message-passing systems?*
### 1.4 Contributions
**Symmetric broadcast abstractions.** A simplistic approach to the discussed question might propose the following ordering property: “At most $k$ distinct messages can be delivered as the first messages by the processes.” Indeed, on the one hand, a $k$-SA object can select the set of messages eligible for initial delivery; and on the other hand, $k$-SA can be trivially solved by broadcasting all proposed values and deciding on the first delivered ones, hence establishing equivalence. However, such a solution is “unsatisfactory”, as an instance of this broadcast abstraction would only be effective for solving $k$-SA once, before the ordering property becomes meaningless. Hence, an iterative resolution of $k$-SA would necessitate a different instance of the broadcast for each $k$-SA object to implement. This requirement contrasts with the traditional understanding of how processes interact with the communication layer in a message-passing system, where a broadcast abstraction serves as a system-wide service, shared among multiple algorithms for solving higher-level tasks. Each algorithm employs only a subset of the system’s messages.
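The one-shot reduction dismissed above can be sketched in a few lines of Python (ours, purely illustrative): a random choice over at most $k$ eligible values stands in for the hypothetical broadcast that restricts first deliveries, and each process decides its first delivered value.

```python
# Sketch (ours) of the "simplistic" one-shot reduction: if the broadcast
# guarantees that at most k distinct messages can be delivered first, then
# deciding one's first delivered value solves k-SA exactly once.

import random

def solve_k_sa_once(proposals, k, rng):
    values = sorted(set(proposals.values()))
    # The hypothetical broadcast limits first deliveries to at most k messages.
    eligible = rng.sample(values, min(k, len(values)))
    # Every process decides the first message it delivers.
    return {p: rng.choice(eligible) for p in proposals}

rng = random.Random(0)
decisions = solve_k_sa_once({1: "a", 2: "b", 3: "c", 4: "d"}, k=2, rng=rng)
assert len(set(decisions.values())) <= 2                   # at most k values decided
assert set(decisions.values()) <= {"a", "b", "c", "d"}     # validity
```

The sketch also makes the objection visible: the restriction on first deliveries is consumed by a single use, so a fresh instance would be needed for each $k$-SA object.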
Hence, before delving into our main problem statement, another important question needs to be clarified: *What constitutes a satisfactory solution?* A major contribution of this article is the introduction of two symmetry properties drawing inspiration from the principle of network neutrality: compositionality and content-neutrality. Compositionality ensures that a broadcast abstraction does not discriminate based on the application using it. This property is essential for constructing higher-level systems in a modular way, as a composition of independent components that share the same underlying broadcast abstraction. Content-neutrality ensures that the behavior of a broadcast abstraction does not depend on the content of the messages.
**An Inexistence Result.** Having defined what constitutes an appropriate broadcast abstraction, we are now equipped to address our problem statement, to which we provide a negative answer: we demonstrate that no broadcast abstraction, which is both content-neutral and compositional, is computationally equivalent to $k$-SA for $1 < k < n$.
To the best of our knowledge, this research presents the first instance where a coordination problem has been proven to lack an equivalent broadcast abstraction. This proof introduces an additional layer of abstraction compared to standard impossibility proofs.
Classical impossibility proofs typically demonstrate that a given specification cannot be implemented within a certain model. In contrast, our approach, which involves dealing with a specification that remains an unknown variable, presents new challenges. These challenges require more precise definitions of the computing model and the scheduler, along with a more careful analysis of arguments related to the expected behavior of the broadcast abstraction.
**Paper Organization.** The remainder of this paper is organized as follows. Section 2 delineates the crash-prone asynchronous message-passing distributed computing model pertinent to our results. Subsequently, Section 3 defines permissible broadcast abstractions, introducing the novel symmetry properties. Section 4 then establishes that no content-neutral and compositional broadcast abstraction is computationally equivalent to $k$-set agreement for $1 < k < n$. Finally, Section 5 concludes the paper.
## 2 Computing Model
The computing model is the classical asynchronous crash-prone message-passing model.
**Process Model.** The computing model consists of a set $\Pi$ of $n$ sequential processes denoted $p_1, \ldots, p_n$. Each process operates asynchronously, meaning it progresses at its own speed, which is arbitrary, unknown to other processes, and may vary through time. A process may halt prematurely (crash failure) but executes its local algorithm correctly until it possibly crashes. We do not assume any bound on the number of processes that may crash, hence $t = n - 1$. A process that crashes in a run is said to be *faulty*. Conversely, a process is called *correct* or *non-faulty* if it does not crash.
**Communication Model.** Communication between each pair of processes occurs through two uni-directional channels, one for each direction. Consequently, the network is complete: any process $p_s$ can directly send a message to any process $p_r$ (including itself). Each channel is reliable (free from loss, corruption, or message creation), not necessarily FIFO (First-In/First-Out), and asynchronous (messages have finite but unbounded transit times).
A process $p_s$ invokes the operation “*send m to p_r*” to send a message whose content is $m$ to $p_r$. The event “*receive m from p_s*” occurs at $p_r$ upon receiving a message whose content is $m$ from $p_s$. Although messages may share content, each sent message is unique. By a slight abuse of language, we say that “a process $p_i$ sends (resp. receives) a message $m$” when $p_i$ sends or receives a message whose content is $m$. The communication channels are governed by the following properties:
**SR-Validity.** If a process $p_r$ receives a message $m$ from $p_s$, then $p_s$ has indeed sent $m$ to $p_r$.
**SR-No-Duplication.** No process receives the same message more than once.
**SR-Termination.** If a process $p_s$ sends a message $m$ to a correct process $p_r$, then $p_r$ will eventually receive $m$ from $p_s$.
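Of these three properties, the two safety properties can be checked on a finite trace (SR-Termination is a liveness property and cannot be refuted on one). A minimal Python sketch (ours; the event encoding is an illustrative assumption):

```python
# Sketch (ours): check SR-Validity and SR-No-Duplication on a finished trace.
# Each sent message is unique, so events carry a message identifier. 'sent'
# and 'received' are lists of (sender, receiver, msg_id) events.

def check_channels(sent, received):
    sent_set = set(sent)
    # SR-Validity: every received message was actually sent on that channel.
    validity = all(ev in sent_set for ev in received)
    # SR-No-Duplication: no message identifier is received twice.
    ids = [msg_id for (_, _, msg_id) in received]
    no_duplication = len(ids) == len(set(ids))
    return validity and no_duplication

sent     = [(1, 2, "m1"), (1, 3, "m2")]
received = [(1, 2, "m1")]                                   # m2 still in transit: fine
assert check_channels(sent, received)
assert not check_channels(sent, received + [(1, 2, "m1")])  # duplication
assert not check_channels(sent, [(2, 1, "m3")])             # never sent
```
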
It is important to note that, due to asynchrony in processes and message delivery, no process can ascertain whether another process has crashed or is merely slow.
**Notation** The acronym $\text{CAMP}_n[\emptyset]$ denotes the described Crash-prone Asynchronous Message-Passing model without additional computational power. $\text{CAMP}_n[H]$ represents $\text{CAMP}_n[\emptyset]$ enhanced with the additional computational power denoted by $H$. For instance, $\text{CAMP}_n[k\text{-SA}]$ denotes the model $\text{CAMP}_n[\emptyset]$ in which processes have access to as many instances of the $k$-set agreement object as needed. Similarly, if $B$ represents a broadcast abstraction, then $\text{CAMP}_n[B]$ refers to the $\text{CAMP}_n[\emptyset]$ model in which processes can broadcast and deliver messages via the abstraction $B$.
**Execution** An execution $\alpha$ is a sequence of steps, each represented as a pair $\langle p_i : a \rangle$, where $p_i \in \Pi$ represents a process, and $a$ is an action occurring at $p_i$. These actions can be local computations, the invocation of primitives (such as message emissions), the triggering of local events (including message receptions), as well as invocations and responses of high-level operations as specified in the enriching hypothesis $H$. Examples of such high-level operations include proposing or deciding on a value in a $k$-SA object.
We define an execution $\alpha$ as being admitted by the model $\text{CAMP}_n[H]$ if it satisfies several criteria: it must adhere to the three properties of the communication channels, namely SR-Validity, SR-No-Duplication, and SR-Termination; it must conform to all properties specified by $H$ and the high-level abstractions it provides; and it must be well-formed with respect to the algorithm it executes, as delineated by the following definition.
**Definition 1 (Well-Formed Executions).** Consider $\mathcal{A}$, an algorithm that implements a high-level abstraction $A$ within the $\text{CAMP}_n[H]$ model. An execution $\alpha$ is deemed well-formed with respect to $\mathcal{A}$ if it fulfills the following conditions:
- Only processes labeled from $p_1$ to $p_n$ take actions in $\alpha$;
- A process only invokes an operation of $A$ after having returned from its previous invocations;
- The actions undertaken by any process between the invocation of an operation on $A$ and its corresponding response (if one exists), excluding local events (such as message receptions and deliveries), must align with the actions specified by $\mathcal{A}$.
## 3 Defining Admissible Broadcast Abstractions

### 3.1 Interface of Broadcast Abstractions
A broadcast abstraction, denoted $B$, enables processes to broadcast messages that are guaranteed to be delivered at least to all correct processes. Consequently, all broadcast abstractions share the same interface, comprising a single operation named broadcast and an event called deliver.
A process $p_i$ invokes the operation “$B.\text{broadcast}(m)$” to utilize $B$ for broadcasting a message whose content is $m$. This is referred to as $p_i$ $B$-broadcasting a message whose
content is $m$. Subsequently, the event \( B\text{.deliver }m\text{ from }p_i \) might be triggered at some processes $p_j$, leading us to say that $p_j$ $B$-delivers a message $m$ from $p_i$. Analogous to the send/receive interface, it is assumed that each broadcast message is unique, regardless of having identical content. However, for the sake of conciseness, we amalgamate a message and its content whenever the distinction is immaterial. The set of all messages that can be broadcast during an execution is denoted by $M$. The following properties must be verified by all broadcast abstractions.
**BC-Validity.** If a process $p_i$ $B$-delivers a message $m$ from $p_j$, then it is guaranteed that $p_j$ has previously $B$-broadcast $m$.
**BC-No-Duplication.** A process will not $B$-deliver the same message more than once.
**BC-Local-Termination.** If a correct process invokes $B$.broadcast($m$), it will eventually return from this invocation.
**BC-Global-CS-Termination.** If a correct process $B$-broadcasts a message $m$, then all correct processes will eventually $B$-deliver $m$.
The first two properties mentioned are classical safety properties and share the same definitions as their send/receive counterparts. The third property is a classical liveness property. It is important to note that the BC-GLOBAL-CS-TERMINATION property only applies to correct processes. (The abbreviation “CS”, standing for correct sender, emphasizes that this property is contingent on the sender’s correctness.) Consequently, if a process $p_i$ crashes during its execution of broadcast($m$), it is permissible for some processes to deliver $m$ while others do not, unless otherwise specified. This specification choice is intentionally made to allow for flexible definitions of liveness properties in broadcast abstractions.
In particular, the most basic broadcast abstraction that can be defined only verifies the four properties defined above. In the $\text{CAMP}_n[\emptyset]$ model, its implementation involves simply sending messages to all participants. For this reason, it is commonly referred to as Send-To-All Broadcast.
**Remark on Expressiveness** Set-Constrained-Delivery Broadcast (SCD Broadcast) [16] and its extension $k$-SCD Broadcast [15] are two examples of broadcast abstractions whose specifications slightly deviate from the interface proposed above. Indeed, these abstractions deliver messages not individually, but within unordered sets of messages, hence the designation. While it is easy to generalize the definitions and the proofs to accommodate this particularity, doing so would compromise readability. For the sake of clarity, we have chosen not to pursue this generalization.
**A local ordering property** When considered together, the BC-VALIDITY and BC-GLOBAL-CS-TERMINATION properties ensure that a step $\langle p_i : B$.broadcast($m$) $\rangle$ executed by a correct process $p_i$ is always followed by a step $\langle p_i : B$.deliver $m$ from $p_i$ $\rangle$. In a similar vein, the BC-LOCAL-TERMINATION property guarantees that the $B$-broadcasting step is consistently succeeded by $\langle p_i : \text{return from } B$.broadcast($m$) $\rangle$. However, there is no inherent order between the delivery of its own message $m$ by $p_i$, and $p_i$ returning
from its $B$.broadcast invocation. Once again, this specification choice is deliberately made to accommodate flexible definitions of broadcast abstractions. For instance, certain abstractions may require that $B$.broadcast returns immediately, or they may wait until the broadcast message has been delivered, while others may delegate the decision to the implementation. Nevertheless, it is occasionally beneficial to reason based on a fixed total order among the three events. Adopting the terminology suggested in [9], we augment all broadcast abstractions with a trait $B$.sync-broadcast($m$), defined as: $B$.broadcast($m$); wait($m$ has been $B$-delivered locally). For every message $m$ $B$-broadcast by each correct process $p_i$, the following three steps occur sequentially: $\langle p_i : B$.sync-broadcast($m$) $\rangle$, followed by $\langle p_i : B$.deliver $m$ from $p_i$ $\rangle$, and then $\langle p_i : \text{return from } B$.sync-broadcast($m$) $\rangle$.
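The sync-broadcast trait can be sketched as a thin wrapper around a broadcast primitive that blocks until the sender's own delivery callback has fired. Class, method, and callback names below are our own, not from the paper:

```python
import threading

class SyncBroadcast:
    """Sketch of B.sync-broadcast(m): B.broadcast(m), then
    wait(m has been B-delivered locally)."""
    def __init__(self, b_broadcast):
        self.b_broadcast = b_broadcast        # underlying B.broadcast
        self._events = {}                     # message -> local-delivery event

    def sync_broadcast(self, m):
        ev = self._events.setdefault(m, threading.Event())
        self.b_broadcast(m)
        ev.wait()                             # block until m is B-delivered locally

    def on_local_deliver(self, m):
        """To be invoked by the broadcast layer when m is delivered locally."""
        self._events.setdefault(m, threading.Event()).set()
```

With a loopback broadcast that delivers immediately, `sync_broadcast` returns right away, realizing the sequential pattern broadcast, local delivery, return described above.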
3.2 Symmetry Properties of Broadcast Abstractions
Broadcast abstractions can be characterized by additional predicates on the set of executions they admit. Typically, these predicates fall into two categories. On one hand, liveness predicates ensure message delivery in scenarios not covered by Send-To-All Broadcast. Examples of this include the definitions of Reliable Broadcast and Uniform Reliable Broadcast [13]. On the other hand, safety predicates concern the relative order in which processes deliver messages. Examples in this category are FIFO Broadcast, Causal Broadcast [3, 24], Mutual Broadcast [9], Pair Broadcast [10], $k$-Bounded Order Broadcast [15], and Total Order Broadcast [21].
As highlighted in the Introduction, not all predicates are equally appropriate for the design of a broadcast abstraction. In this section, we introduce two novel symmetry properties inspired by the broader principle of “network neutrality”. Network neutrality advocates, among other tenets, that network services should not discriminate based on the content, sender, or usage of the messages they transmit. While concerns regarding network neutrality often arise in discussions about non-functional aspects of message routing, they hold significant relevance for the functional design of broadcast abstractions. Within this framework, we interpret network neutrality to include two essential symmetry properties: Compositionality and Content Neutrality. These properties assert that the broadcast abstraction should impartially treat all messages, irrespective of their usage or content.
**Compositionality.** Building upon earlier concepts, one might propose characterizing iterated $k$-SA using an iterated version of the broadcast described in the Introduction. This approach, denoted by $k$-Stepped Broadcast, would be characterized by the following ordering property: “for each $a$, define $S_a$ as the set containing the $a^{th}$ message broadcast by each process; then there are at most $k$ messages $m \in S_a$ such that some process delivers $m$ before any other message in $S_a$.” Now, the ordering of messages within each $S_a$ set could determine the set of values decided on a sequence of $k$-SA objects, and conversely, thereby establishing equivalence.
However, since the ordering property only governs specific sets of messages, it imposes an overly precise communication pattern (lock-step pattern), severely limiting its utility for constructing modular higher-level systems. Indeed, a broadcast abstraction typically serves as a system-wide abstraction, manifesting as a single service that is shared among
multiple algorithms for solving higher-level tasks. Consider, for instance, a system that integrates two applications built upon the same service that provides this broadcast abstraction: the iterated $k$-SA algorithm described above and a messaging service utilizing only the Reliable Broadcast capabilities of $k$-Stepped Broadcast. Each application employs only a distinct subset of the system’s messages, and the messages used by the messaging service interfere with the communication pattern followed by the $k$-SA algorithm. Unless a shared global counter is used to track the number $a$ of broadcast messages, the applications cannot fully benefit from the offered ordering property. This limitation hinders their independent design and composition.
Compositionality is the property required for the implementation of composable algorithms or applications on top of a broadcast abstraction. Each higher-level construction uses only a subset of the messages broadcast at the lower level. Compositionality ensures that each of these message sets maintains the same ordering properties as those of the entire message set. This is achieved by requiring that the restriction of an admissible execution to any subset of its messages remains an admissible execution.
**Definition 2 (Compositionality).** A broadcast abstraction $B$ is compositional if, for all executions $\alpha$ admissible by $B$, and for any set of messages $M$, the restriction of $\alpha$ onto the messages of $M$ is also admissible by $B$.
To exemplify the **Compositionality** property, let us demonstrate that $k$-BO Broadcast is compositional. Indeed, its ordering property is defined by a predicate $P$ that must be satisfied by any set $S$ of messages. Specifically, $P(S)$ stipulates that if $S$ contains at least $k + 1$ messages, then at least two of these messages must be delivered in the same order by all processes. Consider an execution $\alpha$ admissible by $k$-BO Broadcast, with its set of sent messages denoted as $M_\alpha$. For any subset $M \subseteq M_\alpha$ of these messages, every subset $S$ of $M$ is also a subset of $M_\alpha$, ensuring $P(S)$ is satisfied, which is the condition imposed by compositionality. This logical framework can be applied to all broadcast abstractions defined by a predicate on the relative order of emission and delivery events, independent of the context of the complete execution. Notably, this encompasses all broadcast abstractions mentioned in the Introduction and, to the best of our knowledge, all broadcast abstractions currently described in the literature.
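To make the argument concrete, the $k$-BO predicate and the restriction operation can be checked mechanically. The event encoding (delivery sequences per process) and helper names below are ours; since $P$ is evaluated on arbitrary message sets, it holds on any restriction of an admissible execution:

```python
from itertools import combinations

def kbo_predicate(deliveries, S, k):
    """P(S) for k-BO Broadcast: if |S| > k, at least two messages of S are
    delivered in the same relative order by all processes delivering both."""
    if len(S) <= k:
        return True
    for a, b in combinations(S, 2):
        orders = {seq.index(a) < seq.index(b)
                  for seq in deliveries.values() if a in seq and b in seq}
        if len(orders) <= 1:        # all processes agree on the (a, b) order
            return True
    return False

def restrict(deliveries, M):
    """Restriction of an execution to the messages of M."""
    return {p: [m for m in seq if m in M] for p, seq in deliveries.items()}
```

An execution satisfying $P$ on every subset of its messages keeps satisfying it after restriction, which is exactly the compositionality argument above.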
Conversely, the limitations of compositionality can be highlighted by revisiting our initial counter-example involving $k$-Stepped Broadcast. Consider an execution $\alpha$ where two processes, $p_0$ and $p_1$, engage in the 1-Stepped-broadcasting of two messages each: $m_i$ and $m'_i$. In $\alpha$, $p_0$ delivers the messages $[m_0, m'_0, m_1, m'_1]$ in this order. Simultaneously, $p_1$ delivers the sequence $[m_0, m_1, m'_0, m'_1]$. Although both processes deliver $m_0$ before $m_1$ and $m'_0$ before $m'_1$, conforming to the 1-stepped predicate, the execution’s restriction to the subset $\{m'_0, m_1\}$ fails to maintain this order. This issue arises because the definition relies on the sequence number $a$ of the broadcast messages, which is only contextually relevant within the full scope of the execution and varies when subsets of messages are considered.
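This counter-example can be checked mechanically. With our own (hypothetical) encoding of delivery logs, the full execution has a single first-delivered message within each round set, but the restriction to $\{m'_0, m_1\}$ has two:

```python
def first_delivered(deliveries, S):
    """Messages of S that some process delivers before any other message of S
    (k-Stepped Broadcast allows at most k such messages per round set)."""
    firsts = set()
    for seq in deliveries.values():
        for m in seq:
            if m in S:
                firsts.add(m)
                break
    return firsts
```

Applied to the delivery orders of $p_0$ and $p_1$ above, each round set $S_a$ yields one first message, whereas $\{m'_0, m_1\}$ yields two, violating the 1-stepped predicate on the restricted execution.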
**Content Neutrality.** The second property asserts that the defining predicates of a broadcast abstraction should be applicable based solely on the occurrence of broadcast and delivery events during an execution, independent of the messages' content. Hence, if some messages are substituted by others within an execution, this should not affect the admissibility of the execution. Content neutrality thus stipulates that an admissible execution must remain admissible even when some of its messages are replaced.
**Definition 3 (Content-Neutrality).** A broadcast abstraction $B$ is content-neutral if, for all executions $\alpha$ admissible by $B$, and all injective functions $r$ on the set of messages, the execution obtained by replacing all messages $m$ by $r(m)$ in $\alpha$, is also admissible by $B$.
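Operationally, this means that applying any injective renaming to the messages of an execution leaves the broadcast/delivery pattern untouched. The event encoding below (triples of process, event kind, message) is our own illustration:

```python
def rename(execution, r):
    """Apply an injective message renaming r to an execution, modeled as a
    list of (process, kind, message) events, kind in {"bcast", "deliver"}."""
    return [(p, kind, r[m]) for (p, kind, m) in execution]
```

Because the renamed execution has exactly the same event structure, any content-neutral abstraction that admits the original must admit the renamed one.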
It is important to note that while all broadcast abstractions mentioned in the Introduction adhere to the CONTENT-NEUTRALITY property, this is not necessarily true for all broadcast abstractions found in the literature. For instance, Generic Broadcast [20] supposes that the messages it transmits encapsulate a command, i.e., an operation invocation on a replicated data structure implemented using the broadcast. In the vein of Generalized Paxos [18], processes only need to agree on a common delivery order for pairs of non-commuting commands, as executing commuting commands in different orders does not compromise the consistency of the implemented data structure. However, specifying such a broadcast necessitates differentiating between messages, which violates content neutrality.
Returning to the present paper, it would be straightforward to propose a broadcast abstraction equivalent to $k$-set agreement that is not content-neutral. For example, one could enforce an ordering property that only applies to messages of a special type $\text{sa}(ksa, v)$, where $ksa$ uniquely identifies a $k$-SA object and $v$ is a value proposed to $ksa$. This would require that, for each $ksa$, at most $k$ distinct messages of the form $\text{sa}(ksa, \cdot)$ are delivered first by any process. However, such an approach would not be conducive to understanding the essence of $k$-set agreement. In the following section, we focus exclusively on content-neutral broadcast abstractions.
4 On Capturing $k$-Set Agreement
In this section, we establish that no broadcast abstraction that is both content-neutral and compositional can be equivalent to $k$-set agreement in the model $\mathcal{CAMP}_n[\emptyset]$ when $1 < k < n$. It is evident that for $k = 1$, the problem boils down to consensus, which is characterized by Total Order Broadcast; conversely, for $k = n$, $n$-set agreement can be trivially solved without any communication, rendering it equivalent to Send-To-All Broadcast.
We begin by recalling the definition of $k$-set agreement in Section 4.1. The ensuing proof is structured as a *reductio ad absurdum*. We hypothesize the existence of a broadcast abstraction $B$ satisfying the aforementioned conditions. Two deterministic reduction algorithms are then considered: $\mathcal{A}$, which implements $k$-set agreement in the model $\mathcal{CAMP}_n[B]$, and $\mathcal{B}$, which implements $B$ in the model $\mathcal{CAMP}_n[k\text{-SA}]$. For any $N \in \mathbb{N}$, Section 4.2 constructs an execution $\alpha_{k,N,B,B}$ (as defined in Definition 4 and illustrated in Figure 1) of $\mathcal{B}$, wherein each process $B$-delivers $N$ of its own messages before any messages from other processes. Subsequently, in Section 4.3, we demonstrate that sufficiently large values of $N$ prevent $\mathcal{A}$ from solving $k$-set agreement, thereby leading to a contradiction.
Figure 1: Illustration of the adversarial execution $\alpha_{k,N,B,B}$ for $k = 3$ and $N = 2$, extending up to Line 7 of Algorithm 1. Within the $\text{CAMP}_{k+1}[k\text{-SA}]$ model, plain arrows signify sent and received messages, while white squares denote propositions on $k\text{-SA}$ objects, with their respective decided values indicated above. In the context of the $\text{CAMP}_{k+1}[B]$ model, simulated by Algorithm $B$, dotted arrows represent $B$-broadcast and $B$-delivered messages. Notably, the final $N$ messages of each process, enclosed in grey boxes, are incompatible with an implementation of $k$-set agreement.
4.1 Definition of $k$-Set Agreement
$k$-Set agreement, first introduced by S. Chaudhuri in [8] (refer to [22] for a comprehensive survey of $k$-set agreement in various contexts), was conceptualized to analyze the relationship between the maximum number of allowable process failures ($t$) and the feasible degree of agreement ($k$) among processes. Here, a lower $k$ value signifies a higher degree of agreement, with the ultimate agreement being $k = 1$, which corresponds to consensus.
The $k$-Set agreement problem (abbreviated as $k$-SA) is a one-shot agreement problem that equips processes with a single operation, denoted $\text{propose}()$. When a process $p_i$ invokes $\text{ksa}.\text{propose}(v_i)$ on a $k$-SA object $\text{ksa}$, it is said to “propose the value $v_i$ to $\text{ksa}$”. This operation yields a return value $v$, at which point the invoking process is described as “deciding $v$ on $\text{ksa}$”, and “$v$ becomes a decided value”. In other words, the steps $\langle p_i : \text{return } w \text{ from } \text{ksa}.\text{propose}(v) \rangle$ and $\langle p_i : \text{ksa.decide}(w) \rangle$ are interpreted as synonymous. It is a standard assumption that each process is limited to a single invocation of $\text{propose}()$ on any given $k$-SA object, ensuring the problem’s one-shot nature.
$k$-Set agreement is defined by the following properties.
$k$-SA-Validity. If a process decides a value $v$, then $v$ was proposed by some process.
$k$-SA-Agreement. No more than $k$ distinct values are decided upon by the processes.
$k$-SA-Termination. Every non-faulty process that invokes $\text{propose}()$ eventually decides.
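For intuition, when $k = n$ these three properties are met without any communication: each process decides its own proposal, and at most $n$ distinct values can ever be decided. The sketch below (class and field names are ours) illustrates this degenerate case:

```python
class TrivialNSetAgreement:
    """n-set agreement for k = n: propose() returns the caller's own value.
    Validity: the decided value was proposed. Agreement: with n processes,
    at most n distinct values are decided. Termination: immediate."""
    def __init__(self):
        self._proposed = {}      # process id -> proposed value (one-shot)

    def propose(self, pid, value):
        assert pid not in self._proposed, "one-shot: one propose() per process"
        self._proposed[pid] = value
        return value             # decide own value, no communication needed
```

This is why the theorem below only concerns $1 < k < n$: the extreme cases are already captured by Total Order Broadcast and Send-To-All Broadcast respectively.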
4.2 Definition of the adversarial scheduler
For brevity in this subsection, we assume $k > 1$ and $N > 0$. Additionally, we postulate the existence of an algorithm $\mathcal{B}$ that implements a certain broadcast abstraction $B$ within the model $\text{CAMP}_{k+1}[k\text{-SA}]$. The argument is generalized to the case $n > k + 1$ in the proof of the main theorem, by observing that the processes $p_j$, for $j > k + 1$, may fail at the beginning of the execution. The adversarial execution $\alpha_{k,N,B,B}$ is constructed by an adversarial scheduler that follows the procedure outlined in Algorithm 1.
Algorithm 1: Adversarial scheduler used by Definition 4
The algorithm begins with a sequential execution of all processes, from $p_1$ to $p_{k+1}$. During this phase, each process $p_i$ repeatedly calls $B\text{.sync-broadcast}$(SYNCH) until it has $B$-delivered $N$ of its own messages. This part of the execution remains indistinguishable to $p_i$ from an execution $\gamma_{k,N,B,B,i}$ in which the other processes $p_j$ would have crashed before the local delivery of their own $N$ messages. To achieve this, processes decide on their own value on $k$-SA objects whenever possible, and the transmission of their messages to other processes is deferred by the scheduler until the end of this phase. However, a complication arises when all processes propose a value on the same $k$-SA object $ksa$. In such scenarios, $p_{k+1}$ is compelled to decide on the value proposed by $p_k$ to maintain the $k$-SA-AGREEMENT property. This decision renders $p_{k+1}$’s execution
distinguishable from a scenario where $p_k$ had initially crashed, allowing $p_{k+1}$ to await $p_k$’s message. As a result, all messages sent by $p_k$ to $p_{k+1}$ are received by $p_{k+1}$ (lines 22-24), and the messages that $p_k$ $B$-broadcast before this juncture are excluded from its count of $N$ messages.
Subsequently, in a later phase of the algorithm, all processes receive all messages that were sent to them in the initial stage but have yet to be received, as delineated in Line 26. Algorithm 1 concludes by returning the execution halted at this juncture. Notably, at this point of termination, not all messages that have been $B$-broadcast are necessarily $B$-delivered by every process. However, this does not pose a problem for our analysis: the counterexample required for the proof in the following section involves a safety property that is already violated in the execution prefix returned by the algorithm. The scheduler maintains the following main variables:
- $\alpha$, which is initially an empty sequence $\varepsilon$, is the execution currently being constructed.
- $i$, which stores the identifier of the process currently under execution.
- $sent$, initially set to $\emptyset$, is a set of triplets. A triplet $\langle m, i, j \rangle$ is included in $sent$ when a message $m$ has been sent by process $p_i$ to process $p_j$, but has not yet been received by $p_j$.
- $decided[ksa][j]$ is a two-dimensional associative array. The keys $ksa$ correspond to $k$-SA objects used in $B$, and $j$ represents process identifiers. The values are either potential values that can be proposed to $k$-SA objects in $B$, or a special value $\bot$ that cannot be proposed. For each $ksa$ and $j$, $decided[ksa][j]$ is initially set to $\bot$. It is later updated to value $w$ when the process $p_j$ decides on $w$ for $ksa$.
- local_del tracks the number of messages that process $p_i$ $B$-delivers from itself, while avoiding communication with other processes. Under normal conditions, local_del cycles through values from 0 to $N - 1$ for each process $p_i$. However, if communication between processes $p_k$ and $p_{k+1}$ is inevitable during the execution of a $B.sync$-$broadcast(m)$ operation by $p_k$, local_del is assigned the value $-1$. This assignment signifies that local_del will be reset to 0 once $p_k$ completes its $B$-broadcast operation. Consequently, this setup enables $p_k$ to $B$-deliver $N$ of its own messages (excluding $m$) without engaging in communication.
- $step$ identifies the subsequent step to be executed by Process $p_i$, represented either by the pair $\langle p_i, action \rangle$ or by the special value $\bot$ if the step is yet to be determined. In this context, there are two primary scenarios to consider. Firstly, if $p_i$ has initiated the $B.$sync-$broadcast$ operation but has not yet completed this invocation, then the deterministic algorithm $B$ is responsible for defining the subsequent step that $p_i$ must execute (Line 8). This step is crucial to fulfilling the BC-LOCAL-TERMINATION property of $B$ within the configuration $C(\alpha)$, which delineates its local state after the execution $\alpha$. In the second scenario, if the aforementioned condition does not hold, $p_i$ proceeds to $B$-broadcast a new message, specifically SYNCH.
Definition 4 now outlines the adversarial executions $\alpha_{k,N,B,B}$, $\beta_{k,N,B,B}$, and $\gamma_{k,N,B,B,i}$. Our subsequent objective is to demonstrate that $\alpha_{k,N,B,B}$ qualifies as an admissible execution of the $\mathcal{CAMP}_{k+1}[k\text{-SA}]$ model. It is required to verify that $\alpha_{k,N,B,B}$ is well-formed (as per Lemma 6), upholds the three defining properties of $k$-set agreement: $k$-SA-VALIDITY (Lemma 1), $k$-SA-AGREEMENT (Lemma 2), and $k$-SA-TERMINATION (Lemma 3), and ensures compliance with the three properties of send/receive communication: SR-VALIDITY (Lemma 4), SR-NO-DUPLICATION (Lemma 5), and SR-TERMINATION (Lemma 8).
**Definition 4 (Adversarial execution).** The following executions are defined:
- $\alpha_{k,N,B,B}$ is the execution produced by the procedure
`adversarial_scheduler(k, N, B, B)`, as delineated in Algorithm 1.
- $\beta_{k,N,B,B}$ is the subsequence of $\alpha_{k,N,B,B}$ encompassing only those steps that involve events associated with $B$. This includes the invocations of, or the responses from, the $B$-broadcast operation, as well as any $B$-delivery event.
- For each $i \in 1, ..., k+1$, $\gamma_{k,N,B,B,i}$ is derived from $\alpha_{k,N,B,B}$ by limiting it to, on the one hand, the steps of process $p_i$ occurring strictly before Line 26; and on the other hand, the steps performed by $p_k$ that are succeeded by a reset of local_del on Line 25.
In these executions, all processes $p_j \notin \{p_i, p_k\}$ are assumed to have crashed initially. Furthermore, $p_k$ is treated as having crashed before executing its first step in $\alpha_{k,N,B,B}$ that is absent in $\gamma_{k,N,B,B,i}$, should such a step be present.
**Lemma 1 (k-SA-Validity).** In the executions $\alpha_{k,N,B,B}$ and $\gamma_{k,N,B,B,i}$, if a process decides on a value $w$ on a $k$-SA object $ksa$, then the value $w$ was proposed by some process on $ksa$.
*Proof.* Assume that $\alpha_{k,N,B,B}$ includes a step $\langle p_j : ksa.decide(w) \rangle$. This step originates from Line 20, following $p_j$’s invocation of $ksa.propose(v)$. Consequently, $w = decided[ksa][j]$, which was set either on Line 18 or on Line 19.
- If $decided[ksa][j]$ was assigned on Line 19, then $w = v$. The step $\langle p_j : ksa.propose(v) \rangle$ would have been included in $\alpha_{k,N,B,B}$ at Line 9.
- Otherwise, $w = decided[ksa][k]$, and per Line 17, $i = k + 1$. In this case, $decided[ksa][k] \neq \perp$ was previously set by $p_k$ in $\alpha_{k,N,B,B}$ on Line 19, following the inclusion of the step $\langle p_k : ksa.propose(w) \rangle$ in $\alpha_{k,N,B,B}$.
This sequence of events establishes the property for $\alpha_{k,N,B,B}$. Consider now the case of $\gamma_{k,N,B,B,i}$ containing a step $\langle p_j : ksa.decide(w) \rangle$, following the same case disjunction as before. In the case of Line 19, the property holds because $\gamma_{k,N,B,B,i}$ and $\alpha_{k,N,B,B}$ both encompass identical propose and decide steps executed by $p_j$. In the second case, the fulfillment of the condition at Line 21 for $p_k$ leads to the subsequent reset of local_del on Line 25. Therefore, in both cases, the step $\langle p_k : ksa.propose(w) \rangle$ is also included in $\gamma_{k,N,B,B,i}$.
**Lemma 2 (k-SA-Agreement).** In both $\alpha_{k,N,B,B}$ and $\gamma_{k,N,B,B,i}$ executions, no more than $k$ distinct values are decided on any given $k$-SA object.
Proof. By the definition of \( \gamma_{k,N,B,B,i} \), at most two processes, specifically \( p_i \) and \( p_k \), are capable of deciding a value in \( \gamma_{k,N,B,B,i} \), satisfying the condition as \( 2 \leq k \).
Assume that in \( \alpha_{k,N,B,B} \), \( k + 1 \) distinct values are decided on some $k$-SA object $ksa$. Given that processes execute sequentially, processes \( p_1 \) through \( p_k \) would have already recorded their values in \( decided[ksa][\cdot] \) before \( p_{k+1} \) proposes its value. Consequently, the condition at Line 17 would be met, leading to \( p_{k+1} \) deciding the same value as \( p_k \), thus resulting in a contradiction. \( \square \)
Lemma 3 (k-SA-Termination). In the executions \( \alpha_{k,N,B,B} \) and \( \gamma_{k,N,B,B,i} \), if a process proposes a value on a k-SA object ksa, then this process will also decide a value on ksa.
Proof. Suppose that \( \alpha_{k,N,B,B} \) includes a step \( (p_j : \text{ksa.propose}(v)) \). This step was introduced on Line 9. Subsequently, the condition at Line 16 is satisfied, leading to the inclusion of a step \( (p_j : \text{ksa.decide}(w)) \) in \( \alpha_{k,N,B,B} \) at Line 20. This confirms the lemma for \( \alpha_{k,N,B,B} \).
Now, assume \( \gamma_{k,N,B,B,i} \) contains a step \( (p_j : \text{ksa.propose}(v)) \). Here, \( j \) can only be either \( i \) or \( k \).
- If \( j = i \), then \( \gamma_{k,N,B,B,i} \) includes the same step \( (p_j : \text{ksa.decide}(w)) \) as found in \( \alpha_{k,N,B,B} \).
- If \( j = k \), it is important to note that the steps \( (p_j : \text{ksa.propose}(v)) \) (at Line 9) and \( (p_j : \text{ksa.decide}(w)) \) (at Line 20) cannot be separated by a reset of local_del on Line 25. Therefore, if the proposal step exists in \( \gamma_{k,N,B,B,i} \), the decision step must also be present.
In both scenarios, the lemma’s condition is satisfied in \( \gamma_{k,N,B,B,i} \), thus completing the proof. \( \square \)
Lemma 4 (SR-Validity). In the executions \( \alpha_{k,N,B,B} \) and \( \gamma_{k,N,B,B,i} \), if a process \( p_r \) receives a message \( m \) from process \( p_s \), then process \( p_s \) has indeed sent \( m \) to \( p_r \).
Proof. Assume that \( \alpha_{k,N,B,B} \) includes a step \( (p_r : \text{receive } m \text{ from } p_s) \). This step is either introduced on Line 11 following a step \( (p_s : \text{send } m \text{ to } p_r) \) where \( r = s \), or on Line 23 or Line 26 when \( (m,s,r) \in \text{sent} \). The triplet \( (m,s,r) \) is added to \( \text{sent} \) only on Line 13, implying that \( (p_s : \text{send } m \text{ to } p_r) \) was previously included in \( \alpha_{k,N,B,B} \) on Line 9. This confirms the lemma for \( \alpha_{k,N,B,B} \).
Now, consider a reception step in \( \gamma_{k,N,B,B,i} \). Given the previous argument, \( \alpha_{k,N,B,B} \) must contain a corresponding emission step. Since reception steps from Line 26 are not part of \( \gamma_{k,N,B,B,i} \), there are two possible scenarios:
- If the reception step is added to \( \gamma_{k,N,B,B,i} \) on Line 11, then the preceding emission step is also included in \( \gamma_{k,N,B,B,i} \).
- If the reception step is added to \( \gamma_{k,N,B,B,i} \) on Line 23, the sender is \( p_k \), and local_del was reset on Line 25 subsequently. Therefore, the emission step is also present in \( \gamma_{k,N,B,B,i} \).
Both cases confirm the lemma’s condition on \( \gamma_{k,N,B,B,i} \), thus completing the proof. \( \square \)
Lemma 5 (SR-No-Duplication). In both $\alpha_{k,N,B,B}$ and $\gamma_{k,N,B,B,i}$ executions, each message is received at most once.
Proof. The property for $\alpha_{k,N,B,B}$ follows from the message reception mechanics: a message can only be received on Line 11, in which case it is not added to sent and hence never received again; on Line 23, followed by its removal from sent; or exactly once on Line 26, owing to sent’s set semantics. Since $\gamma_{k,N,B,B,i}$ comprises only a subset of $\alpha_{k,N,B,B}$’s reception events, the lemma holds for $\gamma_{k,N,B,B,i}$ as well. □
Lemma 6 (Well-Formed Executions). $\alpha_{k,N,B,B}$ and $\gamma_{k,N,B,B,i}$ are well-formed executions of $\text{CAMP}_{k+1}[k\text{-SA}]$ with respect to $B$.
Proof. To validate the property for $\alpha_{k,N,B,B}$, we observe that the participation of only processes $p_1$ to $p_{k+1}$ stems from (1) the loop bounds defined on Line 3, and (2) the SR-VALIDITY property and the correctness of $B$ for the receiving processes on Line 26. A process initiates the operation $B.broadcast$ either at the start of its execution on Line 4, or immediately after returning from its previous invocation, as indicated on Lines 6 and 7. This ensures adherence to the required pattern of alternating invocations and responses. Furthermore, the sequence of steps a process follows between its invocations and responses is consistent with $B$, as defined on Line 8.
As for $\gamma_{k,N,B,B,i}$, the property comes from the fact that for all processes $p_j$, the sequence of steps taken by $p_j$ in $\gamma_{k,N,B,B,i}$ is a prefix of the sequence of steps taken by $p_j$ in $\alpha_{k,N,B,B}$. □
Lemma 7 (Termination of Algorithm 1). The execution $\alpha_{k,N,B,B}$ is finite.
Proof. Assume for contradiction that $\alpha_{k,N,B,B}$ contains an infinite number of steps. Given that Algorithm 1 includes no recursion and only one while loop, there exists some $i \in \{1, ..., k+1\}$ engaged in an infinite loop starting at Line 5 with local_del $< N$ remaining true indefinitely.
By Lemmas 1-5, $\gamma_{k,N,B,B,i}$ satisfies all the conditions required for an admissible execution, except SR-Termination. Let us establish that $\gamma_{k,N,B,B,i}$ also verifies SR-Termination:
- For $i < k$, $\gamma_{k,N,B,B,i}$ contains only messages sent by $p_i$ as the $i^{th}$ iteration does not terminate. Process $p_i$ receives its own messages on Line 11, and others are not required to receive them as they have crashed.
- For $i = k$, similar to the previous case, $\gamma_{k,N,B,B,i}$ includes only messages by $p_i$ by definition of $\gamma_{k,N,B,B,i}$. Message reception follows the same logic as above.
- For $i = k + 1$, note that $p_k$ is considered faulty in $\gamma_{k,N,B,B,i}$ due to (1) taking a finite number of steps in $\alpha_{k,N,B,B}$, since $p_i$ is executed after $p_k$’s last step, and (2) the condition local_del $< N$ only becoming false after Line 15, which is preceded by a step $\langle p_k : B.deliver\ m \text{ from } p_k \rangle$ that belongs to $\alpha_{k,N,B,B}$ but not to $\gamma_{k,N,B,B,i}$. Therefore, it suffices to show that $p_i$ receives all messages directed to it. Only $p_k$ and $p_i$ send messages in $\gamma_{k,N,B,B,i}$. Process $p_i$ receives its own messages on Line 11, and all messages sent by $p_k$ to $p_i$ in $\gamma_{k,N,B,B,i}$ are sent prior to the reset of local_del, hence they are received by $p_i$ on Line 23.
Therefore, $\gamma_{k,N,B,B,i}$ is an execution admitted by the model $\text{CAMP}_{k+1}[k\text{-SA}]$, in which $p_i$ takes an infinite number of steps. By correctness of $\mathcal{B}$ and the BC-GLOBAL-CS-TERMINATION property of $B$, all messages $B$-broadcast by $p_i$ in $\gamma_{k,N,B,B,i}$ must eventually be $B$-delivered by $p_i$ in $\gamma_{k,N,B,B,i}$. Moreover, since $\gamma_{k,N,B,B,i}$ and $\alpha_{k,N,B,B}$ contain the same steps of $p_i$, all messages $B$-broadcast by $p_i$ in $\alpha_{k,N,B,B}$ are eventually $B$-delivered by $p_i$ in $\alpha_{k,N,B,B}$. Since $p_i$ immediately $B$-broadcasts a new message after returning from its previous $B$.sync-broadcast invocation (Lines 6-7), $p_i$ $B$-delivers an infinite number of messages from itself, and repeatedly increments local_del on Line 15. As local_del is bounded by $N$, it must be reset on Line 25 infinitely often, following proposals to $k$-SA objects.
Let $K$ be the set of $k$-SA objects such that $p_i$ executes Line 25 after proposing a value to them. Given the one-time proposal limit per $k$-SA object, $K$ is infinite. Based on Line 21, $i = k$, and $\text{decided}[ksa][1] \neq \bot$ for all $ksa \in K$. However, $\text{decided}[ksa][1]$ is set during the first iteration of the loop, for an infinite number of distinct $k$-SA objects, which indicates that the first iteration does not terminate. This is a contradiction because (1) $k > 1$, so the first and the $k^{th}$ iterations are distinct, and (2) the $k^{th}$ iteration of the loop did start, which requires the first iteration to have terminated, since $p_k$ takes (an infinite number of) steps in $\alpha_{k,N,B,B}$. This contradiction implies that $\alpha_{k,N,B,B}$ must be finite, completing the proof.
Lemma 8 (SR-Termination). In $\alpha_{k,N,B,B}$, if a process $p_s$ sends a message $m$ to a correct process $p_r$, then $p_r$ will eventually receive $m$ from $p_s$.
Proof. Consider a message $m$ sent by $p_s$ to $p_r$ in $\alpha_{k,N,B,B}$. A step $\langle p_s : \text{send } m \text{ to } p_r \rangle$ is recorded in $\alpha_{k,N,B,B}$ at Line 9. If $s = r$, then a step $\langle p_r : \text{receive } m \text{ from } p_s \rangle$ is subsequently appended to $\alpha_{k,N,B,B}$ at Line 11. In contrast, if $s \neq r$, $\langle m, s, r \rangle$ is added to sent at Line 13. As established in Lemma 7, $\alpha_{k,N,B,B}$ is finite. If $\langle m, s, r \rangle$ remains in sent at the conclusion of the execution, then a step $\langle p_r : \text{receive } m \text{ from } p_s \rangle$ is appended to $\alpha_{k,N,B,B}$ at Line 26. Conversely, if $\langle m, s, r \rangle$ is not present in sent, it implies that it was removed at Line 24 subsequent to appending a step $\langle p_r : \text{receive } m \text{ from } p_s \rangle$ to $\alpha_{k,N,B,B}$ at Line 23. Therefore, in every case, $p_r$ receives $m$ from $p_s$.
4.3 $N$-Solo Executions and the Contradiction
Definition 5 ($N$-solo executions). Let $\beta$ be an execution of the model $\text{CAMP}_n[B]$, and let $N \in \mathbb{N}$. We say that $\beta$ is $N$-solo if, for each process $p_i$, there exist $N$ messages $m_{i,1}, \ldots, m_{i,N}$ $B$-broadcast by $p_i$ such that, in $\beta$, for all pairs of distinct processes $p_i$ and $p_j$, $p_i$ $B$-delivers all its own messages $m_{i,1}, \ldots, m_{i,N}$ before $B$-delivering any of $p_j$’s messages $m_{j,1}, \ldots, m_{j,N}$.
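Definition 5 can be phrased as a small check on delivery logs. The encoding below (per-process lists of designated messages and delivery orders) is our own illustration of the definition:

```python
def is_n_solo(solo_msgs, deliveries):
    """solo_msgs[p]: the N designated messages B-broadcast by process p;
    deliveries[p]: p's B-delivery order. Returns True iff each process
    delivers all of its own designated messages before any other
    process's designated messages."""
    for p, seq in deliveries.items():
        last_own = max(seq.index(m) for m in solo_msgs[p])
        for q, msgs in solo_msgs.items():
            if q == p:
                continue
            for m in msgs:
                if m in seq and seq.index(m) < last_own:
                    return False
    return True
```

The proof of Lemma 9 below exploits exactly this pattern: in an $N$-solo execution, each process can run as if all others had crashed until it has delivered its own $N$ messages.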
Lemma 9. For all $k > 1$, and for every content-neutral and compositional broadcast abstraction $B$, if there exists an algorithm $\mathcal{A}$ that solves $k$-SA in the model $\text{CAMP}_{k+1}[B]$, then there exists an integer $N > 0$ such that $B$ does not allow any $N$-solo execution.
Proof. Assume $B$ is a broadcast abstraction and $\mathcal{A}$ is an algorithm solving $k$-SA in the model $\mathcal{CAMP}_{k+1}[B]$. It is noteworthy that $\mathcal{A}$ can be transformed into an alternative algorithm, $\mathcal{A}'$, which also solves $k$-SA in the same model but without relying on the point-to-point primitives $\text{send}$ and $\text{receive}$. This transformation is feasible because the $\text{send}$ and $\text{receive}$ primitives can be trivially emulated using $B$. Moreover, the correctness of $\mathcal{A}'$ results from the compositionality of $B$: the executions of $\mathcal{A}'$, when projected onto the set of messages shared with $\mathcal{A}$ (excluding those utilized solely for simulating send/receive in $\mathcal{A}'$), are admitted by $\mathcal{CAMP}_{k+1}[B]$, thereby yielding identical results in $\mathcal{A}$ and $\mathcal{A}'$.

[1] Unlike previous lemmas, this property is not proven for $\gamma_{k,N,B,B,i}$ in the general case.
Consider an execution $\alpha_i$ where a process $p_i \in \Pi$ proposes $i$ to a $k$-SA object using $\mathcal{A}'$, while all other processes crash before taking any step. Due to the $k$-SA-TERMINATION property of the $k$-SA object, $p_i$ eventually decides on a value. The $k$-SA-VALIDITY property ensures this value is $i$. Denote by $m_{i,1}, ..., m_{i,N_i}$ the sequence of messages $p_i$ $B$-delivers in $\alpha_i$ prior to its decision.
Let $N = \max\{1, N_1, ..., N_{k+1}\}$, and suppose $B$ admits an $N$-solo execution $\beta$. Construct $\gamma$ as the sub-execution of $\beta$ containing, for each $p_i$, exactly $N_i$ of the $N$ messages $B$-broadcast by $p_i$, among those satisfying the defining property of $N$-solo executions. Due to the BC-COMPOSITIONALITY property of $B$, $\gamma$ is an execution admitted by $B$, where each process $p_i$ $B$-delivers its $N_i$ messages before any message from other processes. Now, define $\delta$ from $\gamma$ by replacing each process $p_i$'s $N_i$ messages with the messages $m_{i,1}, ..., m_{i,N_i}$ from $\alpha_i$. The BC-CONTENT-NEUTRALITY property of $B$ ensures that $\delta$ is admitted by $B$. For each process $p_i$, $\alpha_i$ is indistinguishable from $\delta$, as both executions involve identical $B$-broadcast and $B$-delivery steps for $p_i$. Hence, when $\mathcal{A}'$ is executed on $\delta$, each $p_i$ decides on its own value $i$, leading to $k+1$ distinct decisions. This contradicts the $k$-SA-AGREEMENT property of $k$-SA. Therefore, such a $\beta$ cannot exist, implying that $B$ does not allow any $N$-solo execution. \qed
**Lemma 10.** For all $k > 1$ and $N > 0$, if there exists an algorithm $\mathcal{B}$ that implements some broadcast abstraction $B$ in the model $\mathcal{CAMP}_{k+1}[k$-$SA]$, then $B$ admits an $N$-solo execution.
**Proof.** Assume $k > 1$ and $N > 0$, and suppose an algorithm $\mathcal{B}$ implements a broadcast abstraction $B$ in $\mathcal{CAMP}_{k+1}[k$-$SA]$. According to Lemmas 1-8, $\alpha_{k,N,B,B}$ constitutes an admissible $\mathcal{CAMP}_{k+1}[k$-$SA]$ execution, thus by $\mathcal{B}$'s correctness, $\beta_{k,N,B,B}$ is admitted by $B$. We aim to demonstrate that $\beta_{k,N,B,B}$ is $N$-solo. For each $i \in \{1, ..., k+1\}$, the loop starting on Line 5 halts by Lemma 7, but only after $\mathit{local\_del}$ has been incremented at least $N$ times on Line 15, without having been reset on Line 25. Each of these increments corresponds to the $B$-delivery, by $p_i$, of its own message $m_{i,\mathit{local\_del}}$. We now prove that these $(k+1) \times N$ messages satisfy the criteria in Definition 5.
Consider two distinct processes $p_i$ and $p_j$, assuming without loss of generality that $i < j$. Due to the sequential nature of the loop on Line 3, $p_i$ $B$-delivers all its own messages before $p_j$ even begins its $B$-broadcasts. Consequently, by the BC-VALIDITY property of $B$, $p_i$ completes delivering its messages before any of $p_j$'s. Lemmas 1-6 confirm that $\gamma_{k,N,B,B,j}$ upholds all safety properties of send/receive and $k$-SA objects, and is well-formed, indicating that $\gamma_{k,N,B,B,j}$ is a prefix of an execution of $\mathcal{CAMP}_{k+1}[k$-$SA]$. In $\gamma_{k,N,B,B,j}$, $p_i$ does not $B$-broadcast its messages $m_{i,1}, ..., m_{i,N}$, hence $p_j$ does not $B$-deliver these messages, as ensured by $\mathcal{B}$'s correctness and the BC-VALIDITY of $B$. Since $\alpha_{k,N,B,B}$ and $\gamma_{k,N,B,B,j}$ share identical $p_j$ steps before Line 26, in $\alpha_{k,N,B,B}$, $p_j$ $B$-delivers all its own messages before Line 26, without $B$-delivering any of the messages of $p_i$. Consequently, $\beta_{k,N,B,B}$, which includes only $B$-related steps from $\alpha_{k,N,B,B}$, is an $N$-solo execution admitted by $B$. \qed
Theorem 1. For all $n, k$ such that $1 < k < n$, there is no content-neutral and compositional broadcast abstraction equivalent to $k$-SA in the model $\mathcal{CAMP}_n[\emptyset]$.
Proof. Assume the existence of a content-neutral and compositional broadcast abstraction $B$ that is equivalent to $k$-SA in $\mathcal{CAMP}_n[\emptyset]$. Let $\mathcal{A}$ be an algorithm implementing $k$-SA in $\mathcal{CAMP}_n[B]$, and let $\mathcal{B}$ be an algorithm implementing $B$ in $\mathcal{CAMP}_n[k$-SA]$. Remark that the model $\mathcal{CAMP}_n[\emptyset]$ is functionally identical to the model $\mathcal{CAMP}_{k+1}[\emptyset]$ when $n - k - 1$ processes crash at the start of execution. Hence, the two algorithms would still be correct in the model $\mathcal{CAMP}_{k+1}[\emptyset]$. By Lemma 9, there exists an integer $N > 0$ such that $B$ does not admit any $N$-solo execution. Conversely, by Lemma 10, $B$ admits an $N$-solo execution. This contradiction implies the non-existence of such a broadcast abstraction $B$. \hfill $\Box$
5 Conclusion
This paper investigates the computational equivalence of broadcast abstractions to $k$-set agreement ($k$-SA) in message-passing systems. After introducing two new symmetry properties that define admissible broadcast abstractions, namely compositionality and content-neutrality, we demonstrated that no broadcast abstraction that is both content-neutral and compositional is computationally equivalent to $k$-set agreement when $1 < k < n$. This highlights a crucial distinction in the application of $k$-set agreement in shared memory versus message-passing systems: for $1 < k < n$, $k$-SA is equivalent to a broadcast abstraction in shared memory (specifically, $k$-BO broadcast), but no such equivalence exists in message-passing systems.
As Lamport famously observed in [17], “The concept of time (...) is derived from the more fundamental concept of the order in which events occur.” Therefore, at the abstraction level of message broadcasting in the system, each broadcast abstraction inherently provides a definition of time. At one end of the spectrum, broadcast abstractions that can be implemented solely through send and receive operations, such as Causal broadcast, offer processes a relativistic notion of time, defined by the “happened before” relation, a partial order. At the other extreme, where processes can utilize consensus, the set of broadcast events in Total Order broadcast forms an absolute timeline, known to all processes. Under this interpretation, $k$-SA represents a symmetric predicate on time, and hence an elegant synchronization problem, when utilized within a shared-memory model. However, its inapplicability in message-passing systems calls into question the usefulness of $k$-SA in these contexts.
References
Arrays Made Simpler: An Efficient, Scalable and Thorough Preprocessing
Benjamin Farinier¹, Robin David², Sébastien Bardin¹, and Matthieu Lemerre¹
¹ CEA, LIST, Software Safety and Security Lab, Université Paris-Saclay, France
firstname.lastname@cea.fr
² Quarkslab, Paris, France
rdavid@quarkslab.com
Abstract
The theory of arrays has a central place in software verification due to its ability to model memory or data structures. Yet, this theory is known to be hard to solve in both theory and practice, especially in the case of very long formulas coming from unrolling-based verification methods. Standard simplification techniques à la read-over-write suffer from two main drawbacks: they do not scale on very long sequences of stores and they miss many simplification opportunities because of a crude syntactic (dis-)equality reasoning. We propose a new approach to array formula simplification based on a new dedicated data structure together with original simplifications and low-cost reasoning. The technique is efficient, scalable and it yields significant simplification. The impact on formula resolution is always positive, and it can be dramatic on some specific classes of problems of interest, e.g. very long formula or binary-level symbolic execution. While currently implemented as a preprocessing, the approach would benefit from a deeper integration in an array solver.
1 Introduction
Context. Automatic decision procedures for Satisfiability Modulo Theory [4] are at the heart of almost all recent formal verification methods [11, 6, 10, 26]. In particular, the theory of arrays enjoys a central position in software verification as it makes it possible to model memory or essential data structures such as maps, vectors and hash tables.
Intuitively, given a set $I$ of indexes and a set $E$ of elements, the theory of arrays describes the set $\text{Array } I \times E$ of all arrays mapping each index $i \in I$ to an element $e \in E$. Actually, logical arrays can be seen as infinite updatable maps implicitly defined by a succession of writes from an initial map. These arrays are defined by the two operations read (select) and write (store), whose semantics are given in Fig. 1 by so-called read-over-write axioms (ROW-axioms).
Despite its simplicity, the satisfiability problem for the theory of arrays is NP-complete¹. Indeed, it implies deciding (dis-)equalities between read and written indexes on read-over-write terms (ROW) of the form $\text{select}(\text{store}\ a\ i\ e)\ j$, potentially yielding nested case-splits. Standard decision procedures for arrays consist in eliminating as many ROW terms as possible through a preprocessing step [22], using the axioms from Fig. 1 as rewriting rules, and then enumerating all possible (dis-)equalities in the remaining ROW terms, yielding a potentially huge search space; the remaining ROW-axioms can be introduced lazily to mitigate this issue [9].

¹Reduction of the program equivalence problem in the presence of arrays (sequential, boolean case) to the equivalence problem without arrays but with if-then-else operators, then to SAT [17].

select : Array $\mathcal{I} \times \mathcal{E} \rightarrow \mathcal{I} \rightarrow \mathcal{E}$

\[ \forall a\, i\, e.\ \text{select}\ (\text{store}\ a\ i\ e)\ i = e \]

store : Array $\mathcal{I} \times \mathcal{E} \rightarrow \mathcal{I} \rightarrow \mathcal{E} \rightarrow \text{Array } \mathcal{I} \times \mathcal{E}$

\[ \forall a\, i\, j\, e.\ (i \neq j) \Rightarrow \text{select}\ (\text{store}\ a\ i\ e)\ j = \text{select}\ a\ j \]

Figure 1: The theory of arrays (ROW-axioms)
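The two ROW-axioms of Fig. 1 can be read operationally: a select walks the chain of stores until it finds an equal index. A minimal sketch of this reading, assuming all indexes are concrete values so that (dis-)equality is always decidable (the tuple encoding and `base_lookup` parameter are illustrative choices, not from the paper):

```python
# A logical array is either a base (initial) array or a "store" node.
def store(a, i, e):
    return ("store", a, i, e)

def select(a, j, base_lookup=lambda j: None):
    # Walk the chain of stores, applying the two ROW-axioms of Fig. 1.
    while isinstance(a, tuple) and a[0] == "store":
        _, inner, i, e = a
        if i == j:          # select (store a i e) i = e
            return e
        a = inner           # i != j  =>  select (store a i e) j = select a j
    return base_lookup(j)   # the read reaches the initial array

mem = store(store("mem0", 4, "x"), 8, "y")
assert select(mem, 8) == "y"
assert select(mem, 4) == "x"
assert select(mem, 12) is None  # no write at 12: read on the initial array
```

With symbolic indexes the test `i == j` is no longer decidable, which is exactly where the case-splits discussed above arise.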
Problem and challenge. Yet, this is not satisfactory when considering very long chains of writes, as can be encountered in unfolding-based verification techniques such as Symbolic Execution (SE) [10] or Bounded Model Checking [11] (the case of Deductive Verification is different since user-defined invariants prevent the unfolding). The theory of arrays can then quickly become a bottleneck of constraint solving. In particular, the ROW-simplification step is very often limited, for two reasons. First, exploring backward, for every read, the corresponding list of all writes yields a quadratic time cost (in the number of array operations) and therefore does not scale to very long formulas. This is a major issue in practice as, for example, Symbolic Execution over malware or obfuscated programs [27, 1, 29] may have to consider execution traces of several millions of instructions, yielding formulas with several hundreds of thousands of array operations. Note also that bounding the backward exploration misses too many ROW-simplifications. Second, (dis-)equalities can rarely be decided during preprocessing as standard methods rely on efficient but crude approximate equality checks (typically, syntactic term equality), limiting again the power of these approaches. With such checks, index equality may sometimes be proven, but disequality never can, except in the very restricted case of constant-value indexes.
Our proposal. We present a novel approach to ROW-simplification named FAS (Fast Array Simplification), which scales to very large formulas and simplifies many more ROW terms than previous approaches.
The technique is based on three key components:
- A re-encoding of write sequences (total order) as sequences of packs of independent writes
(partial order), together with a dedicated data structure (map list) ensuring scalability;
- A new simple normalization step (base normalization) allowing to amplify the efficiency of
syntactic (dis-)equality checks;
- A lightweight integration of domain-based reasoning over packs yielding even more successful
(dis-)equality checks for only a slight overhead.
Experimental results demonstrate that FAS scales over very large formulas (several hundreds of thousands of ROW terms) typically coming from Symbolic Execution and can yield very significant gains in terms of runtime, in some cases reducing resolution from hours to seconds.
**Contribution.** Our contribution is two-fold:
- We present in detail the new **fas** preprocessing step for scalable and thorough array constraint simplification (Sec. 4), along with its three key components: dedicated data structure (Sec. 4.1), base normalization (Sec. 4.2) and domain reasoning (Sec. 4.4);
- We experimentally evaluate **fas** in different settings for three leading SMT solvers (Sec. 5).
The technique is fast and scalable, it yields a significant reduction of the number of ROW terms, and it always has a positive impact on resolution time. This impact is even dramatic for some key usage scenarios such as SE-like formulas with small timeouts or very large sizes.
**Discussion.** In our view, **fas** reaches a sweet spot between efficiency and impact on resolution. Experiments demonstrate that even major solvers benefit from it, with gains ranging from slight to very high depending on the setting. While presented as a preprocessing, **fas** would clearly benefit from a deeper integration inside an array solver, in order to take advantage of more simplification opportunities along the resolution process.
## 2 Motivation
Let us detail how the formula in the left part of Fig. 2 can be simplified into the formula in the right part using our new **fas** simplification procedure for arrays. We focus on the last assertion which involves a read on \( \text{mem}_1 \) at index \( j \). According to the semantics of arrays (Fig. 1), we must try to decide whether \( i \) and \( j \) (resp. \( \text{eax}_0 \) and \( i \)) are equal or different. The standard syntactic equality check is not conclusive here. But \( \text{esp}_1 \equiv \text{esp}_0 - 64 \), therefore \( j \) can be rewritten into \( \text{esp}_0 \) (base normalization in **fas**), which is exactly \( i \). Hence \( i = j \) is proven. By applying the array axioms, we deduce that \( \text{eax}_0 \equiv 1415 \), and the last assertion becomes \( \text{select mem}_1 1415 = 9265 \). We now try to decide whether \( i \) and \( 1415 \) are equal or different. Again, the standard syntactic equality check fails. Yet, by the first assertion we deduce that \( i > 61424 \) (domain propagation in **fas**), leading to \( i \neq 1415 \). Therefore \( \text{mem}_1 \) is safely replaced by \( \text{mem}_0 \) in the last assertion, which becomes \( \text{select mem}_0 1415 = 9265 \). Finally, as the assertions in the formula now only refer to \( \text{esp}_0 \) and \( \text{mem}_0 \), we erase all the intermediate definitions to obtain the simplified formula.
This little exercise highlights two important aspects of ROW-simplification. First, simplifications often require (dis-)equality reasoning beyond pure syntactic equality. Second, simplifications involve a backward reasoning through the formula which may become prohibitive on large formulas if not treated with care (not shown here: up to 1h of simplification time in Fig. 10). Our proposal focuses especially on these two aspects.
3 Background
The theory of arrays has been introduced in Fig. 1 (Sec. 1). As already stated, the main difficulty for reasoning over arrays comes from terms of the form \( \text{select} (\text{store}\ a\ i\ e)\ j \), called read-over-write (ROW), since depending on whether \( i = j \) holds or not, the term evaluates to \( e \) (SELECT-HIT) or to \( \text{select}\ a\ j \) (SELECT-MISS). Array (formula) simplification consists in removing as many ROW terms as possible before resolution, by proving (when possible) the validity of the (dis-)equality of such pairs of indexes \((i, j)\) and rewriting the term accordingly. Such simplification procedures critically depend on two factors: 1. the equality check procedure, and 2. the underlying representation of an array and its revisions arising from successive writes.
The equality check must be both efficient (simplifying a formula must be cheaper than solving it) and correct (all proven (dis-)equalities must indeed hold). It can thus only be approximated, i.e. it is incomplete and may miss some valid (dis-)equalities. The standard solution is to rely on syntactic term equality checking. Obviously this is a crude approximation: disequality can never be proven (except for constant-value indexes), and as exemplified in Sec. 2, small syntactic variations of the same value can prevent proving equalities.
We now present two (unsatisfactory) standard array representations, coming either from the decision-procedure community (the list representation: generic but slow) or from the symbolic-execution community (the map representation: efficient but restricted).
Arrays represented as lists. The standard representation of an array and its subsequent revisions is basically a “store-chain”, the linked list of all successive writes in the array. Hence a fresh array is simply an empty list, while the array obtained by writing an element \( e \) at index \( i \) in array \( A \) is represented by a node containing \((i, e)\) and pointing to the list representing \( A \). Fig. 3 illustrates this encoding. This approach is very generic — it can cope with symbolic indexes, and it is the one implicitly used inside array solvers. In order to simplify a read at index \( j \) on array \( A \), one must decide whether \( i = j \) is valid for the pair \((i, e)\) inside the head of the list representing \( A \). If we succeed, then we can apply the ROW axiom and replace the read by value \( e \). Otherwise, we try to decide whether \( i \neq j \) is valid. If this is the case, then we use the second ROW axiom and move backward along the linked list. If not, the simplification process stops.
An inherent problem with this representation is the increase in the simplification cost as the number of writes rises. As mentioned in Sec. 2, this cost becomes prohibitive when dealing with large formulas. Indeed, one might be forced for each read to fully explore the write-list backward, yielding a quadratic worst case time cost. This is especially unfortunate because this worst case arises in situations where the simplification could perform the best, e.g. when all disequalities between indexes hold so that all reads could be replaced with accesses to the initial array (no more ROW). A workaround is to bound the backward exploration of the write-list, which reduces the worst case time cost to linear, but at the expense of limited simplifications (Fig. 10, Sec. 5.4).
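The backward walk and its weakness can be sketched as follows. The three-valued `syntactic_check` is a deliberately crude stand-in for the syntactic equality test (names and encoding are illustrative assumptions): a symbolic index yields `UNKNOWN`, which is exactly where the simplification stops.

```python
EQUAL, DIFFERENT, UNKNOWN = "eq", "neq", "unknown"

def syntactic_check(i, j):
    if i == j:                                    # identical terms
        return EQUAL
    if isinstance(i, int) and isinstance(j, int):  # distinct constants
        return DIFFERENT
    return UNKNOWN                                 # symbolic: inconclusive

def simplify_read(chain, j):
    """chain: list of (index, element) pairs, most recent write first.
    Returns the simplified value, or None if the walk aborts or reaches
    the initial array."""
    for i, e in chain:
        c = syntactic_check(i, j)
        if c == EQUAL:
            return e            # ROW axiom 1: the read hits this write
        if c == UNKNOWN:
            return None         # abort: cannot move past this write
        # DIFFERENT: ROW axiom 2, skip this write and continue backward
    return None                 # read on the initial array

chain = [("k", 7), (3, 1), (2, 0)]
assert simplify_read(chain, "k") == 7   # syntactic hit at the head
assert simplify_read(chain, 3) is None  # blocked by symbolic index "k"
```

The second call illustrates the problem: the write at index 3 is present deeper in the chain, but a single symbolic write in front of it blocks the simplification.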
Arrays represented as maps. In the restricted case where all indexes of reads and writes are constant values, a persistent map with logarithmic lookup and insertion can be used to simplify all row occurrences — yielding fast and scalable simplification. This representation is used in symbolic execution tools [10] with strong concretization policy [23, 14] during the formula generation step in order to limit the introduction of arrays, but it is not suited to general purpose array solvers as it cannot cope with symbolic indexes.
Here, a freshly declared array is represented by an empty map whose index and element sorts correspond to those of the array, and the array obtained after a write of element \( e \) at index \( i \) is simply represented by the map of the written array in which \( e \) is added at index \( i \), as illustrated in Fig. 4. Then the simplification of a read at index \( j \) becomes its substitution by the element mapped to \( j \). In the case where no such element is found, the read occurs on the initial array. Therefore, we can either replace the array by the initial one or replace the read by a fresh symbol. In the latter case, we have to ensure that two reads are replaced by the same symbol if and only if they occur at the same index.
4 Efficient simplification for read-over-write
We now present FAS (Fast Array Simplification), an efficient approach to read-over-write simplification. FAS combines three key ingredients: a new representation for arrays as a list of maps to ensure scalability, a dedicated rewriting step (base normalization) geared at improving the conclusiveness of syntactic (dis-)equality checks between indexes, and lightweight domain reasoning to go beyond purely syntactic checks.
4.1 Dedicated data structure: arrays represented as lists of maps
We look here for an array representation combining the advantages of the list representation (genericity) and the map representation (efficiency) presented in Sec. 3. As a preliminary remark, we can note that the map representation can be extended from the constant-indexes case to the case where all indexes of reads and writes are pairwise comparable. By comparable we mean that a binary comparison operator \( \prec \) is defined and decidable for every pair of indexes in the formula. Yet, while such a hypothesis might sometimes be satisfied, it is not necessarily the case, for example when indexes contain uninterpreted symbols.
The representation of arrays we propose, lists of maps (denoted map lists), aims precisely at combining the advantages of maps when all indexes are pairwise comparable with the generality of lists in other situations. Our array representation can be thought of as a list of packs of independent writes. The idea is that sets of comparable (and proven different) indexes can be packed together into map-like data structures, allowing efficient (i.e. logarithmic) search on these packs of indexes during the application of ROW-like simplification rules. While the idea is presented here in general, we instantiate it in Sec. 4.3, Fig. 7, and in Sec. 4.4, Fig. 8.
In this representation, nodes of the list are maps from pairwise-comparable indexes to written elements, as illustrated in Fig. 5. A fresh array is represented as an empty list (of maps). The array obtained after the write of element \( e \) at index \( i \) is defined by:
• If \( i \) is comparable with all other indexes of elements already inserted in the map at head position, then we add the element \( e \) at index \( i \) into this map (\text{STORE-HIT});
• Else we add to the list a fresh node containing the singleton map of index \( i \) to element \( e \) (\text{STORE-MISS}).
For a read at index \( j \), the simplification of \textsc{row} is done as follows:
• If indexes in the head position map of the list representing the array are all comparable with \( j \), then if \( j \) belongs to this map we substitute the read by the associated element (\text{SELECT-HIT}), else we re-iterate on the following node in the list (\text{SELECT-MISS});
• Else, we abort (\text{SELECT-ABORT}).
A first version of the dedicated (dis-)equality checks we use is presented in Sec. 4.2. The whole \textsc{fas} procedure, together with the associated notion of comparable, is formally described in Sec. 4.3, and a refinement using more semantic checks is presented in Sec. 4.4.
Intuitively, the benefit of this representation is that its behavior varies between that of the list representation and that of the map representation, depending on the proportion of pairwise-comparable indexes. Indeed, when all indexes are pairwise comparable, the list only contains a single map of all indexes, which is equivalent to the map representation. And when none of the index pairs are comparable, the list is composed of singleton maps, which is equivalent to the list representation.
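The store/select rules above can be sketched as follows. The `comparable` predicate here is a deliberately simplified stand-in (two indexes are comparable iff both are integer constants); the paper's instantiation in Sec. 4.3 is richer. Function names and the `"initial"` sentinel are illustrative assumptions.

```python
def comparable(i, j):
    # Simplified stand-in: only integer constants are pairwise comparable.
    return isinstance(i, int) and isinstance(j, int)

def ml_store(packs, i, e):
    """packs: list of dicts, most recent pack first."""
    if packs and all(comparable(i, k) for k in packs[0]):
        head = dict(packs[0])
        head[i] = e                          # STORE-HIT: join the head pack
        return [head] + packs[1:]
    return [{i: e}] + packs                  # STORE-MISS: start a new pack

def ml_select(packs, j):
    for pack in packs:
        if all(comparable(j, k) for k in pack):
            if j in pack:
                return pack[j]               # SELECT-HIT
            continue                         # SELECT-MISS: try the next pack
        return None                          # SELECT-ABORT
    return "initial"                         # read on the initial array

a = []
for idx, val in [(1, "a"), (2, "b"), ("p", "c"), (3, "d"), (4, "e")]:
    a = ml_store(a, idx, val)
# a == [{3: "d", 4: "e"}, {"p": "c"}, {1: "a", 2: "b"}]
assert ml_select(a, 4) == "e"      # found in the head pack
assert ml_select(a, 9) is None     # aborts at the symbolic pack {"p": "c"}
```

The run illustrates Property 1: the symbolic write at `"p"` splits the concrete writes into two packs, and adjacent packs are never comparable, so a SELECT-MISS can only be followed by a SELECT-ABORT (or the initial array).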
From a technical point of view, map lists enjoy several good properties:
**Property 1** (Compactness). By construction, all indexes in any map of a map list are pairwise comparable, while indexes from adjacent maps are never comparable.
**Property 2** (Complexity). Assuming that 1. we can efficiently (in constant or logarithmic time) decide whether an index is comparable to all the other indexes of a given map, 2. \(<\) between comparable terms can also be efficiently decided (constant or logarithmic time), and 3. maps are decently implemented (logarithmic-time insertion and lookup), then:
• Array writes are computed in logarithmic time (map insertion) — where the standard list approach requires only constant time;
• Array reads are also computed in logarithmic time (map lookup), as \text{SELECT-MISS} can only lead to \text{SELECT-ABORT} (Prop. 1) — where the standard list approach requires linear time.
In the case where all indexes are pairwise comparable, our representation contains a single map and simplification cost for \( r \) reads and \( w \) writes is bounded by \( r \cdot \ln(w) \), while the list approach requires a quadratic \( r \cdot w \) time.
Finally, map lists allow us to easily take into account some cases of write-over-write (a write masked by a later write at the same index can be ignored if no read happens in between), while this requires a dedicated and expensive \( (w^2) \) treatment with lists.
4.2 Approximated equality check and dedicated rewriting
As equality check, we consider a variation of syntactic term equality, namely **syntactic base/offset equality**, defined for two terms $t_1$ and $t_2$ as follows:
- If $t_1 \triangleq \beta_1 + \iota_1$ and $t_2 \triangleq \beta_2 + \iota_2$ — where $\beta_1, \beta_2$ are arbitrary terms (bases) and $\iota_1, \iota_2$ are constant values (offsets) — and $\beta_1 = \beta_2$ (syntactically), then return the result of $\iota_1 = \iota_2$;
- Otherwise the check is not conclusive.
This equality check is correct and efficient, and it strictly extends syntactic term equality — the result is conclusive more often. In practice this extension turns out to be significant. Indeed, a common pattern in array formulas coming from software analysis is reads or writes at indexes defined as the sum of a base and an offset (think of C or assembly programming idioms). Hence, dealing with such terms is particularly interesting for verification-oriented formulas.
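The check can be sketched as a three-valued function (an illustrative Python fragment, not the authors' code; terms are simplified to (base, offset) pairs):

```python
# Syntactic base/offset equality: conclusive (True/False) only when the
# two bases are syntactically equal, inconclusive (None) otherwise.
def base_offset_eq(t1, t2):
    (b1, o1), (b2, o2) = t1, t2
    if b1 == b2:          # same base: the constant offsets decide
        return o1 == o2
    return None           # different bases: not conclusive
```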
**Dedicated rewriting: base normalization.** Yet, this equality check still suffers from the rigidity of syntactic approaches. It is therefore worthwhile to normalize indexes as much as possible by applying a dedicated set of rewriting rules called **base normalization** (rebase), cf. Fig. 6. These rules are essentially based on limited inlining of variables together with associativity and commutativity rules of the $+/-$ operators, the goal being to minimize the number of possible bases in order to increase the “conclusiveness” of our equality check, as done in the example of Sec. 2.
```
if $u \triangleq v$ then $u + k \leadsto v + k$ alias inlining
if $u \triangleq v + l$ then $u + k \leadsto v + (k + l)$ base/offset inlining
$- (x + k) \leadsto (-k) - x$ constant negation
$(x + k) + l \leadsto x + (k + l)$ constant packing
$(x + k) + y \leadsto (x + y) + k$ constant lifting
$(x + k) + (y + l) \leadsto (x + y) + (k + l)$ base/offset addition
$(x + k) - (y + l) \leadsto (x - y) + (k - l)$ base/offset subtraction
...
```
Figure 6: Example of base normalization rules. $u, v$ are variables, $k, l$ are constant values and $x, y$ are terms. Non-inlining rules reduce either the number of operators or the depth of constant values, ensuring termination. Note that $(-k), (k + l), (k - l)$ are constant values, not terms.
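To illustrate, here is a small Python sketch of rebasing (the toy term language and the `rebase` name are ours, not the paper's): a term is reduced to (base, offset) form by inlining definitions and packing constants, in the spirit of the rules of Fig. 6:

```python
# Terms are either an int constant, ("var", name), or ("add", subterm, k)
# with k a constant offset. 'defs' maps a variable to its definition,
# enabling the alias and base/offset inlining rules.
def rebase(term, defs):
    """Normalize a term into a (base, offset) pair."""
    if isinstance(term, int):                  # constant term: no base
        return (None, term)
    kind = term[0]
    if kind == "var":
        name = term[1]
        if name in defs:                       # alias / base-offset inlining
            return rebase(defs[name], defs)
        return (term, 0)                       # free variable: it is the base
    if kind == "add":                          # constant packing
        base, off = rebase(term[1], defs)
        return (base, off + term[2])
    raise ValueError("unsupported term")
```

For example, with `u` defined as `v + 3`, the index `u + 4` normalizes to base `v` with offset `7`, making it comparable with any other index rebased on `v`.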
**Optimization: sub-term sharing.** Sub-term sharing consists in giving a common name to two syntactically equal terms. This improvement is not new, but it has an original implication in this context. Besides easing the decision of equality between terms, it remedies an issue induced by the simplification of ROW. Indeed, this simplification can be seen as a kind of “inlining” stage, which may in some cases lead to term-size explosion. The problem arises when, after a write of element $e$ at index $i$, several reads at index $i$ are simplified: this may result in numerous copies of the term $e$, which may itself contain other reads to simplify. By naming and sharing the terms read from and written into arrays, the sub-term sharing phase prevents this issue. Experiments in Sec. 5.4 demonstrate the practical interest on very large formulas.
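A minimal Python sketch of sub-term sharing (a simple form of hash-consing; the class and naming scheme are illustrative, not the authors' implementation):

```python
# Structurally equal terms receive the same short name, so that ROW
# simplification duplicates a name instead of a potentially huge term.
class Sharer:
    def __init__(self):
        self.names = {}   # term structure -> shared name
        self.defs = []    # (name, term) let-bindings, in insertion order

    def share(self, term):
        """Return the shared name of 'term', creating a binding if needed."""
        if term not in self.names:
            name = "t%d" % len(self.defs)
            self.names[term] = name
            self.defs.append((name, term))
        return self.names[term]
```

Emitting the bindings in `defs` as let-definitions keeps the simplified formula linear in the number of distinct sub-terms.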
4.3 The FAS procedure
Using the generic algorithm of Sec. 4.1 with the equality check and normalization from Sec. 4.2, we formalize FAS as the set of inference rules presented in Fig. 7. Two terms are said to be comparable when they share the same base \( \beta \). The STORE-HIT and STORE-MISS rules explain how to update the representation of an array on writes, and the SELECT-HIT and SELECT-MISS rules explain how to simplify reads. Store rules are presented as triples \( \{ \Lambda \}\ \text{store}\ a\ i\ e\ \{ \Lambda' \} \) where \( \Lambda' \) is the representation for \( \text{store}\ a\ i\ e \) when \( \Lambda \) is the representation for \( a \). Select rules are presented as triples \( \{ \Lambda \} \vdash \text{select}\ a\ i \sim e \) meaning that \( \text{select}\ a\ i \) can be rewritten into \( e \) when \( \Lambda \) is the representation for \( a \).
\[
\frac{i = \beta + \iota \qquad \iota \text{ a constant}}
{\{ (\Gamma, \beta, b) :: \Lambda \}\ \text{store}\ a\ i\ e\ \{ (\Gamma[i \leftarrow e], \beta, b) :: \Lambda \}}
\ \textsc{store-hit}
\]

\[
\frac{i = \alpha + \iota \qquad \iota \text{ a constant} \qquad \alpha \neq \beta}
{\{ (\Gamma, \beta, b) :: \Lambda \}\ \text{store}\ a\ i\ e\ \{ (\emptyset[i \leftarrow e], \alpha, a) :: (\Gamma, \beta, b) :: \Lambda \}}
\ \textsc{store-miss}
\]

\[
\frac{\Gamma[i] = e \qquad i = \beta + \iota}
{\{ (\Gamma, \beta, b) :: \Lambda \} \vdash \text{select}\ a\ i \sim e}
\ \textsc{select-hit}
\]

\[
\frac{\{ \Lambda \} \vdash \text{select}\ b\ i \sim e \qquad \Gamma[i] = \emptyset \qquad i = \beta + \iota}
{\{ (\Gamma, \beta, b) :: \Lambda \} \vdash \text{select}\ a\ i \sim e}
\ \textsc{select-miss}
\]
Figure 7: Inference rules for \text{select} and \text{store} using the map list representation
The representation \( (\Gamma, \beta, b) :: \Lambda \) we use is a specialized version of the map list representation that we just defined, where \( \Gamma \) is a map, \( \beta \) is the common base of indexes present in \( \Gamma \), \( b \) the last revision of the array written at a different index than \( \beta \), and where \( \Lambda \) is the tail of the list. Assuming that all indexes have been normalized, if the base of the write index is equal to \( \beta \), then the store-hit rule applies and we add the written element into \( \Gamma \). If the base of the write index is not equal to \( \beta \), then the store-miss rule applies. We add as a new node of the list a singleton map containing only the written element, the new base and the written array. For row-simplification, the select-hit rule states that if the base of the read index is equal to \( \beta \), and if there is an element in \( \Gamma \) mapped to this index, then we return this element. Finally the select-miss rule states that if there is no such element, then we return the simplified read on \( b \) at the same index, using \( \Lambda \) as the representation.
4.4 Refinement: adding domain-based reasoning
While our equality check performs well for deciding (dis-)equalities between indexes with a same base, it behaves poorly with different bases. So we extend FAS in Fig. 8 with domain-based reasoning abilities. Basically, maps are now equipped with abstract domains over-approximating their sets of (possible) concrete indexes, and the data structure is now a list of sets of maps, all maps in a set having different bases but disjoint sets of concrete indexes. When syntactic base/offset equality check is not conclusive, domain intersection may be used to prove disequality.
We borrow ideas from Abstract Interpretation [13]. Given a concrete domain \( D \), an abstract domain is a complete lattice \( \langle D^\sharp, \sqsubseteq, \sqcup, \sqcap, T, \bot \rangle \) coming with a monotonic concretization function \( \gamma : D^\sharp \rightarrow \mathcal{P}(D) \) such that \( \gamma(T) = D \) and \( \gamma(\bot) = \emptyset \). An element of an abstract domain is called an abstract value. In the following the concrete domain is the set of array indexes.
The representation is now a list of sets of tuples \( (\Gamma, \beta, b, \Gamma^\sharp) \) where \( \Gamma \), \( \beta \) and \( b \) are a map, a base and an array as previously described, and where \( \Gamma^\sharp \) is the joined abstract value of the indexes in \( \Gamma \).
**Domain propagation.** So far, we have not explained how abstract values are computed. The literature on abstract domains is plentiful \([28]\). Nevertheless we present in Fig. 9 propagation rules for a specific abstract domain, the well-known domain of (multi-)intervals — used in our implementation (note that operations are performed over bitvectors of a known size \(N\), and \(+\) is the wraparound addition). The general difficulty is to find a sweet spot between the potential gain (more checks become conclusive) and the overhead of propagation. As a rule of thumb, non-relational domains should be tractable and useful. Especially, combining multi-intervals with congruence (e.g. \(x \equiv 5 \mod 8\)) or bit-level information (e.g. the second bit of \(x\) is 1) \([3]\) is a good candidate for refining our method at an affordable cost.
\[
\frac{\begin{array}{c}
i = \beta + \iota \qquad \iota \text{ a constant} \\
\Theta = \{ \langle \Sigma, \sigma, c, \Sigma^\sharp \rangle \mid \sigma \neq \beta \wedge \Sigma^\sharp \sqcap i^\sharp = \bot \} \qquad
\Xi = \{ \langle \Sigma, \sigma, c, \Sigma^\sharp \rangle \mid \sigma \neq \beta \wedge \Sigma^\sharp \sqcap i^\sharp \neq \bot \}
\end{array}}
{\{ (\{ \langle \Gamma, \beta, b, \Gamma^\sharp \rangle \} \uplus \Theta \uplus \Xi) :: \Lambda \}\ \text{store}\ a\ i\ e\ \{ (\{ \langle \Gamma[i \leftarrow e], \beta, b, \Gamma^\sharp \sqcup i^\sharp \rangle \} \uplus \Theta) :: \Xi :: \Lambda \}}
\ \textsc{store-hit}
\]

\[
\frac{\begin{array}{c}
i = \alpha + \iota \qquad \iota \text{ a constant} \\
\Theta = \{ \langle \Sigma, \sigma, c, \Sigma^\sharp \rangle \mid \sigma \neq \alpha \wedge \Sigma^\sharp \sqcap i^\sharp = \bot \} \qquad
\Xi = \{ \langle \Sigma, \sigma, c, \Sigma^\sharp \rangle \mid \sigma \neq \alpha \wedge \Sigma^\sharp \sqcap i^\sharp \neq \bot \}
\end{array}}
{\{ (\Theta \uplus \Xi) :: \Lambda \}\ \text{store}\ a\ i\ e\ \{ (\{ \langle \emptyset[i \leftarrow e], \alpha, a, i^\sharp \rangle \} \uplus \Theta) :: \Xi :: \Lambda \}}
\ \textsc{store-miss}
\]

\[
\frac{\Gamma[i] = e \qquad i = \beta + \iota}
{\{ (\{ \langle \Gamma, \beta, b, \Gamma^\sharp \rangle \} \uplus \Xi) :: \Lambda \} \vdash \text{select}\ a\ i \sim e}
\ \textsc{select-hit}
\]

\[
\frac{\{ \Lambda \} \vdash \text{select}\ b\ i \sim e \qquad \Gamma[i] = \emptyset \qquad i = \beta + \iota}
{\{ (\{ \langle \Gamma, \beta, b, \Gamma^\sharp \rangle \} \uplus \Xi) :: \Lambda \} \vdash \text{select}\ a\ i \sim e}
\ \textsc{select-miss}
\]

\[
\frac{\{ \Lambda \} \vdash \text{select}\ b\ i \sim e \qquad i = \beta + \iota \qquad \Theta = \{ \langle \Sigma, \sigma, b, \Sigma^\sharp \rangle \mid \Sigma^\sharp \sqcap i^\sharp = \bot \}}
{\{ \Theta :: \Lambda \} \vdash \text{select}\ a\ i \sim e}
\ \textsc{select-skip}
\]
Figure 8: Inference rules for select and store using domains
Given a write at index \(i\), the set at head position in the list is split into: 1. \(\Theta\), the set of tuples whose map abstract value does not overlap with \(i^\#\), the abstract value of \(i\); 2. \(\Xi\), the set of tuples whose map abstract value overlaps with \(i^\#\); and, if it exists, 3. the tuple \(\langle \Gamma, \beta, b, \Gamma^\# \rangle\) where \(\beta\) is, after normalization, the base of \(i\). If this tuple exists, then the STORE-HIT rule applies. We update \(\Gamma\) as previously, and its associated abstract value becomes the join of \(\Gamma^\#\) and \(i^\#\). We first append \(\Xi\) alone onto the list, and then \(\Theta\) together with the updated tuple. Otherwise, the STORE-MISS rule applies: again we first append \(\Xi\) alone, then \(\Theta\) together with a new singleton map, the new base, the written array and the write index abstract value. Finally, SELECT-HIT and SELECT-MISS are similar to the previous ones, but we add a new rule, SELECT-SKIP. This rule states that, if the read index abstract value does not overlap with the maps' abstract values in the set at head position, then we drop the head and reiterate on the tail of the list.
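The disequality check underlying SELECT-SKIP can be sketched as follows (an illustrative Python fragment, not the authors' OCaml code), with multi-intervals modelled as lists of (lo, hi) pairs:

```python
# When the base/offset check is inconclusive (different bases), an empty
# intersection of the two multi-interval abstractions proves that the
# indexes are different, allowing the read to skip the whole head set.
def disjoint(mi1, mi2):
    """True iff two multi-intervals have no common concrete value."""
    return all(hi1 < lo2 or hi2 < lo1
               for (lo1, hi1) in mi1
               for (lo2, hi2) in mi2)
```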
Note that if abstract values in these rules are set to \(\top\), then \(\Theta\) is always empty and we get back to the previous inference rules. Note also that the worst-case complexity of reads becomes linear in the size of the list, as domain reasoning may have to prove disequalities at each node of the list. Yet this is not a problem in practice, as demonstrated by the experimental evaluation in Sec. 5.
Let $i, j$ be two bitvectors of size $N$, with $i^\sharp = [m_i, M_i]$ and $j^\sharp = [m_j, M_j]$ where $0 \leq m_{i,j} \leq M_{i,j} < 2^N$:
- $c^\sharp = [c, c]$ for any constant $c$
- $v^\sharp = [m_i, M_j]$ if $i \leq v \leq j$
- $(\text{extract}_{l,h}\, i)^\sharp = [0, 2^{h-l+1} - 1]$ if $(M_i \gg l) - (m_i \gg l) \geq 2^{h-l+1}$
- $\phantom{(\text{extract}_{l,h}\, i)^\sharp} = [\text{extract}_{l,h}(m_i), \text{extract}_{l,h}(M_i)]$ if $\text{extract}_{l,h}(M_i) \geq \text{extract}_{l,h}(m_i)$
- $\phantom{(\text{extract}_{l,h}\, i)^\sharp} = [0, \text{extract}_{l,h}(M_i)] \cup [\text{extract}_{l,h}(m_i), 2^{h-l+1} - 1]$ otherwise
- $(i + j)^\sharp = [m_i + m_j, M_i + M_j]$ if $M_i + M_j < 2^N$
- $\phantom{(i + j)^\sharp} = [m_i + m_j - 2^N, M_i + M_j - 2^N]$ if $m_i + m_j \geq 2^N$
- $\phantom{(i + j)^\sharp} = [m_i + m_j, 2^N - 1] \cup [0, M_i + M_j - 2^N]$ otherwise
Figure 9: Examples of propagation for intervals. These propagations are extended to multi-intervals by distribution for unary operators and pairwise distribution for binary operators.
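As an illustration of the addition rule above, here is a Python sketch (the actual implementation is in OCaml) of wraparound interval addition over bitvectors of size $N$, returning a multi-interval when the sum wraps only partially:

```python
# Intervals are (m, M) pairs with 0 <= m <= M < 2**N; the result is a
# list of intervals, i.e. a multi-interval.
def interval_add(i, j, N):
    (mi, Mi), (mj, Mj) = i, j
    lo, hi = mi + mj, Mi + Mj
    if hi < 2 ** N:                       # no wraparound at all
        return [(lo, hi)]
    if lo >= 2 ** N:                      # both bounds wrap around
        return [(lo - 2 ** N, hi - 2 ** N)]
    # Only the upper bound wraps: the concretization splits in two.
    return [(lo, 2 ** N - 1), (0, hi - 2 ** N)]
```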
5 Implementation and experimental evaluation
5.1 Implementation
In order to evaluate the efficiency of our approach, we implemented FAS (with the different representations presented so far and the abstract domain of multi-intervals) as a preprocessor for SMT formulas belonging to the QF_ABV logic (quantifier-free formulas over the theory of bitvectors and arrays) — a typical choice in software verification. In that setting, all bitvector values and expressions have statically known sizes, arithmetic operations are performed modulo and values can “wraparound”. For reproducibility purposes, source code and benchmarks are available online. The implementation comprises 6,300 lines of OCaml integrated into the TFML SMT formula preprocessing engine [19], part of the BINSEC symbolic execution tool [15]. It includes all simplifications and optimizations described in Sec. 4: map lists, base normalization, sub-term sharing and domain propagation (multi-intervals) over bitvectors. Note that our normalization rules (Sec. 4.2) and domain propagators (Sec. 4.4) correctly handle possible arithmetic wraparounds.
An advantage of operating as a preprocessor is to be independent of the underlying solver used for formula resolution, which allows us to evaluate the impact of our approach with several of them. A drawback is that we do not have access to internal components of the solver, such as the model under construction, and cannot use them to refine our approach. In the long term, a deeper integration into a solver would be more suitable.
5.2 Experimental setup
We evaluated FAS performance under three criteria: 1. simplification thoroughness, measured by the reduction of the number of row terms; 2. simplification impact, measured by resolution time before and after simplification; 3. simplification cost, measured by the total time of simplification.
We devise three sets of experiments corresponding to three different scenarios: mid-sized formulas generated by the SE tool BINSEC [15] from real executable programs — typical of test generation and vulnerability finding (cf. Sec. 5.3), very large formulas generated by BINSEC from
---
2http://benjamin.farinier.org/lpar2018/
very long traces — typical of reverse and malware analysis (cf. Sec. 5.4), and formulas taken from the SMT-LIB benchmarks (cf. Sec. 5.5). Regarding experiments over SE-generated formulas, we also consider three variants corresponding to standard concretization / symbolization policies [14] (cf. Sec. 5.3), as well as different timeout values. Experiments are carried out on an Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz. We consider three of the best SMT solvers for the QF_ABV theory, namely Boolector [8], Yices [18] and Z3 [16]. Note that the impact of map lists (w.r.t. a list-based representation) and sub-term sharing will be evaluated only in Sec. 5.4, as they are interesting only on large enough formulas. Moreover, the map list representation impacts only preprocessing time, not its thoroughness: assuming preprocessing does not time out (and rebase and domains are used), FAS and FAS-list will carry out the same simplifications.
A note on problem encoding. As already stated, we consider quantifier-free formulas over the theory of bitvectors and arrays coming from the encoding of low-level software verification problems. Arithmetic operations are performed modulo and values can “wraparound”. Also, since memory accesses in real hardware are performed at word level (reading 4 or 8 bytes at once), they are modelled here by successive byte-level reads and writes, allowing misaligned or overlapping accesses to be properly taken into account. Finally, memory is often modelled as a single logical array of bytes (i.e., bitvector values of size 8), without any a priori distinction between stack and heap (this is the case for all examples from Binsec).
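For illustration, a hypothetical Python sketch of this byte-level encoding (`byte_select` and the dictionary-based memory are stand-ins for the logical array operations): a 4-byte little-endian load is expressed as four byte-level selects.

```python
# A word-level load at address 'addr' modelled as 4 byte-level selects,
# recombined little-endian; misaligned and overlapping accesses are
# handled for free by this byte-granular encoding.
def load32(select, mem, addr):
    return sum(select(mem, addr + k) << (8 * k) for k in range(4))
```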
5.3 Medium-size formulas from SE
We consider here typical formulas coming from symbolic execution over executable codes. While mid-sized (max. 3.42 MB, avg. 1.40 MB), these formulas comprise quite long sequences of nested row (max. 11,368 row, avg. 4,726 row) as there is only one initial array (corresponding to the initial memory of the execution, i.e. a flat memory model). More precisely, we consider 6,590 traces generated by Binsec [15] from 10 security challenges (e.g. crackme such as Manticore or Flare-On) and vulnerability finding problems (e.g. a GRUB vulnerability), and from these traces we generate 3 x 6,590 formulas depending on the concretization / symbolization policies used in Binsec to generate them: concrete (all array indexes are set to constant values), symbolic (symbolic array indexes), and interval (array indexes bounded by intervals). We consider two different timeouts: 1,000 seconds (close to the SMT-LIB benchmarks setting) and 1 second (typical of program analyses involving a large number of solver calls, e.g. deductive verification or symbolic execution).
The full results are presented in Table 1 (timeout 1,000 sec.) and Table 2 (timeout 1 sec.). Note that resolution time does not include timeouts. Columns FAS and FAS-itv represent respectively our technique (map list, rebase and sharing) and its improvement with interval-based domain reasoning. The default column represents a minimal preprocessing step consisting of constant propagation and formula pruning, without any array simplification.
We can see that:
- Simplification time is always very low on these examples (340 sec. for 3 x 6,590 formulas, i.e. 0.017 sec. per formula on average). Moreover, it is also very low w.r.t. resolution time (taking timeouts into account: Boolector 6%, Yices 4% and Z3 0.3%) and largely compensated by the gains in resolution, except for one case where Boolector performs especially well (concrete formulas: cost of 118%, not compensated by gains in resolution).
- Formula simplification is indeed thorough: as a whole, the number of row is reduced by a factor 5 (2.5 without interval reasoning). The simplification performs extremely well, as
Arrays Made Simpler
B. Farinier, R. David, S. Bardin, M. Lemerre
Table 1: 6,590 x 3 medium-size formulas from SE, with \textit{timeout} = 1,000 sec.: simplification time (in seconds), number of \textit{ROW} after simplification, number of \textit{timeout} and resolution time (in seconds, without \textit{timeout})
[Table data garbled in extraction: for each formula category (concrete, interval, symbolic) and each configuration (default, FAS, FAS-itv), the table reported simplification time, per-solver (Boolector, Yices, Z3) timeout counts and resolution times, and the number of remaining ROW.]
Table 2: 6,590 x 3 medium-size formulas from SE, with \textit{timeout} = 1 sec.
[Table data garbled in extraction: same structure as Table 1, with timeout = 1 sec.]
expected, on \textit{concrete} formulas, where almost all \textit{ROW} instances are solved at preprocessing time. On \textit{interval} formulas, the number of \textit{ROW} is cut by a factor of 4, and by a factor of 3 in the case of \textit{fully symbolic} formulas.
- The \textit{impact} of the simplification on resolution time (for a 1,000 sec. \textit{timeout}) varies greatly from one solver to another, but it is always significant: factor 1.5 for Boolector, factor 1.9 for Yices with one fewer \textit{timeout}, up to a factor 3.8 and 32 fewer \textit{timeouts} for Z3. In particular, on interval formulas \textit{FAS} with domain reasoning yields a 3.4 (resp. 3.3) speedup for Boolector (resp. Yices), while Z3 on this category enjoys a 4.1 speedup together with 14 fewer \textit{timeouts}. Interestingly, domain reasoning is also useful in the case of fully symbolic formulas, i.e. with no explicit introduction of domain-based constraints.
Results for a 1 sec. \textit{timeout} follow the same trend but are much more significant (number of \textit{timeouts}: Boolector -48%, Yices -47% and Z3 -21%), and they become especially dramatic on interval formulas (number of \textit{timeouts}: Boolector -96%, Yices -90% and Z3 -44%).
Focus on specific cases. We now highlight a few interesting scenarios where FAS performs very well, especially formulas generated from the GRUB vulnerability (Table 3, 753 formulas) and formulas representing the inversion of a crypto-like challenge, UNGAR (Table 4, 139 formulas). Regarding GRUB, while basic FAS does not really impact resolution time, adding domain-based reasoning does allow a significant improvement — Boolector, Yices and Z3 becoming respectively 4.1x, 4.7x and 7x faster. Regarding UNGAR, again FAS alone does not improve resolution time (for Z3, we even see worse performance), but adding interval reasoning yields a dramatic improvement: Boolector becomes 18.8x faster, Yices becomes 48.2x faster (with one fewer timeout) and Z3 does not time out anymore (-12 timeouts).
Table 3: GRUB (interval), 753 formulas — Number of timeout and resolution time (in seconds, without timeout)
<table>
<thead>
<tr>
<th rowspan="2">GRUB</th>
<th colspan="2">Boolector</th>
<th colspan="2">Yices</th>
<th colspan="2">Z3</th>
</tr>
<tr>
<th>#timeout</th><th>res. time</th>
<th>#timeout</th><th>res. time</th>
<th>#timeout</th><th>res. time</th>
</tr>
</thead>
<tbody>
<tr><td>default</td><td>0</td><td>508</td><td>0</td><td>258</td><td>0</td><td>31,322</td></tr>
<tr><td>FAS</td><td>0</td><td>505</td><td>0</td><td>257</td><td>1</td><td>26,809</td></tr>
<tr><td>FAS-itv</td><td>0</td><td>123</td><td>0</td><td>54</td><td>0</td><td>4,481</td></tr>
</tbody>
</table>
Table 4: UNGAR (symbolic), 139 formulas — Number of timeout and resolution time (in seconds, without timeout)
<table>
<thead>
<tr>
<th rowspan="2">UNGAR</th>
<th colspan="2">Boolector</th>
<th colspan="2">Yices</th>
<th colspan="2">Z3</th>
</tr>
<tr>
<th>#timeout</th><th>res. time</th>
<th>#timeout</th><th>res. time</th>
<th>#timeout</th><th>res. time</th>
</tr>
</thead>
<tbody>
<tr><td>default</td><td>0</td><td>359</td><td>3</td><td>627</td><td>12</td><td>926</td></tr>
<tr><td>FAS</td><td>0</td><td>373</td><td>3</td><td>624</td><td>12</td><td>1,130</td></tr>
<tr><td>FAS-itv</td><td>0</td><td>19</td><td>2</td><td>13</td><td>0</td><td>569</td></tr>
</tbody>
</table>
Conclusion. On these middle-size formulas coming from typical SE problems, we can draw the following conclusions. Speed: FAS is extremely efficient and does not yield any noticeable overhead. Thoroughness: formula simplification is significant — even on fully symbolic formulas — and it becomes (as expected) dramatic on “concrete” formulas. Impact: the impact of FAS varies across solvers and formula categories, yet it is always positive and can be dramatic in some settings (low timeout, interval formulas, etc.).
5.4 Very large formulas
We now turn our attention to large formulas (max. 458 MB, avg. 45 MB) involving very long sequences of nested row (max. 510,066 row, avg. 49,850 row), as can be found for example in symbolic deobfuscation. We consider 29 benchmarks taken from a recent paper on the topic [27], representing execution traces over (mostly non-crypto) hash functions (e.g. MD5, City, Fast, Spooky, etc.) obfuscated by the Tigress tool [12]. We also consider a trace taken from the ASPack packing tool.
Results are presented in Table 5, where FAS-list represents our simplification method with the map list replaced by a normal list — an improved version of the standard list-based row-simplification (the goal being to evaluate the gain of our new data structure). Again, simplification is significant, with a strong impact on the number of timeouts and on resolution time, especially in the concrete case and for Z3. The impact in the symbolic case is more mixed but positive (-1 timeout for Boolector and Z3, no impact for Yices). In terms of size, FAS reduces formulas to max. 86.49MB, avg. 6.98MB, and FAS-itv to max. 86.45MB, avg. 6.17MB. If sub-term sharing is disabled, formula sizes jump to max. 591.99MB, avg. 14.95MB for FAS and max. 591.71MB, avg. 16.35MB for FAS-itv. Regarding simplification time, FAS-list suffers from scalability issues on these formulas (5x slower than FAS).
Table 5: 29 x 3 very large formulas from SE, with \textsc{timeout} = 1,000 sec.: simplification time (in seconds), number of \textsc{row} after simplification, number of \textsc{timeout} and resolution time (in seconds, without \textsc{timeout})
<table>
<thead>
<tr><th></th><th>simpl. time</th><th>#timeout (Boolector)</th></tr>
</thead>
<tbody>
<tr><td colspan="3">concrete</td></tr>
<tr><td>default</td><td>44</td><td>10</td></tr>
<tr><td>FAS-list</td><td>1,108</td><td>8</td></tr>
<tr><td>FAS</td><td>196</td><td>8</td></tr>
<tr><td>FAS-itv</td><td>210</td><td>4</td></tr>
<tr><td colspan="3">interval</td></tr>
<tr><td>default</td><td>44</td><td>12</td></tr>
<tr><td>FAS-list</td><td>222</td><td>12</td></tr>
<tr><td>FAS</td><td>231</td><td>12</td></tr>
<tr><td>FAS-itv</td><td>237</td><td>12</td></tr>
<tr><td colspan="3">symbolic</td></tr>
<tr><td>default</td><td>40</td><td>12</td></tr>
<tr><td>FAS-list</td><td>187</td><td>11</td></tr>
<tr><td>FAS</td><td>194</td><td>11</td></tr>
<tr><td>FAS-itv</td><td>200</td><td>11</td></tr>
</tbody>
</table>

[Yices and Z3 results and the ROW counts were lost in extraction.]
The \textsc{ASPack} example. We now turn our attention to the formula generated from a trace of a program protected by \textsc{ASPack} (96 MB and 363,594 \textsc{row}, \textsc{concrete} mode). Solving the formula is highly challenging: while \textsc{Yices} succeeds in a decent amount of time (69 seconds), \textsc{Z3} terminates in 2h36min and \textsc{Boolector} needs about 24h. Table 6 presents our results on this particular example. \textsc{FAS} performs extremely well, turning resolution time from hours to a few seconds (\textsc{Boolector}) or minutes (\textsc{Z3}). \textsc{Yices} also benefits from it. In particular, all \textsc{row} instances are simplified away. \textsc{FAS} and \textsc{FAS-itv} reduce the \textsc{ASPack} formula size to 3.81MB, while it jumps to 443.54MB when sub-term sharing is disabled.
Interestingly, this example clearly highlights the scalability of \textsc{FAS} w.r.t. a standard list-based approach, passing roughly from 1h (\textsc{list}) to 1 minute (\textsc{fas}) of simplification time. Fig. 10 proposes a detailed view of the performance and impact of the standard list-based simplification method (\textsc{Boolector} only), depending on the bound for backward reasoning (the standard method has no bound). For comparison, the two horizontal lines represent simplification and resolution time with \textsc{FAS}. We can see that bounding the list-based reasoning has no tangible effect here, as we need at least 3,000 seconds (50 minutes) of simplification time to get a resolution time under 3,000 seconds.
Table 6: \textsc{ASPack} formula, without \textsc{timeout}
<table>
<thead>
<tr><th>\textsc{ASPack}</th><th>simpl. time</th><th>Boolector</th><th>Yices</th></tr>
</thead>
<tbody>
<tr><td>default</td><td>15 sec.</td><td>\approx 24h</td><td>69 sec.</td></tr>
<tr><td>FAS-list</td><td>53 min.</td><td>9.7 sec.</td><td>3.4 sec.</td></tr>
<tr><td>FAS</td><td>61 sec.</td><td>9.7 sec.</td><td>3.4 sec.</td></tr>
<tr><td>FAS-itv</td><td>63 sec.</td><td>9.8 sec.</td><td>3.4 sec.</td></tr>
</tbody>
</table>

[Z3 resolution times and the ROW counts were lost in extraction.]
Conclusion. Once again FAS appears to be fast and to have a significant impact on resolution time, especially in the concrete case where the difference can be from several hours to a few seconds (total resolution + simplification: a few minutes). Moreover, it appears clearly that on very long traces FAS scales much better than the standard list-based ROW-simplification method.
5.5 SMT-LIB formulas
We now consider the impact of FAS on formulas taken from the SMT-LIB benchmarks. These formulas are notably different from those of the two previous experiments: while most of them do come from verification problems, they may involve a complex Boolean structure (rather than “mostly conjunctive” formulas) and they do not necessarily exhibit very deep chains of ROW. Such formulas are not our primary objective, yet we seek to evaluate how our technique performs on a “bad case”. We evaluate FAS on all 15,016 SMT-LIB formulas from the QF_ABV theory. TIMEOUT is set to 1,000 seconds. Results are reported in Table 7. Note that, again, resolution time does not include TIMEOUT.
Table 7: 15,016 formulas from SMT-LIB benchmarks, with TIMEOUT = 1,000 sec.: simplification time (in seconds), number of TIMEOUT and resolution time (in seconds, without TIMEOUT)
<table>
<thead>
<tr>
<th>SMT-LIB</th>
<th>simpl. time</th>
<th>Boolector (#TIMEOUT, resolution time)</th>
<th>Yices (#TIMEOUT)</th>
</tr>
</thead>
<tbody>
<tr>
<td>default</td>
<td>87</td>
<td>59, 20,126</td>
<td>151</td>
</tr>
<tr>
<td>FAS</td>
<td>378</td>
<td>54, 19,922</td>
<td>148</td>
</tr>
<tr>
<td>FAS-itv</td>
<td>378</td>
<td>55, 19,843</td>
<td>146</td>
</tr>
</tbody>
</table>
Conclusion. FAS is again very efficient on these formulas (avg. 0.025 sec. per formula), and reduces the number of ROW by 14%. The impact of the simplifications, while slight, is clearly positive both on the number of TIMEOUT (Boolector -8%, Yices -2% and Z3 -7%) and on resolution time (for Yices, only when taking TIMEOUT time into account). Such gains are not anecdotal, as the best SMT solvers are highly tuned for SMT-LIB. Since the number of TIMEOUT is the main metric for SMT-LIB, Boolector with FAS would have won the last edition for the QF_ABV theory. Finally, domain reasoning does not add anything here (except for Yices): either the benchmark formulas do not exhibit such interval constraints, or our propagation mechanism is too crude to take advantage of them.
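For concreteness, here is a toy QF_ABV formula in SMT-LIB 2 syntax (a hypothetical instance of our own, not one of the benchmark formulas) containing a read-over-write term. Since all indexes are concrete, a preprocessor in the spirit of FAS can rewrite the select to the constant #xff without introducing any case split:

```smt2
(set-logic QF_ABV)
(declare-const mem (Array (_ BitVec 32) (_ BitVec 8)))
(declare-const x (_ BitVec 8))
; read-over-write: the select reads back the last write to #x00000004,
; so the whole right-hand side simplifies to the constant #xff
(assert (= x (select (store (store mem #x00000000 #x2a)
                            #x00000004 #xff)
                     #x00000004)))
(check-sat)
```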
5.6 Conclusion
Our experiments demonstrate that our approach is efficient (the cost is almost always negligible w.r.t. resolution time) and scalable (compared to the list-based method). The simplification is thorough, removing a large fraction of ROW. The impact is always positive, both in resolution time and in number of timeouts, and it is dramatic for some key usage scenarios such as SE-like formulas with a small TIMEOUT or a very large size.
Finally, we note that domain reasoning is usually helpful (though not on SMT-LIB formulas) and that it shows a powerful synergy with the “interval C/S policy” in SE, yielding a new and interesting sweet spot between tractability and genericity of reasoning.
6 Related work
A preliminary, work-in-progress version of this work was published in a French workshop [20], in French (6 pages). The current article adds a much more refined description, the domain reasoning part, and a much more systematic and thorough experimental evaluation (including SMT-LIB, long traces over packed hash functions, etc.).
Decision procedures for the theory of arrays. Surprisingly, there have been relatively few works on the efficient handling of the (basic) theory of arrays. Standard symbolic approaches for pure arrays complement symbolic read-over-write preprocessing [22, 5, 2] with enumeration on (dis-)equalities, yielding a potentially huge search space. New array lemmas can be added on-demand or incrementally discovered through an abstraction-refinement scheme [9]. Another possibility is to reduce the theory of arrays to the theory of equality by systematic “inlining” of the array axioms to remove all store operators, at the price of introducing many case-splits. The encoding can be eager [24] or lazy [9]. Our method generalizes previous preprocessing [22, 2] and is complementary to complete resolution methods [9, 24]. Note also that our approach could benefit from being integrated directly within such a complete resolution method, allowing incremental simplification all along the resolution process.
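The “inlining” reduction mentioned above can be sketched as follows (a simplified illustration with our own toy term representation, not the encoding of [24] or [9]): every select over a store is rewritten using the read-over-write axiom select(store(a, i, v), j) = ite(i = j, v, select(a, j)), eliminating store operators at the price of one case split per write.

```python
# Tiny term representation: tuples ("store", a, i, v), ("select", a, j),
# ("ite", c, t, e), ("=", x, y); any non-tuple value is an atom.
def inline_stores(term):
    """Eagerly rewrite select-over-store using the read-over-write axiom,
    removing store operators but introducing one ite case split per write."""
    if not isinstance(term, tuple):
        return term
    term = tuple(inline_stores(t) for t in term)  # rewrite subterms first
    if term[0] == "select" and isinstance(term[1], tuple) and term[1][0] == "store":
        _, (_, a, i, v), j = term
        # select(store(a, i, v), j) -> ite(i = j, v, select(a, j))
        return inline_stores(("ite", ("=", i, j), v, ("select", a, j)))
    return term

t = ("select", ("store", ("store", "a", "i1", "v1"), "i2", "v2"), "j")
inline_stores(t)
# -> ("ite", ("=", "i2", "j"), "v2",
#        ("ite", ("=", "i1", "j"), "v1", ("select", "a", "j")))
```

The example shows how a chain of just two writes already yields nested case splits; on the deep ROW chains of Sec. 5 this blow-up is exactly what a preprocessing step like FAS tries to avoid.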
Decision procedures have also been developed for expressive extensions of the array theory, such as arrays with extensionality (i.e., equality over whole arrays) or the array property fragment [7], which enables limited forms of quantification over indexes and arithmetic constraints. These extensions aim at increasing expressiveness and do not focus so much on practical efficiency. Our method can also be applied in these settings (as ROW terms remain a crucial issue), even though it will not cover all the difficulties of these extensions.
Optimized handling of arrays inside tools. Many verification and program analysis tools and techniques ultimately rely on solving logical formulas involving the theory of arrays. Since the common practice is to re-use existing (SMT) solvers, these approaches suffer from the limitations of the current solvers over arrays. As a mitigation, some of these tools take into account knowledge from the application domain in order to generate relevant (but usually not equivalent) and simpler formulas [25, 21] — see also the specific case of SE over concrete indexes discussed in Sec. 3. Our method is complementary to these approaches as it operates on arbitrary formulas and the simplification keeps logical equivalence.
7 Conclusion
The theory of arrays has a central place in software verification due to its ability to model memory and data structures. Yet, this theory is known to be hard to solve because of read-over-write terms (ROW), especially in the case of very large formulas coming from unrolling-based verification methods. We have presented FAS, an original simplification method for the theory of arrays geared at eliminating ROW, based on a new dedicated data structure together with original simplifications and low-cost reasoning. The technique is efficient and scalable, and it yields significant simplification. The impact on formula resolution is always positive, and it can be dramatic on some specific classes of problems of interest, e.g., very long formulas or binary-level symbolic execution. These advantages have been demonstrated experimentally both on realistic formulas coming from symbolic execution and on SMT-LIB formulas.
Future work includes a deeper integration inside a dedicated array solver in order to benefit from more simplification opportunities along the resolution process, as well as exploring the interest of adding more expressive domain reasoning.
References
April 2018
Sight.js: Data Analysis in Splunk
Alexander Dyer
Worcester Polytechnic Institute
Sight.js: Data Analysis in Splunk

A Major Qualifying Project

Written by: Alexander Dyer
Advisor: Lane Harrison

Worcester Polytechnic Institute

Submitted to the Faculty of the Worcester Polytechnic Institute in partial fulfillment of the requirements for the Degree of Bachelor of Science in Computer Science.

October 24th, 2017 - April 26th, 2018
Abstract
A comparison of tools that track user interaction with web visualizations, and an assertion of the necessities for gathering meaningful insights from user interactions with web visualizations.
# Table of Contents

- List of Tables
- List of Figures
- 1 Introduction
  - 1.1 User Interaction in the Information Age
  - 1.2 Examination of the Design Space
  - 1.3 The Data Hierarchy of Visualizations on the Web
  - 1.4 Closing the Feedback Loop in Web Visualization
- 2 Background
  - 2.1 Understanding User Interactions through Interaction Logs
  - 2.2 Remotely Monitoring User Interaction
  - 2.3 Applications of the Union of Remote Monitoring and Usability Logs
- 3 Methodology
  - 3.1 Study Objectives
  - 3.2 Instrumentation Evaluation Methodology
  - 3.3 Criteria for Evaluating Instrumentation Tools for Interactive Data Visualization
  - 3.4 Visualizing User Interaction with Sight.js
    - 3.4.1 Applying Sight.js to an Existing Web Visualization
    - 3.4.2 Insights into User Interaction in Splunk
- 4 Comparing Instrumentation Tools for Interactive Data Visualizations
  - 4.1 Evaluation Strategy in Brief
  - 4.2 Google Analytics
    - 4.2.1 Effort
    - 4.2.2 Versatility
    - 4.2.3 Data
    - 4.2.4 Performance
    - 4.2.5 Visualization
    - 4.2.6 Price
  - 4.3 Session Stack
    - 4.3.1 Effort
    - 4.3.2 Versatility
    - 4.3.3 Data
    - 4.3.4 Performance
    - 4.3.5 Visualization
    - 4.3.6 Interface
    - 4.3.7 Price
  - 4.4 Sight.js
    - 4.4.1 Effort
    - 4.4.2 Versatility
    - 4.4.3 Data
    - 4.4.4 Performance
    - 4.4.5 Visualization
    - 4.4.6 Interface
    - 4.4.7 Price
  - 4.5 Mixpanel
    - 4.5.1 Effort
    - 4.5.2 Versatility
    - 4.5.3 Data
    - 4.5.4 Performance
    - 4.5.5 Visualization
    - 4.5.6 Interface
    - 4.5.7 Price
  - 4.6 Numerical Evaluation of All Tools
- 5 Applying Sight.js to Gain Insight into User Interaction
  - 5.1 Visualizing User Interaction Logs with Splunk
  - 5.2 Applications of Visualized User Interaction Logs
- 6 Discussion
  - 6.1 Patterns Across Solutions
  - 6.2 Insights from Exploring the Design Space
  - 6.3 Benefits of a Visualization Specific Tool
- 7 Conclusions and Recommendations
  - 7.1 Impact
  - 7.2 Conclusion
- Appendix A: Splunk Visualizations from Sight.js Data
- Bibliography
LIST OF TABLES

- 4.1 Sample Google Analytics Export
- 4.2 Numerical Evaluation of Metrics

LIST OF FIGURES

- 1.1, 3.1, 3.2, 4.1-4.20, 5.1-5.4
- 5.5 Numerically represented counts of mouse events and the average absolute velocity of a user’s session
- 5.6 A breakdown of mouse actions and page events broken down by user sessions
- 5.7 The top ten most visited planets
- 5.8 Exoplanet radius and count of DOM Events

Appendix figures:

- 1 Manhattan distance dissimilarity function plotted against count of DOM events
- 2 A multiseries timechart of user interactions with the exoplanets visualization; the use of multiple series aids readability when compared to figure 5.3
- 3 Tracking of mouse movements and categorization by event type; this became the more robust implementation of figure 5.4
- 4 Pie charts of DOM events occurring on individual planets
1.1 User Interaction in the Information Age
With the advent of the information age, the amount of trackable user information is increasing past previously known limits. Large-scale data aggregation and analysis is becoming a very real, and sometimes essential, field of endeavour. Online services such as search engines, online retailers, and social media sites use the interactions of their users to tailor the experience on the platform. This application of the user feedback inherent in the user’s interaction is invaluable. Despite the widespread knowledge of this, many news websites and other companies that make visualizations on the web are not monitoring users’ interaction with web visualizations.
1.2 Examination of the Design Space
There are a few tools in the current design space which can potentially fill the role of combining remote user interaction logs and systemic evaluation. The tools Google Analytics, Session Stack, and Mixpanel are primarily centered around tracking websites in general, not visualizations in particular, but have the ability to track user interaction [4–6].
Google Analytics and Mixpanel seek to understand user engagement with a webpage and to determine user trends. Many applications of these tools center around which demographics engage with specific portions of the webpage. For example, a company might employ Google Analytics to determine how many users are interacting with a video on the page. For monetization purposes, or to determine what a user interacts with, these tools are very helpful. The difficulty arises when lower-level, more descriptive data is desired.
Session Stack was designed with a different philosophy, yet can accomplish similar goals. By recording user sessions on a webpage, allowing playback, and displaying a debug log, Session Stack allows for viewing of the user’s actual session, complete with generated errors or info-level data. Rather than relying on an amalgamation of user logs, Session Stack shows exactly what the user is doing. This application is well suited to the task of debugging webpages: seeing what went wrong and where.
These tools will be compared to the visualization specific Javascript library, Sight.js, and the efficacy of each tool to track user interaction will be evaluated. As a part of understanding the usefulness of considering user interaction logs to close the feedback loop, visualization of user interactions will be examined.
### 1.3 The Data Hierarchy of Visualizations on the Web
When looking at user interaction with a web visualization there are a number of data facets to consider. Functionally, the tracked data can be broken down into three distinct sections. This hierarchical structure of web visualization data is illustrated in figure 1.1.

**Figure 1.1:** The data hierarchy of web visualization.
The top layer is the data associated with the Document Object Model (DOM). This layer encompasses events like mouse movement and clicking. Basic interaction with a web visualization is captured here. It is possible to see what users are doing in this layer, but motives are largely indiscernible.
The middle layer holds the data associated with the Scalable Vector Graphic (SVG) elements. These elements are the shapes which make up web visualizations. The type of data points captured in this layer are the exact coordinates of an element on the page, the radius or length of the element, and the actual shape of the element. At this level of data, a more holistic picture of the user’s interaction is drawn. Limited understanding of user motives can be achieved at this point, correlating user interaction and shape.
The deepest layer contains the data bound to the SVG elements. In this layer, the SVG elements are no longer just shapes, but a representation of the underlying data. The addition of this layer fills out the flaws in the other two to form a gestalt. The user’s interaction with the web visualization reaches its clearest point. In the deepest layer, user interactivity is placed in complete context. This allows for educated inferences on user motives, tying together the visual cues of the SVG elements and the intellectual cues of the underlying data.
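To make the three layers concrete, the following sketch shows how a single log record could combine all of them. This is an illustration only, not the actual Sight.js API: `buildRecord` is a hypothetical helper, and in a browser its three arguments would come from a D3 event handler, where the bound datum is available alongside the SVG element.

```javascript
// Combine the three layers of the hierarchy into one log record.
// Layer 1: the DOM event (what the user did), layer 2: the SVG
// attributes (what shape they did it to), layer 3: the bound data
// (what that shape represents).
function buildRecord(domEvent, svgAttrs, boundDatum) {
  return {
    timestamp: domEvent.timestamp,
    event: { type: domEvent.type, x: domEvent.x, y: domEvent.y },  // DOM layer
    element: { shape: svgAttrs.shape, cx: svgAttrs.cx,
               cy: svgAttrs.cy, r: svgAttrs.r },                   // SVG layer
    datum: boundDatum                                              // data layer
  };
}

// Example: a mouseover on a circle representing an exoplanet
// (the datum values are illustrative).
const record = buildRecord(
  { type: "mouseover", x: 412, y: 230, timestamp: 1000 },
  { shape: "circle", cx: 410, cy: 228, r: 6 },
  { name: "Kepler-22b", radius: 2.4, discovered: 2011 }
);
```

With all three layers in one record, a later analysis can ask not just “where did the user click?” but “which planet was the user inspecting?”.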
1.4 Closing the Feedback Loop in Web Visualization
In order to evaluate the existing market solutions for tracking user interaction on the web, a set of criteria had to be developed. Each tool was ranked based upon the difficulty of its implementation, the flexibility of the data gathered, the level of data collected from the data hierarchy of web visualizations, the celerity with which the tool was able to complete its task, the robustness of any built-in visualization or analysis tools, the complexity of the interface, and the cost of the tool. These criteria were assessed through a standardized trial with the same visualization.
The two most important criteria were found to be the versatility of the tool and level of data gathered. Of all the metrics discussed, these two contribute the most to the task of understanding user interactions with web visualizations. Without the entire picture of DOM, SVG, and library level data, it is more likely that experts would misinterpret user motives, as they interact with visualizations. Furthermore, if the entirety of the collected data is locked within the tool used to capture it, the data can become effectively useless. Full analysis of user interaction falls outside the capabilities of any of the data collection tools examined. In order to achieve a full picture from the data, it was found that outside tools are required to form more robust visualizations and analyses.
In addition to evaluating existing tools, this project explored the impact of understanding user interactions with visualizations using the Sight.js library. On the same visualization used to evaluate existing solutions, a user was asked to interact for a brief period. The resulting interaction logs were explored, analysed, and visualized to show the potential information made possible by closing the feedback loop with online web visualizations.
Closing the feedback loop between visualization creator and consumer has broad reaching implications. Web visualizations can be designed to maximize user engagement. If there are portions of a visualization that are never interacted with or interaction features that are never
used, creators can take this into account resulting in interfaces explaining features better or the trimming of unnecessary features. Functionally, every visualization on the web could have a usability study performed with limited effort on the creator’s part. The potential improvement in visualizations as a whole is almost immeasurable.
2.1 Understanding User Interactions through Interaction Logs
Soliciting feedback from users and conducting studies to understand more fully how they interact with interfaces has been a staple of human-computer interaction studies since the advent of the field. A more recent development is trying to understand the user’s interactions from logs of their activity. Insights that can be gained have the potential to explain user behavior [7] and possibly some user characteristics [8, 9]. Analysis done in prior studies shows that user interests and methods of exploration can be gleaned, though not without some challenges [10, 11]. Even users themselves have found usefulness in viewing their past interactions [12]. While gaining understanding of single users is interesting, the significance of interpreting user interaction logs comes from the generalizations of large studies. The ability to understand, and possibly predict, user interaction patterns adds an additional layer of insight into how humans operate on the web [13–15]. Knowledge and understanding of cognitive patterns on the scale of the number of internet users is indispensable.
2.2 Remotely Monitoring User Interaction
Remote usability testing has been examined along with its potential effectiveness compared to conventional methods [16]. The most effective usability studies have traditionally been ones where researchers remain with the participants. The trade off in significantly lowering the barrier to entry in usability studies with users is a slight reduction in accuracy [16]. It is a natural progression to combine the economies of scale afforded by remote usability testing and user interaction logs, as millions of people use websites and interact with the robust visualizations found there.
Many current methods of monitoring user interactions over the web have been evaluated as inconsistent or incomplete [17]. There is much to understand about how users interact with website content, and standard web server logging is not sufficient to capture the nuances necessary for usability testing [18]. Even simple monitoring of a user's mouse has been shown to uncover actionable areas of improvement [19]. Remote, cohesive monitoring of user interactions has the potential to usher in large improvements in the field of data visualization.
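As a sketch of what even simple mouse monitoring can yield, the snippet below summarizes a session of raw (x, y, t) samples into event counts and an average absolute velocity, the kind of summary visualized later in figure 5.5. The function name and record shape are our own illustration, not part of any of the tools discussed.

```javascript
// Summarize a session of mouse samples: per-event-type counts plus the
// average absolute velocity (pixels per millisecond) between samples.
function summarizeSession(samples) {
  const counts = {};
  let distance = 0, elapsed = 0;
  for (let k = 0; k < samples.length; k++) {
    const s = samples[k];
    counts[s.type] = (counts[s.type] || 0) + 1;
    if (k > 0) {
      const p = samples[k - 1];
      distance += Math.hypot(s.x - p.x, s.y - p.y);  // path length between samples
      elapsed += s.t - p.t;                          // time between samples
    }
  }
  return { counts, avgVelocity: elapsed > 0 ? distance / elapsed : 0 };
}

const session = [
  { type: "mousemove", x: 0,  y: 0,  t: 0 },
  { type: "mousemove", x: 30, y: 40, t: 100 },  // 50 px travelled in 100 ms
  { type: "click",     x: 30, y: 40, t: 150 }
];
const summary = summarizeSession(session);
// summary.counts -> { mousemove: 2, click: 1 }; avgVelocity -> 50 / 150
```

In a browser, the `samples` array would be filled by `mousemove` and `click` listeners before being shipped to a backend such as Splunk for analysis.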
### 2.3 Applications of the Union of Remote Monitoring and Usability Logs
The addition of user interaction logs to novel visualization techniques can improve the process of academic studies. Being able to essentially automate feedback and usability studies goes a long way in evaluating the utility of new tools, in contrast to sending out hundreds of surveys [20]. However, the potential benefits are not limited to the academic sphere. The application of user interaction logs and remote usability testing can be a great boon to enterprise ventures [21, 22] and online news organizations alike [23, 24]. The basic supposition behind all of this research is that information about user interactions is not only useful but valuable, and that it should be applied widely.
3.1 Study Objectives
The principal purpose of this research is to determine the efficacy of existing tools at monitoring users’ interaction with web visualizations. In some respects, this project serves as a proof of concept that such interactions are both trackable and meaningful. The exploration of these tools aims to illuminate the design space, showing where improvements can be made and what features are necessities.
The analysis of user interaction logs performed in this project seeks to highlight the importance of a holistic view of user interaction. Honing a visualization based on user feedback is not a new concept; however, the fact that feedback can be gleaned from user interaction logs cannot be overstated. This research aims to demonstrate that expensive studies and time-consuming in-person interviews are not necessary to close the feedback loop. Rather, the insights that can be gained from user interaction logs can be effective in providing actionable feedback.
3.2 Instrumentation Evaluation Methodology
The study needed to approach the problem from the base level. The visualization used as a testbed needed nontrivial data associated with it, but also needed to be simple enough so as to not confuse the user with added features. The data of interest for this study was the captured data from users interacting with a visualization, not from exploring dropdown menus, filters, or other features common in visualizations. Additionally, implementing each of the tools evaluated in this study required access to the source code to add user interaction tracking, which contributed to the choice of visualization.
Each tracking tool was implemented on Lane Harrison’s exoplanets visualization shown in figure 3.1. The visualization was created using D3.js, and is hosted on an HTML webpage. The webpage contained exclusively this visualization to reduce excess visual clutter. This environment mimics the key components of an online web visualization, and the visualization itself holds non-trivial data.
The circles shown in the visualization represent exoplanets. The circles as SVG elements are described by their radius, horizontal position in the SVG, and vertical position in the SVG. Bound to these circles is exoplanet data. Each exoplanet is defined by the exoplanet’s radius, name, atmosphere, distance to the nearest star, and the year it was discovered. When a circle is
hovered over by the mouse, the corresponding exoplanet data appears at the top left corner.
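The structure described above can be sketched as follows; the planet values, tooltip element id, and handler wiring are illustrative assumptions, not the visualization's actual source:

```javascript
// Sketch: exoplanet data bound to SVG circles, with a hover handler that
// shows the bound datum in the top left corner. Values are illustrative.
const planets = [
  { name: 'WASP-19 b', radius: 15.8, distance: 0.27, year: 2009 },
  { name: 'Kepler-7 b', radius: 17.8, distance: 0.78, year: 2009 },
];

// Pure helper: the text shown at the top left corner on hover.
function tooltipText(p) {
  return p.name + ': radius ' + p.radius + ', distance ' + p.distance +
         ', discovered ' + p.year;
}

// Browser-only wiring (skipped outside a browser environment).
if (typeof document !== 'undefined') {
  document.querySelectorAll('svg circle').forEach(function (circle, i) {
    circle.addEventListener('mouseover', function () {
      document.getElementById('tooltip').textContent = tooltipText(planets[i]);
    });
  });
}
```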
Sessions for evaluation ranged between twenty and thirty seconds in length. Throughout these sessions, multiple exoplanets were interacted with by mouse movement and mouse clicking. The tracking tools were then evaluated based on seven metrics.
3.3 Criteria for Evaluating Instrumentation Tools for Interactive Data Visualization
The following criteria give insight into the usefulness of the tool, how and when it should be applied, as well as its strengths and weaknesses. The task of each tool was to convey as much user information as possible from a twenty to thirty second session of interaction with the exoplanets visualization shown in figure 3.1. The garnered data was evaluated as well. Criteria are rated on a scale from one to ten, with higher numbers corresponding to a better performance from the tool.
**Effort.** The effort associated with a particular tool corresponds to the difficulty in setting up a working implementation. Every tool requires access to the source code of the webpage, and as a result, some level of domain specific knowledge. This measure is therefore more comparative than some of the other criteria.
**Versatility.** This measure takes into account how accessible and usable the data becomes once gathered by the tool. Methods of storing data, whether the data is serializable, and the difficulty incorporated in transferring the data for use in other applications contribute to this score.
**Data.** The data criterion is evaluated based on the hierarchy of data in a web visualization. The scope of this project is only concerned with how users interact with domain elements, SVG elements, and the underlying data. Other data points such as user demographics, locations, or referrals fall outside the aim of this project and are excluded from this evaluation.
**Performance.** Performance in a tool is an integral part of its daily use. Here, performance is associated with the amount of time taken to process large data, the amount of overhead innate in the tool, and the delay from data generation by the user to being able to access the data. It is worth clarifying that a higher value for performance means better performance, lower delays, and less overhead.
**Visualization.** Many of the tools have built-in visualization creation tools, the robustness of which influences this metric. While useful, the visualization capabilities of these tools pale in comparison to dedicated visualization libraries and software. As a result, this metric becomes more important in tools with lower versatility and less essential in tools that integrate well with libraries and applications.
**Interface.** An interface has the capability to make a tool significantly more user friendly. Considerations in this category are the number of options available to the user, the complexity and accompanying visual noise, and how well the interface aids users in their task. The goal of this metric is to identify whether the interface needlessly adds complexity to the detriment of the tool's use.
**Price.** As many of the solutions being compared are created by companies, price becomes a fairly important factor. The level of features provided by the free version of the application, as well as how costly the tool can actually become, should be kept in mind.
### 3.4 Visualizing User Interaction with Sight.js
While the different interaction tracking tools were evaluated, a deep dive was made into the information that user interaction logs on a web visualization could give. The data was gathered from simply recording a brief session of a user interacting with the exoplanets visualization. To give the greatest amount of control over the data, Sight.js was used to track the user's interactions.
#### 3.4.1 Applying Sight.js to an Existing Web Visualization
A handful of user sessions were recorded, and the implementation of Sight.js became iterative. Each time user interaction logs were generated, new ideas to improve the gathered data occurred. Session identification numbers, timestamps, mouse position, and mouse velocity were added over successive sessions. Each data field added a new layer of analysis that could be performed. Each session exported a .json file containing up to one thousand individual logs for sessions less than a minute long. Some of these logs were events triggered on the webpage's body or at the moment the webpage was loaded. These logs were largely ignored, not due to lack of usefulness, but to keep the focus of the analysis on the exoplanets.
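The velocity fields in such a log can be derived from successive events. A sketch follows; the formula is an assumption about how a velocity field could be computed, and the field names follow the Sight.js sample output in figure 4.13:

```javascript
// Build a log record from the previous and current mouse events,
// deriving velocity from the change in position over elapsed time.
function makeLog(prev, curr, sessionID) {
  const dt = (curr.time - prev.time) / 1000; // seconds between events
  const vx = (curr.x - prev.x) / dt;         // pixels per second
  const vy = (curr.y - prev.y) / dt;
  return {
    sessionID,
    time: curr.time,
    'x-position': curr.x,
    'y-position': curr.y,
    'x-velocity': vx,
    'y-velocity': vy,
    'abs-velocity': Math.hypot(vx, vy),
    type: curr.type,
  };
}

// Example: two mousemove samples 100 ms apart.
const log = makeLog(
  { time: 1000, x: 0, y: 0 },
  { time: 1100, x: 30, y: 40, type: 'mousemove' },
  'jBzpn6808VweCrksAAAA'
);
// abs-velocity is 500 pixels per second (a 3-4-5 triangle over 0.1 s).
```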
#### 3.4.2 Insights into User Interaction in Splunk
Splunk is a log aggregation and visualization tool which uses a query language to search, transform, and visualize data. For this study, the .json files from Sight.js were uploaded into Splunk and subsequently visualized. A sample of the Splunk interface is shown in figure 3.2.
Some of the fields added to Sight.js proved less valuable than initially thought. While session identification numbers are useful, this study did not involve enough user sessions to necessitate identification in that manner, so they added minimal value in this specific instance. Additionally, the implementation of mouse velocity did not end up aiding significantly in meaningful analysis.
A number of the fields did provide interesting insights into the user's session. The timestamp allowed a timeline of the user's session to be created, showing actions in the order they occurred. Through some deduction, the amount of time a user spent on a single element could be found, a potential indicator of an interesting data point. Tracking the mouse position when DOM events fired allowed for a trace of the user's path through the visualization.
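The dwell-time deduction mentioned above can be sketched as a single pass over the timestamped logs. This is an assumed simplification, not the study's actual query: a stay runs from a mouseover on an element to the next mouseout on the same element.

```javascript
// Accumulate milliseconds spent on each element from timestamped logs.
function dwellTimes(logs) {
  const open = {};   // element class -> timestamp of the open mouseover
  const totals = {}; // element class -> accumulated milliseconds
  for (const e of logs) {
    if (e.type === 'mouseover') {
      open[e.className] = e.time;
    } else if (e.type === 'mouseout' && open[e.className] !== undefined) {
      totals[e.className] = (totals[e.className] || 0) + (e.time - open[e.className]);
      delete open[e.className];
    }
  }
  return totals;
}

const totals = dwellTimes([
  { type: 'mouseover', className: 'WASP-19 b',  time: 0 },
  { type: 'mousemove', className: 'WASP-19 b',  time: 400 },
  { type: 'mouseout',  className: 'WASP-19 b',  time: 900 },
  { type: 'mouseover', className: 'Kepler-7 b', time: 1200 },
  { type: 'mouseout',  className: 'Kepler-7 b', time: 1500 },
]);
// totals is { 'WASP-19 b': 900, 'Kepler-7 b': 300 }
```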
The visualizations created in Splunk evolved as more interactions were logged. Some visualizations leaned toward numerical analyses such as averages, counts of occurrences, and maximum values. Others looked for a way to mesh what the user may have been thinking with the reality of the interaction logs.
4.1 Evaluation Strategy in Brief
Each of the following tools for tracking user interaction with web visualizations was implemented using the methodology laid out in the previous chapter. The tools were evaluated based on their performance with a twenty to thirty second interaction session on the exoplanets visualization. The principal criteria judged were the difficulty in setting up the tool, the flexibility of the data from the tool, the level of data reporting, the celerity of the tool in performing the task, the built-in visualization tools, and the price.
Figure 4.1: Google Analytics’ Logo [1].
4.2 Google Analytics
Google Analytics is a web analytics tool provided by Google as a means for customers to track user events and demographics. The primary aim of this tool is to track how users interact with the webpage as a whole and how that interaction relates to other pages.
4.2.1 Effort
Google Analytics requires the insertion of two Javascript snippets into the head of the webpage to be tracked. One snippet provides identification for Google Analytics to find the webpage. The other provides event sending functionality. Google Analytics can be instantiated using only the identification code, as general information, such as the number of concurrent users and their location, can still be tracked. Both of these snippets are displayed in figure 4.2.
```html
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-123456789-1"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'UA-123456789-1');
</script>
```
**Figure 4.2:** The Google Analytics code present in the webpage head.
In order to track events in a visualization, each SVG element requires an event listener. In the event listener, a function must be called which sends a Google Analytics object to the Google Analytics instance as shown in figure 4.3. This functionality is not available unless both Javascript snippets are present in the head of the webpage.
```javascript
// Inside each SVG element's event listener: send a Google Analytics event
// object describing the interaction. Here eventType is the DOM event name
// (e.g. 'mouseOver') and planetName is the exoplanet bound to the element;
// both variables are illustrative placeholders.
gtag('event', eventType, {
  'event_category': 'exoplanets',
  'event_label': planetName
});
```
**Figure 4.3:** The Google Analytics code to send a Google Analytics object.
4.2.2 Versatility
Google Analytics can export data into .pdf, .csv, .xlsx, and Google Sheets format. When exported, the data is serializable, which enables the bulk viewing of many data points. However, the data
is aggregated based on the data fields or time, which can make some avenues of analysis either difficult or impossible to pursue.
Output from the exoplanet visualization events is shown in table 4.1. Notice that the export does not denote types of events or labels, though that data is available for analysis using Google Analytics' built-in visualizations.
Table 4.1: Sample Google Analytics Export.
<table>
<thead>
<tr>
<th>Hour Index</th>
<th>Total Events</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>0</td>
</tr>
<tr>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>0</td>
</tr>
<tr>
<td>7</td>
<td>0</td>
</tr>
<tr>
<td>8</td>
<td>131</td>
</tr>
<tr>
<td>9</td>
<td>0</td>
</tr>
<tr>
<td>10</td>
<td>0</td>
</tr>
<tr>
<td>11</td>
<td>0</td>
</tr>
<tr>
<td>13</td>
<td>0</td>
</tr>
<tr>
<td>14</td>
<td>0</td>
</tr>
<tr>
<td>15</td>
<td>0</td>
</tr>
<tr>
<td>16</td>
<td>0</td>
</tr>
<tr>
<td>17</td>
<td>0</td>
</tr>
<tr>
<td>18</td>
<td>0</td>
</tr>
<tr>
<td>19</td>
<td>0</td>
</tr>
<tr>
<td>20</td>
<td>0</td>
</tr>
<tr>
<td>21</td>
<td>0</td>
</tr>
<tr>
<td>22</td>
<td>0</td>
</tr>
<tr>
<td>23</td>
<td>0</td>
</tr>
</tbody>
</table>
4.2.3 Data
With respect to the data hierarchy of web visualization, Google Analytics captures the top layer, but merely brushes the other two. DOM events can be captured, but only a small portion of SVG data or the underlying data can be seen. This is the result of the Google Analytics object used to send the data. With a limited number of fields and labels, not all of the desired data can be sent in one event.
Samples of data gathered from the exoplanets visualization are shown in figures 4.4 and 4.5. In this instance, the event label is the name of the planet. Due to the nature of the Google Analytics object, only one field of either SVG data or underlying data can effectively be sent.
4.2.4 Performance
As Google Analytics was built with measuring large amounts of traffic in mind, it handles large influxes of events well and loading times are generally short. There is typically a delay between events firing on the webpage and the events appearing in Google Analytics, but other tracking features do appear in real-time.
4.2.5 Visualization
Google Analytics has a built-in visualization tool, though its capabilities are limited. Event data can be visualized with line charts, bar charts, pie charts, and pivot tables. Sample visualizations from the Google Analytics tool are shown in figure 4.6.
4.2.6 Interface
The Google Analytics interface is shown in figure 4.7. Many options are provided to the user, making the view visually busy at times.
<table>
<thead>
<tr>
<th>Event Label</th>
<th>Total Events</th>
<th>% Total Events</th>
</tr>
</thead>
<tbody>
<tr>
<td>WASP-19 b</td>
<td>23</td>
<td>17.56%</td>
</tr>
<tr>
<td>WASP-66 b</td>
<td>14</td>
<td>10.69%</td>
</tr>
<tr>
<td>Kepler-13 b</td>
<td>9</td>
<td>6.87%</td>
</tr>
<tr>
<td>WASP-78 b</td>
<td>7</td>
<td>5.34%</td>
</tr>
<tr>
<td>Kepler-7 b</td>
<td>6</td>
<td>4.58%</td>
</tr>
<tr>
<td>POT9-1 b</td>
<td>5</td>
<td>3.82%</td>
</tr>
<tr>
<td>XO-5 b</td>
<td>5</td>
<td>3.82%</td>
</tr>
<tr>
<td>GJ 3021 b</td>
<td>4</td>
<td>3.05%</td>
</tr>
<tr>
<td>HD 60532 c</td>
<td>4</td>
<td>3.05%</td>
</tr>
<tr>
<td>WASP-82 b</td>
<td>4</td>
<td>3.05%</td>
</tr>
</tbody>
</table>
Figure 4.4: The Google Analytics event label table.
<table>
<thead>
<tr>
<th>Event Action</th>
<th>Total Events</th>
<th>% Total Events</th>
</tr>
</thead>
<tbody>
<tr>
<td>mouseMove</td>
<td>76</td>
<td>58.02%</td>
</tr>
<tr>
<td>mouseOut</td>
<td>25</td>
<td>19.08%</td>
</tr>
<tr>
<td>mouseOver</td>
<td>25</td>
<td>19.08%</td>
</tr>
<tr>
<td>mouseClick</td>
<td>5</td>
<td>3.82%</td>
</tr>
</tbody>
</table>
Figure 4.5: The Google Analytics action table.
4.2.7 Price
Google Analytics has a free version which was used for this project. The paid version primarily provides increased integration with other applications to monitor monetization of the webpage and increased data storage. The price for Google Analytics 360 is $150,000 USD a year [25].
4.3 Session Stack
Session Stack is a web debugging tool that records user sessions, providing insight into how users interact with a webpage by allowing the session to be played back in full.
4.3.1 Effort
Session Stack requires a Javascript snippet in the head of the webpage in order to track and record the user’s session, seen in figure 4.9. In addition to this code snippet, to gain detailed data on the events occurring on the visualization, each SVG element must have an event listener which sends a log containing data, seen in figure 4.10. These are the only two additions to the code required to implement Session Stack.
```javascript
// Session Stack code: appends the Session Stack script to the webpage head;
// recording begins once the script loads. The script URL and account token
// below are placeholders for the values Session Stack provides.
(function (scriptUrl, token) {
  var c = document.createElement('script');
  c.src = scriptUrl;
  c.setAttribute('data-stack', token);
  c.onload = function () {
    // Session recording is active from this point.
  };
  document.head.appendChild(c);
}('SESSION-STACK-SCRIPT-URL', 'ACCOUNT-TOKEN'));
```
Figure 4.9: The Session Stack code present in the webpage head.
Figure 4.10: The Session Stack code present in the event listeners.
4.3.2 Versatility
Session Stack stores its sessions as HTML files. The session consists of a video and a log of events. The logs cannot be separated from the video natively. As a result, the data is not serializable, and to gain insight into user interaction each user session must be watched one at a time. The session can be downloaded as an HTML file, but the data is not easily parsable by other conventional analysis tools.
4.3.3 Data
The data captured by Session Stack spans the entirety of the data hierarchy of web visualization. It is worth noting that, in the event listener found in figure 4.10, D3 objects can be sent. DOM events, SVG data, and the underlying data can all be captured by the tool; however, the limited versatility greatly impacts the use of this data.
4.3.4 Performance
Much of the overhead associated with Session Stack is attributable to its uniqueness. No other tool allows for the actual viewing of the user's session; however, sending a screen capture requires more data than the other tools need to achieve a similar end. Additionally, when viewing a user's session, a large influx of events can cause stuttering. Even so, the delay between the user's session and its being viewable from Session Stack was merely seconds.
4.3.5 Visualization
Session Stack does not have a built-in visualization tool.
4.3.6 Interface
The interface of Session Stack is unique. A searchable log of events appears on the left side of the screen, while the user's session video plays on the right. Session Stack also allows for viewing all events that have occurred across all sessions. Both of these interfaces are shown in figures 4.11 and 4.12 below.
4.3.7 Price
Session Stack charges based on the number of sessions per month. This project used the free version, which allows for less than one thousand sessions a month. Prices increase to $99, $199, $399, and $599 for ten thousand, twenty-five thousand, one hundred thousand, and two hundred fifty thousand sessions per month, respectively [26].
4.4 Sight.js
Sight.js is a Javascript library being created by Lane Harrison for the purpose of extracting user interaction data for web visualizations.
4.4.1 Effort
As a library, Sight.js requires an import into the webpage in order to function. Like the other tracking tools, Sight.js utilizes event listeners on SVG elements to gather data, but rather than calling functions to send the data to a server, Sight.js exports locally.
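Since the Sight.js source is not reproduced here, the following is only a sketch of the general pattern just described, not Sight.js's actual API: listeners append records to a buffer, which is exported locally as a .json file rather than sent to a server. Field names follow the Sight.js sample output.

```javascript
// In-memory buffer of interaction records.
const buffer = [];

// Called from each SVG element's event listener with the DOM event and
// the datum bound to the element.
function record(event, boundData) {
  buffer.push(Object.assign(
    { time: Date.now(), type: event.type },
    boundData
  ));
}

// Serialize the buffer; in a browser, also trigger a local .json download.
function exportLogs() {
  const json = JSON.stringify(buffer);
  if (typeof document !== 'undefined') {
    const a = document.createElement('a');
    a.href = URL.createObjectURL(new Blob([json], { type: 'application/json' }));
    a.download = 'session-logs.json';
    a.click();
  }
  return json;
}

// Example with a synthetic event object (no browser required):
record({ type: 'mousemove' }, { className: 'HD 4203 b', radius: '13.53' });
```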
4.4.2 Versatility
Sight.js exports all of the collected data to a .json file. This format allows for easily serializable data that can be read by most third party libraries or applications. The size of the file has the potential to become unwieldy as a direct result of the sheer number of events collected, which is worth bearing in mind when deciding on events to track.
4.4.3 Data
Sight.js reaches each layer of the data hierarchy of web visualizations. DOM events, SVG data, and the underlying data are all captured. A sample of the data output by Sight.js is shown in figure 4.13.
```json
{"sessionID":"jBzpn6808VweCrksAAAA",
"time":1512689700977,
"x-position":544,
"y-position":441,
"x-velocity":544,
"y-velocity":441,
"abs-velocity":708.2977937991808,
"type":"mousemove",
"packageName":"planets",
"className":"HD 4203 b",
"radius":"13.53",
"distance":"0.99",
"year":"2001",
"atmosphere":"hydrogen-rich",
"depth":1,
"value":183.06089999999998,
"r":16.06321449721978,
"x":406.31390787125713,
"y":126.2663301677847,
"element":"circle"},
```
Figure 4.13: A sample interaction log generated by Sight.js.
4.4.4 Performance
Being a Javascript library, Sight.js has little overhead. The data file is generated with a very short delay. The amount of data has little effect on the performance of Sight.js, as it does not need to read or visualize the data.
4.4.5 Visualization
Sight.js does not have a built-in visualization tool.
4.4.6 Interface
Sight.js does not have an interface.
4.4.7 Price
Sight.js does not currently have a price.
4.5 Mixpanel
Mixpanel is a business analytics tool aimed at tracking user interaction with webpages, then visualizing the resulting patterns and behaviors.
4.5.1 Effort
Mixpanel requires only two Javascript additions to a webpage to function. The first is a Javascript snippet in the head of the webpage providing tracking information, demonstrated in figure 4.15.
Figure 4.14: Mixpanel’s Logo [3].
The second addition required is an event listener on each SVG element to be tracked. This event listener calls a function which sends key-value pairs of labels and data to Mixpanel as seen in figure 4.16.
Figure 4.16: A sample code snippet present in the event listeners of the webpage for Mixpanel.
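The call figure 4.16 describes can be sketched as follows. The `mixpanel` object below is a stub that records calls so the sketch is self-contained; the real client's `mixpanel.track(eventName, properties)` call has the same shape. The property names are illustrative, drawn from the exoplanet data.

```javascript
// Stub standing in for the real Mixpanel client.
const sent = [];
const mixpanel = { track: (name, props) => sent.push({ name, props }) };

// Send one interaction: each field must be labelled explicitly as a
// key-value pair so that it is filterable in Mixpanel.
function trackPlanetEvent(action, d) {
  mixpanel.track(action, {
    planet: d.name,
    radius: d.radius,
    distance: d.distance,
    year: d.year,
  });
}

// As it would be used from a mouseover listener on a circle element:
trackPlanetEvent('mouseOver', {
  name: 'HD 4203 b', radius: '13.53', distance: '0.99', year: '2001',
});
```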
4.5.2 Versatility
Mixpanel can export data to a .csv file, which makes it easily incorporated to most third party libraries and applications for further analysis. The exported file provides a row for each event, making it serializable as well.
4.5.3 Data
Mixpanel is able to track each layer of the data hierarchy of web visualizations. DOM events, SVG data, and the underlying data can all be sent to the Mixpanel instance. However, each data field has to be individually recorded in a key-value pair. This requires each field of the underlying data to be manually labeled in the function that sends the data to Mixpanel. While not impossible, with large data, this has the potential to become an impractical task.
4.5.4 Performance
Mixpanel is able to handle a large number of events at one time, and does so with very little delay. With only one Javascript snippet in the head of the webpage, the overhead on the webpage itself is not very large.
4.5.5 Visualization
By far, Mixpanel has the most robust built-in visualization tool. It allows for grouping by multiple fields in bar charts, line charts, and tables. Examples of these visualizations are shown in figures 4.17, 4.18, 4.19.
4.5.6 Interface
Mixpanel’s interface has a moderately simple design, offering the user a lot of functionality with limited visual clutter. The interface is shown in figure 4.20.
4.5.7 Price
Mixpanel offers a free version, which was used for this project. The paid version allows for more data storage, additional features pertaining to the built-in visualizations, and the ability to export the data to .csv. The price for the StartUp package is $999 a year, and the Enterprise package price is customizable [27].
4.6 Numerical Evaluation of All Tools
After an analysis of the evaluation criteria, numerical values were assigned for each tool's performance in the corresponding category. These numbers serve as a comparative measure to provide a more concrete reference point to distinguish the strengths, weaknesses, and capabilities of each of the tools. The rationale behind the numerical assignments of each category has been outlined in the previous section, and should be kept in mind when viewing the comparative measures found in table 4.2.
Figure 4.19: A line chart of events grouped by planet over time.
Figure 4.20: A portion of the Mixpanel interface.
This study does not posit that a particular tool is the definitively superior tool. Instead, this study seeks to inform the reader of the capabilities and shortfalls of the evaluated tools, to enable informed decisions. Each tool was constructed for a specific purpose, which influenced the design decisions of its creators, and ultimately the space in which the tool excels.
<table>
<thead>
<tr>
<th>Evaluation Metric</th>
<th>Google Analytics</th>
<th>Session Stack</th>
<th>Sight.js</th>
<th>Mixpanel</th>
</tr>
</thead>
<tbody>
<tr>
<td>Effort</td>
<td>5</td>
<td>4</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>Versatility</td>
<td>4</td>
<td>3</td>
<td>7</td>
<td>7</td>
</tr>
<tr>
<td>Data</td>
<td>4</td>
<td>6</td>
<td>7</td>
<td>6</td>
</tr>
<tr>
<td>Performance</td>
<td>6</td>
<td>3</td>
<td>7</td>
<td>6</td>
</tr>
<tr>
<td>Visualization</td>
<td>6</td>
<td>N/A</td>
<td>N/A</td>
<td>7</td>
</tr>
<tr>
<td>Interface</td>
<td>6</td>
<td>7</td>
<td>N/A</td>
<td>7</td>
</tr>
<tr>
<td>Price</td>
<td>4</td>
<td>7</td>
<td>N/A</td>
<td>5</td>
</tr>
</tbody>
</table>
**Table 4.2: Numerical Evaluation of Metrics**
5.1 Visualizing User Interaction Logs with Splunk
The following samples of visualizations of user interaction with the exoplanets visualization were produced using the interaction data recorded by Sight.js together with Splunk, a log aggregation and visualization tool. Of the tools explored here, only Sight.js and Mixpanel have the ability to produce these visualizations, as they track all three data layers and have the versatility to export a file which can be used with other software or libraries to increase their effectiveness.
The table shown in figure 5.1 displays the data from the exoplanets visualization. All three layers of data are represented here: mouse events from the DOM, the locations of circles in the SVG, and the exoplanet data.

Early visualizations of the user interaction data focused on the physical location of the user's mouse. Interaction was being tracked, but very few meaningful conclusions could be drawn from these visualizations. One center of focus was the frequency with which mouse coordinates were visited, as shown in figure 5.2, as a way to see whether users interacted with the visualization in straight-line patterns.
Figure 5.2: One of the initial visualizations, mouse X and Y coordinate frequencies.
One of the more enduring visualizations was figure 5.3. This visualization shows to some extent the amount of time the user spent either on or between elements. The disparity between the number of mouse moves and other mouse actions can make this visualization somewhat tricky to read, as all the data falls on the same axis.
Figure 5.3: A timechart of actions the user took.
The concept of tracing the user's path came fairly early on as well. Initial implementations of this visualization, like the one shown in figure 5.4, were a vertical reflection of the user's actual interaction. On an SVG element, the coordinate (0, 0) is at the top left corner. As the user moves from left to right, the x coordinate increments; as the user moves from top to bottom, the y coordinate increments. Since the Splunk visualization's origin is placed in the bottom left corner, the resulting visualization traced a mirrored image of the path of the mouse. Once this error was noticed, it was fixed in future iterations.
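The fix amounts to a single coordinate flip before plotting. A sketch follows; the point format and the `svgHeight` parameter are assumptions, not the study's actual code:

```javascript
// Convert an SVG-space point (origin top left, y grows downward) into
// plot-space (origin bottom left, y grows upward) by flipping y.
function toPlotCoords(point, svgHeight) {
  return { x: point.x, y: svgHeight - point.y };
}

// A point near the top of a 500px-tall SVG ends up near the top of the plot.
const p = toPlotCoords({ x: 120, y: 30 }, 500);
// p is { x: 120, y: 470 }
```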
An exploration was made into the relationship between statistics and the user's interaction. A few different methods of representing the overall count of mouse events were tried. To obtain a clearer idea of the exact value for each event, the numbers themselves were displayed, avoiding the issues that plagued the visualization in figure 5.3. Sparklines also proved to be an interesting addition to the simple number representation. In a similar vein, the average absolute velocity of the mouse was captured and represented numerically, matched with its respective session number. Examples of these numerical findings are shown in figure 5.5.
User sessions can also be compared to one another as a result of session identification numbers. Users with high levels of interaction can then have their sessions examined, seeing where the points of interest are. Users with low levels of interaction can then be compared, to determine if the same elements are interacted with. Comparisons like these could be used to categorize user
interests and initiate the process to raise engagement of low interaction users. An overview of user interactions broken down by sessions is shown in figure 5.6.
The theme of visualizations ended up shifting toward exposing interesting occurrences of user interaction, hoping to explain why the user acted the way that they did. An essential piece of that understanding is the inclusion of the underlying data into the visualizations. Rather than focusing exactly on where the mouse cursor was, attention was paid to the element the user had interacted with. By using all three levels present in the web visualization data hierarchy, figure 5.7 is able to show the most visited exoplanets. This is achieved by establishing the sequence of
a mouse over, any number of mouse moves, and then a mouse out as a unique visit to a planet. The exoplanet names are sent in the interaction logs when a user visits one of the circle SVG elements. Without all three data layers, this deep analysis is not possible.
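The visit definition above can be sketched directly over the logs. The field names follow the Sight.js output; the matching logic is an assumed simplification of the study's actual query:

```javascript
// Count unique visits per planet: a mouseover, any number of mousemoves,
// then a mouseout on the same planet counts as one visit.
function countVisits(logs) {
  const visits = {};
  let current = null; // planet currently hovered, if any
  for (const e of logs) {
    if (e.type === 'mouseover') {
      current = e.className;
    } else if (e.type === 'mouseout' && current === e.className) {
      visits[current] = (visits[current] || 0) + 1;
      current = null;
    }
  }
  return visits;
}

const visits = countVisits([
  { type: 'mouseover', className: 'WASP-19 b' },
  { type: 'mousemove', className: 'WASP-19 b' },
  { type: 'mouseout',  className: 'WASP-19 b' },
  { type: 'mouseover', className: 'WASP-19 b' },
  { type: 'mouseout',  className: 'WASP-19 b' },
]);
// visits['WASP-19 b'] is 2
```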

**Figure 5.7: The top ten most visited planets.**
A number of other interesting features arose from the data. Figure 5.8 compares the radius of an exoplanet to the number of events fired.

**Figure 5.8: Exoplanet radius and count of DOM Events.**
In theory, if all exoplanets were of equal interest, exoplanets with larger radii would have more events occurring as a direct result of their larger surface area. In this user's session, that was not the case. This raises the question: why was the user's most interacted-with planet not the largest? There is an entire range of potential answers. Perhaps the user was learning how the visualization worked by using that particular planet, or perhaps it was coincidence. From a single session, no concrete conclusions can be drawn. However, the large-scale studies that user interaction logs enable have the potential to widen understanding of what users find interesting and engaging.
5.2 Applications of Visualized User Interaction Logs
The increase in visibility of user trends through the use of visualization allows for an easier understanding of data points that are of interest for users. The interpretations gained from
viewing user interaction can be used to tailor visualizations to specific audiences or demographics. Actionable goals to improve visualizations on the web can be made and implemented. Further examples of visualizations of user interactions can be found in Appendix A.
6.1 Patterns Across Solutions
When different people approach the same task, there are bound to be differences in the methods employed to accomplish it. This design space was no exception. Despite this, there were a large number of similarities in the approaches taken by these tools. For both Google Analytics and Mixpanel, the goal is to track user interaction on the page in general and to find trends among users. Session Stack has a different focus, tracking the way users interact with a page to see what went wrong. Sight.js opts for specifically tracking user interaction within visualizations. These foci influenced the implementation of each tool.
Across the different tools, similar themes occurred. Each tool aside from Sight.js had a web interface. This increases usability for those who are not domain experts, but increases the complexity of implementation, which requires bouncing back and forth between the webpage code and the web interface. Another common thread across Google Analytics and Mixpanel was built-in visualization creation in the interface. A built-in visualization tool is helpful for simple and quick analysis of the data; however, such tools pale in comparison to dedicated visualization tools. As a result, to truly dig into the user's interaction data, it becomes necessary to export the data from the tracking tool used. This flexibility in how the data is used became an essential piece in understanding the user's motives, which not every tool provided.
Each of the tools explored used event listeners to gain a clearer picture of the user's interaction. The three market solutions require functions that send the data to external servers. These three functions accepted different types of data: a predefined object, any number of key-value pairs, and any single object. At first glance, these three parameter styles do not seem to cause much divergence. However, the ease with which they convey SVG and library-level data varies greatly.
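To make the contrast concrete, the following is a minimal sketch of an event-listener callback that packages the three layers of data into one serializable record. It is not taken from any of the tools above: `makeTracker` and `sendToServer` are hypothetical names, and `element` is a plain object standing in for a DOM/SVG node.

```javascript
// Hypothetical sketch: wrap a vendor send function (a stand-in for
// ga('send', ...), mixpanel.track(...), or similar) in a handler that
// captures all three data layers in one serializable record.
function makeTracker(sendToServer) {
  return function onInteraction(eventType, element) {
    const record = {
      dom: { type: eventType, tag: element.tagName }, // DOM level
      svg: element.attributes || {},                  // SVG level
      data: element.__data__ || null                  // library (D3-bound) level
    };
    sendToServer(record);
    return record;
  };
}
```

Because the record is a single plain object, it can be serialized as-is, which matters for the large-scale analysis discussed later.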
Google Analytics' predefined object allowed only a limited set of fields to be sent. In the study, the Event Label field of the Google Analytics object was used to send the exoplanet's name, but conveying other data was either not possible or muddled the picture by placing data where it did not necessarily belong. Mixpanel took a different approach, having data sent in key-value pairs so that it would be filterable. This makes it possible to collect DOM, SVG, and library-level data; however, the person implementing Mixpanel needs prior knowledge of what data is desired. Each individual field can be sent with a label, but this adds either an additional layer of decision making or more work labelling every field. Session Stack was able to send the SVG object in question along with the bound data in a single object. This allows all the data to be sent at once, rather than field by field. Sight.js uses the SVG object the same way, although it does not send it to an external server. This difference in data conveyance goes a surprisingly long way in easing implementation and improving the insights possible. Further tools developed in this space need to be able to send the SVG object with bound data, without defining each field to be sent, in order to be most effective.
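As a sketch of the key-value style, the helper below (a hypothetical `flatten`, not Mixpanel's API) turns a nested bound-data object into labelled key-value pairs of the kind a Mixpanel-style call expects, illustrating the extra labelling work the single-object style avoids:

```javascript
// Hypothetical sketch: flatten a nested bound datum into labelled
// key-value pairs, as a track(eventName, properties) style call requires.
function flatten(obj, prefix = '') {
  const pairs = {};
  for (const [key, value] of Object.entries(obj)) {
    const label = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === 'object') {
      Object.assign(pairs, flatten(value, label)); // recurse into nested fields
    } else {
      pairs[label] = value;
    }
  }
  return pairs;
}
```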
6.2 Insights from Exploring the Design Space
As a result of this study, a new awareness of the implications of how user interaction tracking is implemented was achieved. At the forefront was the potential value of automatic instrumentation. Lowering the barrier to entry for obtaining feedback from users, while increasing the quantity of that feedback, increases the likelihood of its use. The implication is that every visualization on the web has the potential to practically have its own usability study, so that it can be incrementally improved over its lifespan.
Another interesting insight is the potential use as a mouse tracker to examine how users explore a webpage. A user's path through a visualization could be traced and recreated, so in theory a similar method could be employed to track webpage traversal. While mouse tracking is not a new concept, binding it with increased visibility into user interaction and data could provide an attractive pairing for web developers.
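A minimal sketch of this idea (the names are illustrative, not from any of the tools studied) records timestamped mouse positions so that a path can later be replayed or measured:

```javascript
// Illustrative sketch: record timestamped mouse positions so a user's
// path can be replayed in order or summarized (e.g., total distance).
function makePathRecorder() {
  const points = [];
  return {
    record(x, y, t) { points.push({ x, y, t }); },
    // Replay in recorded order, invoking a callback per point.
    replay(step) { points.forEach(p => step(p)); },
    // Total Euclidean distance travelled along the path.
    length() {
      let d = 0;
      for (let i = 1; i < points.length; i++) {
        d += Math.hypot(points[i].x - points[i - 1].x,
                        points[i].y - points[i - 1].y);
      }
      return d;
    }
  };
}
```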
6.3 Benefits of a Visualization Specific Tool
Information about how a user interacts with a web visualization has limited usefulness when viewed as just a list of events or fields. It is a natural progression to take the gathered interaction data and visualize it in some manner. As a result, the most important metrics of these interaction tracking tools are their versatility and the quality of their data collection. The entire purpose of this design space is to be able to analyse a user's interaction with a visualization, a feat most easily achieved through a visualization-specific tool.
It is beneficial for the tool to be aware of the type of data it is given, rather than agnostic. Not only does this awareness aid in parsing user interactions by making it possible to simply pass D3 objects, rather than the values for each field individually, but it also guarantees that all three layers of data will be captured. Only with a gestalt picture of the DOM, SVG elements, and underlying data can meaningful insights be garnered from user interaction.
7.1 Impact
Tracking of user interaction on web visualizations makes large-scale asynchronous studies of the way people interact with visualizations feasible. Lowering the barrier to entry and the difficulty of performing those types of studies would increase the rate of advancement in visualization. Web visualizations can more easily be tailored to users through the use of this new feedback, which may result in higher visualization literacy through increased interest.
7.2 Conclusion
The design space of web visualization tracking is relatively barren. Not every existing tool is capable of gathering each level of the web visualization data hierarchy. All three layers are necessary to gain an understanding of the user's interaction and to elicit feedback from the resulting data. In addition, the data needs to be serializable in order to permit large-scale analysis. Without these key pieces, any user interaction data is functionally useless.
APPENDIX A: SPLUNK VISUALIZATIONS FROM SIGHT.JS DATA
Figure 1: Manhattan distance dissimilarity function plotted against count of DOM events.
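For reference, a dissimilarity of this kind can be sketched as a Manhattan distance between two users' per-event-type counts. This is an illustrative reconstruction, not the exact function used to produce the figure:

```javascript
// Illustrative sketch: Manhattan distance between two event-count
// vectors, e.g. { click: 3, hover: 1 } per user.
function manhattanDissimilarity(countsA, countsB) {
  const keys = new Set([...Object.keys(countsA), ...Object.keys(countsB)]);
  let d = 0;
  for (const k of keys) {
    d += Math.abs((countsA[k] || 0) - (countsB[k] || 0));
  }
  return d;
}
```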
Figure 2: A multiseries timechart of user interactions with the exoplanets visualization. Note that the use of multiple series aids readability when compared to figure 5.3.
Figure 3: Tracking of mouse movements and categorization by event type. This became the more robust implementation of figure 5.4.
Figure 4: Pie charts of DOM events occurring on individual planets.
Developer-Related Factors in Change Prediction
An Empirical Assessment
Catolino, Gemma; Palomba, Fabio; De Lucia, Andrea; Ferrucci, Filomena; Zaidman, Andy
DOI: 10.1109/ICPC.2017.19
Publication date: 2017
Document Version: Accepted author manuscript
Developer-Related Factors in Change Prediction: An Empirical Assessment
Gemma Catolino¹, Fabio Palomba², Andrea De Lucia³, Filomena Ferrucci¹, Andy Zaidman²
¹University of Salerno — ²Delft University of Technology
gcatolino@unisa.it, f.palomba@tudelft.nl, adelucia@unisa.it, fferrucci@unisa.it, a.e.zaidman@tudelft.nl
Abstract—Predicting the areas of the source code having a higher likelihood to change in the future is a crucial activity to allow developers to plan preventive maintenance operations such as refactoring or peer-code reviews. In the past the research community was active in devising change prediction models based on structural metrics extracted from the source code. More recently, Elish et al. showed how evolution metrics can be more efficient for predicting change-prone classes. In this paper, we aim at making a further step ahead by investigating the role of different developer-related factors, which are able to capture the complexity of the development process under different perspectives, in the context of change prediction. We also compared such models with existing change-prediction models based on evolution and code metrics. Our findings reveal the capabilities of developer-based metrics in identifying classes of a software system more likely to be changed in the future. Moreover, we observed interesting complementarities among the experimented prediction models, that may possibly lead to the definition of new combined models exploiting developer-related factors as well as product and evolution metrics.
Keywords—Change prediction; Empirical Studies; Mining Software Repositories;
I. INTRODUCTION
During software maintenance and evolution, change is the rule rather than the exception [1]. Classes undergo frequent modifications due to continuous change requests, lack of a deep understanding of the requirements, or lack of communication with the stakeholders [1]. In such a scenario, and because of the need to meet strict deadlines, software developers often perform maintenance activities in an uncontrolled manner, leading to the erosion of the original design and, thus, reducing the quality of a software system [2].
Knowing in advance the code elements potentially exhibiting a higher change-proneness is vital for developers for two main reasons: on the one hand, change-proneness can be considered as a quality indicator that can be used to warn developers when touching code that should be refactored [3]; on the other hand, developers can plan preventive maintenance operations, such as refactoring [4], peer-code reviews [5], and testing [6], aimed at increasing the quality of the code and reducing future maintenance effort and costs [4].
Change prediction is widely recognized as an effective technique to identify the classes more prone to be modified in the future, being able to help developers in both planning preventive maintenance actions and understanding the complexity of source code [7]. For this reason, researchers devoted a lot of attention to the problem by (i) analyzing the factors influencing the change-proneness of classes [6], [8], [9], [10], [11], and (ii) devising prediction models able to alert developers about the classes on which preventive actions should be focused [12], [13], [14], [15].
Most of the previous work relied on product metrics (e.g., the Chidamber and Kemerer metrics [16]) as indicators of the change-proneness of classes. The underlying assumption is that code elements having low quality are more prone to be subject to changes in the future. For example, Zhou et al. [3] investigated which cohesion, coupling, and inheritance metrics are more suitable for predicting change-prone classes, finding a subset of them that should be used in the context of change prediction models. At the same time, they also showed that the number of lines of code is not a good predictor [3].
More recently, Elish et al. [17] started investigating the role of process metrics as predictors of change-prone classes. To this aim, they theoretically and empirically evaluated a new set of metrics (called “evolution metrics”) that characterized the history of a class in order to delineate its future change-proneness. For instance, they considered the number of previous modifications a class underwent during a given time period. The application of a prediction model based on such new metrics produces more accurate predictions than the ones provided when using the traditional code metrics suggested by Zhou et al. because of the direct relationship existing between previous and future modifications of a class [17].
Although Elish et al. exploited some process metrics, they did not take into account developer-related factors that, by considering how developers apply changes in the source code, could capture the complexity of the development process. For instance, it is still unclear whether non-focused developers who apply scattered changes over the entire system tend to introduce maintainability pitfalls that increase the change-proneness of the modified classes. Our conjecture is that such aspects can be a useful source of information to predict classes more likely to be changed in the future. In this paper, we aim at verifying our conjecture by studying the role of metrics measuring the complexity of the development process in change prediction. In our study we investigated three prediction models previously defined in the literature, each one based on metrics that capture the complexity of the development process under a different perspective, i.e., (i) the Basic Code Change Model (BCCM) proposed by Hassan [18], which relies on the entropy of changes applied by developers, (ii) the Developer Changes Based Model (DCBM) devised by
Di Nucci et al. [19] that considers to what extent developers apply scattered changes in the system, and (iii) the Developer Model (DM) proposed by Bell et al. [20] which analyzes how many developers touched a code element over time.
Even though the models that we investigate have originally been proposed for fault prediction, we conjecture they can be adopted in the change prediction context since they are based on metrics able to influence the change-proneness of classes as well. For instance, the lack of coordination between multiple developers working on the same code element may lead to the introduction of design pitfalls that negatively influence the maintainability of source code [21], possibly making it more change-prone. In order to assess the performance of the three prediction models we employed ten open source software systems of different size and scope.
Moreover, to have a comprehensive view of the usefulness of the experimented models, we compared their performance with the ones achieved by the state-of-the-art change prediction models proposed by Elish et al. [17] and Zhou et al. [3].
The results of our study highlight the good prediction capabilities of the experimented prediction models, which range between 60% and 78% in terms of accuracy. In particular, we observed that the best performance is achieved by the model defined by Di Nucci et al. [19]. When compared to the model exploiting the evolution metrics devised by Elish et al., DCBM still produces better performances. This result highlights how previous changes of a class are not enough for adequately predicting its future change-proneness, while measuring the complexity of the development process can give more accurate predictions. Furthermore, the change prediction model relying on code metrics achieved the worst performance (accuracy=57%), indicating that structural analysis is not sufficiently suitable for predicting change-prone classes.
Finally, all the experimented prediction models showed interesting complementarities in the set of change-prone classes correctly predicted. Indeed, different models capture different change-prone instances, possibly indicating that better prediction abilities can be obtained by combining the predictors used by the experimented models.
Structure of the paper. Section II discusses the related literature in the context of change prediction. In Section III the design of the empirical study is described, while Section IV reports the results achieved when evaluating the performances of the experimented change prediction models. Section V discusses the threats that could affect the validity of our study. Finally, Section VI concludes the paper.
II. RELATED WORK
The analysis of the change-proneness of classes has been explored by the research community from two main perspectives. A consistent body of research analyzed the factors influencing the phenomenon [8], [9], [10], [11], [6], while others focused on understanding the role of product and evolution metrics to predict the future change-proneness of classes [22], [20], [19], [23], [17]. Since this paper is about change prediction models, in the following we summarize the related literature on previous research in this branch.
Product metrics have been widely exploited in the context of change prediction [24]. Lindvall [25] found that larger classes are statistically more change-prone than classes having a small size, and that developers tend to apply more changes to such classes during maintenance and evolution [26]. Further studies showed that coupling metrics are relevant measures to estimate the changeability of source code [27], [28], [29], while Chaumun et al. [30] and Tsantalis et al. [23] generalized the usefulness of CK metrics [16] for change prediction. The statistical analyses conducted by Lu et al. [31] and Malhotra et al. [32] clarified which Object Oriented metrics are better suited for change prediction, reporting a set of cohesion, coupling, and inheritance metrics that should be used in this context. On the basis of these results, several prediction models based on product metrics have been devised. Romano et al. [33] relied on code metrics for predicting change-prone fat interfaces, while Eski et al. [34] proposed a model based on both CK and QMOOD metrics [35] to estimate change-prone classes and to determine parts which should be tested first and more deeply.
Other previous research tried to estimate the change-proneness of classes using alternative methodologies. For instance, the combination of dependencies mined from UML diagrams [36] and code metrics has been proposed [12], [13], [14], [15]. Genetic and learning algorithms have also been proposed in this context [37], [38], [39]. Specifically, Malhotra et al. [37] validated the CK metrics suite for building an efficient software quality model that predicts change-prone classes with the help of Gene Expression Programming. Marinescu [38] reported the goodness of GAs for both change- and fault-prediction. Finally, Peer et al. [39] devised the use of an adaptive neuro-fuzzy inference system (ANFIS) to estimate the change-proneness of classes.
Later on, Zhou et al. [3] showed that size metrics may lead to multi-collinearity [40] when mixed together with other cohesion and coupling metrics. As a result, they suggested to avoid using the LOC metric in product-based change prediction models [3].
The closest works to the one proposed in this paper are the studies by Elish et al. [17] and Girba et al. [41]. Elish et al. [17] reported the potential usefulness of evolution metrics for change prediction. In particular, they defined a set of historical metrics such as (i) the birth date of a class, (ii) the total amount of changes applied in the past, and (iii) the date of the first and the last modification applied on a class. Their findings showed how such evolution metrics may be useful for predicting change-prone classes. Girba et al. [41] defined a tool that suggests change-prone code elements by summarizing previous changes. In a small-scale empirical study involving two systems, they observed that previous changes can effectively predict future modifications.
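As an illustration (not Elish et al.'s implementation), evolution metrics of the kind listed above can be derived from a class's list of change timestamps:

```javascript
// Illustrative sketch: derive simple evolution metrics for a class from
// its change timestamps (numbers, e.g. epoch milliseconds).
function evolutionMetrics(changeDates) {
  const sorted = [...changeDates].sort((a, b) => a - b);
  return {
    birthDate: sorted[0],                    // first appearance of the class
    totalChanges: sorted.length,             // total amount of past changes
    firstChange: sorted[0],                  // date of the first modification
    lastChange: sorted[sorted.length - 1]    // date of the last modification
  };
}
```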
Besides the evolution metrics defined by Elish et al. [17] and Girba et al. [41], in this paper we also analyzed the role of developer-related factors that have been shown to be relevant for prediction purposes in other contexts [19].
III. EMPIRICAL STUDY DEFINITION AND DESIGN
The goal of the empirical study is to evaluate to what extent metrics capturing the complexity of the development process are useful when discovering change-prone source code classes, with the purpose of improving the allocation of resources in preventive maintenance activities (e.g., refactoring, code inspections) by focusing on classes having a higher change-proneness. The quality focus is on the prediction performance and complementarity of the investigated approaches, while the perspective is that of researchers who want to evaluate the effectiveness of using developer-related factors when identifying change-prone classes.
The context of the study consists of ten open source software systems of different size and scope. Table I reports the characteristics of the considered systems, in particular (i) the software history that we investigated, (ii) the percentage of change-prone classes identified (as explained later), and (iii) the size in terms of number of commits, developers, classes, methods, and KLOC.
The specific research questions formulated in this study are the following:

- **RQ1**: To what extent are developer-based prediction models able to correctly estimate the change-proneness of classes?
- **RQ2**: How does the performance of developer-based prediction models differ from that of existing change prediction models?
- **RQ3**: To what extent are developer-based change prediction models complementary to existing change prediction models?
To answer RQ1 and understand the predictive power of developer-related factors in change prediction, we decided to test the performance of three prediction models (we refer to them as developer-based models since they rely on developer-related factors):

1) The Basic Code Change Model (BCCM) defined by Hassan [18], which relies on the entropy of changes applied by developers in a time window of size \( \alpha \).
2) The Developer Changes Based Model (DCBM) proposed by Di Nucci et al. [19]. It employs the structural and semantic scattering of the developers that worked on a code element in a time window of size \( \alpha \) as predictors. The structural scattering measures the distance between every pair of classes modified by the developer, while the semantic scattering computes the degree of textual similarity between every pair of classes modified by the developer.
3) The Developer Model (DM) devised by Bell et al. [20], which takes into account the number of developers that worked on a specific component of source code in a time period of size \( \alpha \).
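To illustrate the idea behind BCCM, the following is a simplified sketch (not Hassan's exact formulation) of the normalized entropy of a change distribution over the files touched in a time window; a flat distribution of changes yields high entropy, a concentrated one yields low entropy:

```javascript
// Simplified sketch of the entropy-of-changes idea behind BCCM: given
// the number of changes each file received in a time window, compute
// the normalized Shannon entropy of the change distribution.
function changeEntropy(changeCounts) {
  const total = changeCounts.reduce((s, c) => s + c, 0);
  if (total === 0 || changeCounts.length < 2) return 0;
  let h = 0;
  for (const c of changeCounts) {
    if (c > 0) {
      const p = c / total;
      h -= p * Math.log2(p);
    }
  }
  return h / Math.log2(changeCounts.length); // normalize to [0, 1]
}
```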
While such models have originally been defined in the context of fault prediction, the choice of using them for change prediction was guided by the goal of exploring the role of different aspects of the development process in the change-proneness of classes. For instance, a high entropy of changes might indicate the presence of a complex development process in which developers apply changes in an undisciplined manner, leading to source code that is less maintainable and possibly more change-prone in the future.
Once the baseline prediction models (i.e., BCCM, DCBM, and DM) were chosen, the subsequent step was the identification of the machine learning technique to use for building the change prediction models. The related literature proposes several alternatives (e.g., Tsantalis et al. [23] relied on Logistic Regression [42], while Romano and Pinzger [33] suggested the use of Support Vector Machines [43]); however, it is still unclear which classifier gives the best overall performance. For this reason, we experimented with several classifiers previously used for prediction purposes by the research community, i.e., ADTree [44], Decision Table Majority [45], Logistic Regression [42], Multilayer Perceptron [46], Support Vector Machine [43], and Naïve Bayes [47]. We empirically compared the results achieved when applying each classifier to each experimented baseline model on the software systems in our study (more details on the adopted procedure later in this section), finding that Logistic Regression [42] provided the best performance for all the tested prediction models. Thus, in this paper we report the results of the models built with this classifier. A comprehensive report of the analysis conducted to identify the machine learning technique to use is available in the online appendix [48].
To assess the performance of the three prediction models, we split the evolution history of the subject systems into three-month time periods and adopted a three-month sliding window to train and test the change prediction models. Specifically, starting from the first time window \( TW_1 \) (i.e., the one starting from the first commit), we trained each model on it and tested its performance on the time window \( TW_2 \) (i.e., the subsequent three-month period). Then, we moved three months forward to the next time window, training the classifier using the data available in \( TW_2 \) and testing the model on \( TW_3 \). This process was repeated until the end of the evolution history of the subject systems.
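The sliding-window scheme above can be sketched as the following pairing of consecutive windows (an illustrative helper, not the authors' code):

```javascript
// Illustrative sketch of the sliding-window validation: train on window
// i, test on window i+1, over the ordered list of time windows.
function slidingPairs(windows) {
  const pairs = [];
  for (let i = 0; i + 1 < windows.length; i++) {
    pairs.push({ train: windows[i], test: windows[i + 1] });
  }
  return pairs;
}
```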
The choice of the validation methodology was based on two aspects. Firstly, all the models refer to a specific time window of size \( \alpha \) in which their own predictors have to be computed; therefore, this validation technique better fits the characteristics of the experimented models. Secondly, this methodology has been widely used in recent years to test the performance of prediction models [18], [19]. Moreover, the choice of considering three-month periods is based on (i) the results of previous work, such as the one by Hassan [18], and (ii) the findings of the empirical assessment we performed on this parameter, which showed that the best results for all experimented techniques are achieved when using three-month periods. In particular, we tested time windows of size \( \alpha = 1, 2, 3, 6 \) months. A report of the results is available in the replication package [48].
To measure the ability of the change prediction models in correctly predicting change-prone classes, we needed an oracle reporting the actual change-prone classes present in each of the time windows analyzed. To the best of our knowledge, a public oracle reporting the ground truth for the phenomenon taken into account is not available in the literature; thus, we needed to build our own. To this aim, we followed the guidelines provided by Romano et al. [33], who considered a class change-prone if, in a given time period $TW$, it underwent a number of changes higher than the median of the distribution of the number of changes experienced by all the classes of the system. We made the oracle reporting the change-prone classes of all ten considered systems publicly available in the online appendix [48].
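The median-based oracle construction can be sketched as follows (an illustrative reading of the rule above, not the authors' code):

```javascript
// Illustrative sketch: a class is change-prone in a window if its change
// count exceeds the median change count over all classes in that window.
function changeProneClasses(changesPerClass) { // e.g. { Foo: 5, Bar: 1 }
  const counts = Object.values(changesPerClass).sort((a, b) => a - b);
  const mid = counts.length / 2;
  const median = counts.length % 2
    ? counts[Math.floor(mid)]
    : (counts[mid - 1] + counts[mid]) / 2;
  return Object.keys(changesPerClass)
    .filter(c => changesPerClass[c] > median);
}
```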
Once we defined the oracle and ran the prediction models on every three-month window, we answered RQ$_1$ by using three widely adopted Information Retrieval metrics, namely accuracy, precision, and recall [49]. As an aggregate indicator of precision and recall, we also report the F-measure, defined as the harmonic mean of precision and recall [49]. In addition, we report the Area Under the ROC Curve (AUC-ROC) obtained by the experimented prediction models. This metric quantifies the overall ability of a prediction model to discriminate between change-prone and non-change-prone classes: the closer the AUC-ROC to 1, the higher this ability.

The experimented prediction models are:

- Basic Code Change Model (BCCM): based on the entropy of the changes applied by developers in a given time period [18].
- Developer Changes Based Model (DCBM): based on the structural and semantic scattering of the changes applied by developers in a given time period [19].
- Developer Model (DM): relies on the number of developers who modified a code component in a given time period [20].
- Evolution Model (EM): based on evolution metrics such as the number of previous changes and the birth date of a class [17].
- Code Metrics Model (CM): based on structural code metrics [3].

Finally, to answer RQ$_3$ we computed the overlap between the sets of change-prone classes correctly classified by each pair of models $m_i$ and $m_j$:

\[ \text{corr}_{m_i \cap m_j} = \frac{|\text{corr}_{m_i} \cap \text{corr}_{m_j}|}{|\text{corr}_{m_i} \cup \text{corr}_{m_j}|} \% \]

\[ \text{corr}_{m_i \setminus m_j} = \frac{|\text{corr}_{m_i} \setminus \text{corr}_{m_j}|}{|\text{corr}_{m_i} \cup \text{corr}_{m_j}|} \% \]

where \( \text{corr}_{m_i} \) represents the set of change-prone classes correctly classified by the prediction model \( m_i \).
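These overlap metrics reduce to simple set operations on the classes each model classified correctly; a minimal sketch (the model names and class sets are illustrative):

```python
def overlap_metrics(corr_i, corr_j):
    """Given the sets of change-prone classes correctly classified by two
    models m_i and m_j, return (shared, only_i, only_j) as percentages of
    the union, mirroring the corr_{m_i ∩ m_j} and corr_{m_i \\ m_j}
    definitions above."""
    union = corr_i | corr_j
    shared = 100 * len(corr_i & corr_j) / len(union)
    only_i = 100 * len(corr_i - corr_j) / len(union)
    only_j = 100 * len(corr_j - corr_i) / len(union)
    return shared, only_i, only_j

# Illustrative sets of correctly predicted change-prone classes.
dcbm = {"A", "B", "C", "D"}
cm = {"C", "D", "E"}
shared, only_dcbm, only_cm = overlap_metrics(dcbm, cm)  # 40.0, 40.0, 20.0
```

By construction the three percentages always sum to 100, which is why the tables in Section IV report them side by side.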
IV. ANALYSIS OF THE RESULTS
In this section we report the results achieved in the study, discussing the performance of the investigated models and the complementarity between them.
A. RQ1: The Performances of Developer-based Models
Table III reports the performance achieved by the five investigated change prediction models over the ten subject systems. Looking at the table, we can immediately provide quantitative answers to our first research question. While developer-based models tend to perform well, it is worth noting that none of them achieves an overall accuracy higher than 78%. Even though this value is still quite positive, a notable percentage of classes (at least 22%) is not correctly classified when the models are used independently. Thus, the problem of identifying the change-proneness of classes does not seem to be easily addressable by models capturing single aspects of the development process.
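For intuition, the evaluation metrics used throughout this section can be derived from confusion-matrix counts; a minimal sketch (the counts below are illustrative, not drawn from the study):

```python
def prf(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F-measure from confusion-matrix
    counts, as defined in Section III. Assumes at least one predicted
    positive and one actual positive (no zero-division handling)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Illustrative counts: 30 true positives, 10 false positives,
# 50 true negatives, 10 false negatives.
acc, p, r, f = prf(tp=30, fp=10, tn=50, fn=10)  # 0.8, 0.75, 0.75, 0.75
```

Because the F-measure is a harmonic mean, it can never exceed either precision or recall, which is why it is reported as an aggregate rather than a fifth independent score.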
Among the three developer-based models investigated, DCBM [19] tends to perform better than the others, achieving the best scores in terms of all the quality metrics computed, i.e., accuracy=78%, precision=61%, recall=79%, F-Measure=66%, AUC-ROC=69%. Based on these results, we can claim that the way developers apply changes in the system has an influence on the likelihood of making the touched classes more change-prone. The superiority of DCBM is particularly evident in the comparison with the DM model (i.e., the model based on the number of developers), where the F-Measure is 8% higher. This result clearly highlights that it is not the simple number of developers working on a class that influences its change-proneness, but rather the way developers apply (scattered) changes in the system. Our findings confirm, in the context of change prediction, previous findings by Di Nucci et al. [19], who showed the superiority of the DCBM model in predicting bugs. For instance, consider the case of the class org.gjt.sp.BufferHistory of the JEDIT system. Between August and October 2009 (i.e., one of the three-month periods considered in our study) the class was modified 19 times by one developer. The DM model predicted the class as non-change-prone. However, in the same time period this developer performed 36 modifications over five different packages, thus accumulating a high level of both semantic and structural scattering. The scattered changes applied by the developer led to a decrease in the cohesion of the modified classes (i.e., overall, the LCOM\(^1\) increased by 16% in these classes); interestingly, the LCOM of the class org.gjt.sp.BufferHistory is the one increasing the most (from 3 to 12). This made these classes more prone to change, since they encapsulated different responsibilities. Due to the high scattering of the developer, DCBM correctly predicted the change-proneness of the class.
Thus, the results seem to indicate that the scattered changes applied by developers can produce forms of software degradation that affect the change-proneness of classes. The statistical analyses conducted (see Table IV) confirm the superiority of DCBM with respect to DM (\(p < 0.01, d = 0.81\)).
A similar discussion can be made when comparing the DCBM and BCCM models. From Table III we can observe that DCBM obtains an F-Measure almost 8% higher than the alternative model. Once again, the improvement is statistically significant (\(p < 0.01\)) with a large effect size (\(d = 0.73\)). The gain provided by DCBM is also visible when considering the other evaluation metrics: for instance, the accuracy is about 3% higher, while the recall is 7% higher. Interestingly, both models obtain the same level of AUC-ROC (69%). From a practical point of view, this result indicates that DCBM and BCCM have a comparable overall ability to distinguish classes having a high change-proneness from those characterized by a low change-proneness. However, the scattering metrics can capture the phenomenon with a higher accuracy. This is due to the fact that DCBM works at a higher abstraction level than BCCM [18]. Specifically, it considers the way developers apply changes rather than the changes themselves, which makes the model more effective when the change process is not chaotic, but developers continuously perform modifications over different parts of the system. To better understand the reasons behind the different performances of these models, let us consider the case of the class chartMeter.legend belonging to the JFREECHART system. Between April and June 2005, the class underwent 10 of the total 16 changes applied in that time window. In this case, the entropy of changes involving this class is low (i.e., \(-0.13\)), since most of the effort has been devoted to maintaining it. However, the two developers performing mod-
\(^1\)Note that the lower the LCOM the higher the cohesiveness of a class.
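The entropy of changes underlying BCCM [18] can be sketched as follows; this is a simplified illustration (Hassan's original formulation also normalizes the entropy and decays it over periods), and the change counts are illustrative:

```python
from math import log2

def change_entropy(changes_per_file):
    """Shannon entropy of the distribution of changes across files in one
    time period: low when effort concentrates on few files (as in the
    JFreeChart example above), high when changes are scattered."""
    total = sum(changes_per_file)
    probs = [c / total for c in changes_per_file if c > 0]
    return -sum(p * log2(p) for p in probs)

focused = change_entropy([10, 1, 1])   # most changes hit one file -> low
scattered = change_entropy([4, 4, 4])  # evenly spread -> high (log2 3)
```

A period in which one class absorbs most of the maintenance effort therefore yields a low entropy even when the raw number of changes is high, which is exactly the situation where BCCM under-predicts change-proneness.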
TABLE III: Performances (in percentage) achieved by the investigated change prediction models.
<table>
<thead>
<tr>
<th>Project</th>
<th>BCCM</th>
<th>DCBM</th>
<th>DM</th>
<th>EM</th>
<th>CM</th>
</tr>
</thead>
<tbody>
<tr><td>ArgouML</td><td>89</td><td>87</td><td>88</td><td>97</td><td>87</td></tr>
<tr><td>Apache Ant</td><td>72</td><td>65</td><td>79</td><td>82</td><td>71</td></tr>
<tr><td>Apache Cassandra</td><td>88</td><td>79</td><td>85</td><td>92</td><td>91</td></tr>
<tr><td>Apache Xerces</td><td>76</td><td>69</td><td>73</td><td>72</td><td>65</td></tr>
<tr><td>JRunTime</td><td>62</td><td>66</td><td>58</td><td>59</td><td>63</td></tr>
<tr><td>FreeMind</td><td>35</td><td>35</td><td>38</td><td>31</td><td>42</td></tr>
<tr><td>JEdit</td><td>75</td><td>48</td><td>53</td><td>61</td><td>53</td></tr>
<tr><td>JHotChart</td><td>71</td><td>45</td><td>67</td><td>56</td><td>55</td></tr>
<tr><td>OpenDraw</td><td>97</td><td>77</td><td>59</td><td>67</td><td>79</td></tr>
<tr><td>JVLT</td><td>80</td><td>50</td><td>50</td><td>50</td><td>50</td></tr>
</tbody>
</table>
TABLE IV: Wilcoxon test p-values for the hypothesis that the F-Measure achieved by a model is greater than that of the compared model. Statistically significant results are reported in bold face. Cliff's delta (d) values are also shown.
<table>
<thead>
<tr>
<th>Compared models</th>
<th>p-value</th>
<th>Cliff Delta</th>
<th>Magnitude</th>
</tr>
</thead>
<tbody>
<tr>
<td>DCBM - BCCM</td>
<td>< 0.01</td>
<td>0.73</td>
<td>large</td>
</tr>
<tr>
<td>DCBM - DM</td>
<td>< 0.01</td>
<td>0.81</td>
<td>large</td>
</tr>
<tr>
<td>DCBM - EM</td>
<td>< 0.01</td>
<td>0.82</td>
<td>large</td>
</tr>
<tr>
<td>DCBM - CM</td>
<td>< 0.01</td>
<td>0.84</td>
<td>large</td>
</tr>
<tr>
<td>BCCM - DM</td>
<td>0.07</td>
<td>0.35</td>
<td>medium</td>
</tr>
<tr>
<td>BCCM - EM</td>
<td>0.04</td>
<td>0.21</td>
<td>small</td>
</tr>
<tr>
<td>BCCM - CM</td>
<td>< 0.01</td>
<td>0.74</td>
<td>large</td>
</tr>
<tr>
<td>DM - EM</td>
<td>0.94</td>
<td>0.09</td>
<td>negligible</td>
</tr>
<tr>
<td>DM - CM</td>
<td>0.03</td>
<td>0.44</td>
<td>medium</td>
</tr>
<tr>
<td>EM - CM</td>
<td>0.03</td>
<td>0.48</td>
<td>large</td>
</tr>
</tbody>
</table>
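The effect sizes in Table IV are small enough to compute directly; a sketch of Cliff's delta in plain Python, with illustrative per-system F-Measure values (the one-sided Wilcoxon test itself is available, e.g., as scipy.stats.wilcoxon with alternative='greater'):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta effect size: the proportion of pairs where x > y
    minus the proportion where x < y; by convention, |d| >= 0.474 is
    'large', as in the Magnitude column of Table IV."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Illustrative per-system F-Measure values for two models.
f_dcbm = [66, 70, 64, 68, 72]
f_cm = [51, 55, 50, 58, 60]
d = cliffs_delta(f_dcbm, f_cm)  # 1.0: every DCBM value beats every CM value
```

Unlike a p-value, the delta is unaffected by sample size, which is why the table reports both: significance and magnitude answer different questions.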
Summary for RQ1. The investigated developer-based models achieve quite positive results. Among them, the prediction model relying on scattering metrics obtains the highest performance, with an overall F-Measure of 66% and an accuracy of 78%. The superiority of DCBM is statistically significant, with a large effect size, when compared to all the other models.
B. RQ2: The Comparison between Developer-based and State-of-the-art Models
The results achieved by the baseline change prediction models investigated in this study (i.e., EM and CM) are reported in Table III. As the table shows, the EM model achieves the same overall F-Measure as the DM and BCCM models (i.e., 58%), while it is always outperformed by the DCBM model (-8% in terms of F-Measure). From Table IV we can observe that the differences between EM and the other developer-based models are often small or negligible, even if mostly statistically significant. The only exception regards the comparison between DCBM and EM, where the differences are statistically significant (p < 0.01) and the magnitude is large (d = 0.82). When considering the CM model, we can see that EM is generally a better predictor (the overall F-Measure is 58% vs 51% for EM and CM, respectively) and, indeed, the results are statistically confirmed (p = 0.03, d = 0.48).
Generally, it is important to remark that EM is the only model that directly measures the previous number of changes of a class to predict its future change-proneness; our results indicate that this feature does not characterize the future change-proneness of classes better than other predictors. This confirms previous findings by Ekanayake et al. [52] on the variability of the change-proneness of classes during different stages of software evolution. As a consequence, previous knowledge about the number of changes a class underwent is not always suitable to correctly identify change-prone classes in future versions of a software system. Further analyzing the predictions provided by EM, we discovered that it is generally effective when a class has a central role in the architecture of a system and, as such, usually undergoes a high number of changes. For example, in the JHotDraw system, the class svg.io.SVGFigureFactory is responsible for the main functionality of the entire project, i.e., it manages the graph creation. This class has been present in the system since its first commit and was frequently modified by developers across all the time windows analyzed. In this case, the predictors used by the EM model (e.g., previous changes and birth date) are particularly effective, since they characterize well the change-proneness of the class. On the other hand, the performance decreases in cases where a significant restructuring of the system’s architecture is applied, since the responsibilities of several code artifacts are modified and, therefore, predictors such as the birth date or the previous changes are less meaningful. For instance, in the time window ranging between December and February 2006 the Apache Ant developers performed an entire restructuring of the system,\(^2\) which led to the removal of some old classes as well as the re-distribution of the responsibilities of several code artifacts. As a consequence, the data considered by the EM model was not sufficient to correctly predict the change-proneness of classes: in fact, the accuracy achieved by the model in that time window was 43%. Noticeably, in the same time period the DCBM and DM models reached an accuracy equal to 87% and 83%, respectively. As expected, in the considered period the developers were busy modifying the source code and, thus, models relying on such information performed better.
On the one hand, our results confirm previous findings on the potential usefulness of evolution metrics in the context of change prediction [17]. On the other hand, we also found that the “change-caching” concept exploited by this model holds for classes having a central role in the system, while it has less effect in other cases. At the same time, we showed that (i) other metrics based on developers can be effectively used for prediction purposes, and (ii) they seem to capture information orthogonal to that exploited by the EM model.
Turning our attention to the results obtained by the model relying on code metrics, we can observe that developer-based prediction models generally obtain higher performance than the product-based baseline. Indeed, all these models have an overall F-Measure higher than the CM model. For instance, DCBM achieves, overall, an F-Measure 15% higher than the model based on code metrics (66% vs 51%). The superiority of DCBM is also confirmed when considering all the other evaluation metrics, i.e., accuracy=+18%, precision=+8%, recall=+20%, AUC-ROC=+10%. This result contradicts previous findings [3], [31], suggesting that the use of code metrics alone is not enough to efficiently predict change-prone classes. A clear example is represented by the class xerces.dom.ElementImpl of the Apache Xerces project. During the time window between May and July 2007, the class experienced only three changes (i.e., it is non-change-prone), applied by two different developers who focused all their activities on the maintenance of classes belonging to the xerces.dom package. As a consequence, the value of their scattering metrics is zero, since they never performed modifications outside the scope of the package.
Thus, the DCBM model correctly marked this class as non-change-prone. At the same time, the class has an LCOM=28 and a CBO=7. Both metrics are higher than the average metric values of the other classes composing the system, and for this reason the CM model wrongly marked the class as change-prone. This example highlights an important aspect related to the maintainability of source code: even if the code may be considered poorly maintainable based on the values of code metrics, this is not always a real issue, since developers performing focused maintenance activities (and thus being more expert on the modified code) can keep its complexity under control.
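A simplified illustration of why focused activity yields zero scattering: if structural scattering is approximated by how many distinct packages a developer touched in a window (the actual DCBM metrics [19] are more refined, also weighting path distances and semantic similarity), a developer who never leaves one package scores zero. The class paths below are illustrative:

```python
def structural_scattering(touched_paths):
    """Simplified structural scattering: the number of distinct packages a
    developer modified in a time window, minus one, so that focused work
    inside a single package (as in the Apache Xerces example above)
    scores zero. The real DCBM metric [19] is more refined."""
    packages = {p.rsplit(".", 1)[0] for p in touched_paths}
    return len(packages) - 1

focused = structural_scattering(
    ["xerces.dom.ElementImpl", "xerces.dom.AttrImpl"])       # 0
scattered = structural_scattering(
    ["org.gjt.sp.BufferHistory", "org.gjt.sp.jedit.Macros",
     "org.gjt.util.Log"])                                    # 2
```

Even this crude proxy separates the two scenarios discussed in the text: focused maintenance inside one package versus changes spread over several packages.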
**Summary for RQ2.** Developer-based prediction models generally perform better than the existing models. This is particularly true when considering the DCBM model, which has an overall F-Measure 15% higher than the CM model and 8% higher than EM.
C. **RQ3: The Complementarity of the Investigated Models**
Table V reports the complementarity between each pair of prediction models. Note that, for the sake of space, the results on the complementarity have been aggregated by considering the overall overlap between the models. A complete report of the findings on each system is available in our online appendix [48].
As the table shows, all the investigated prediction models are complementary to each other, being able to correctly point out different sets of change-prone classes. To better understand the reasons behind such complementarities, we analyzed the predictions provided by the different models in more depth. First, it is worth discussing the complementarity between DCBM and the other models. When considering the relationship between scattering and code metrics, we observed a consistent set of change-prone classes (i.e., 43%) classified by both prediction models, but at the same time in almost 40% of the cases the only model able to correctly predict the change-proneness is the DCBM model. Finally, 27% of change-prone classes have been identified only using code metrics. This result highlights the high complementarity between the two models, showing that different predictors work well on different sets of classes.
As for the comparison between DCBM and DM, we observe that 51% of the predicted change-prone classes are in the intersection, while 39% of change-prone classes are detected correctly only by the DCBM model. Finally, the change-proneness of a smaller percentage of classes (10%) can be detected solely using the DM model. Thus, the two models partially complement each other, making prediction improvements conceivable. An interesting case explaining when the DM model is able to outperform the DCBM model can be found in the FreeMind project (the smallest one in our dataset). Here the seven developers of the system often perform changes to a few classes located in the two core packages. Due to the small structure of the system, the scattering metrics
---
2 As indicated in the release notes of version 1.7.1, which corresponds to that time period: http://tinyurl.com/hqwazgg
cannot correctly capture the developers’ activities and, thus, they always have low values. In such cases, the DM model produces more reliable predictions: indeed, it is worth noting that this project is the only one where the DM model performs better than the DCBM one (see Table III).
The discussion is similar when comparing the DCBM and BCCM models. Even if the model based on scattering metrics generally achieved better performance than the BCCM model (Table III), we observed an interesting complementarity that may lead to an additional improvement in the prediction through a combination. In fact, Table V shows that the change-proneness of almost 37% of classes can be correctly detected by only one of the two models (i.e., 23% of correct predictions have been made only by DCBM, 14% only by BCCM). Moreover, it is worth noting that the complementarity between BCCM and the other models is high as well. For instance, when compared to the CM model, we found 28% of correct predictions performed by BCCM only and a further 19% of classes for which the change-proneness has been identified using code metrics only. An interesting example is represented by the class thrift.CassandraServer, which had an LCOM=44 and an RFC=23 in the time window between March and May 2010. In that period, this class was changed 13 times, being classified as an actual change-prone class. However, the BCCM model was not able to correctly mark its change-proneness because the class always changed together with a few other classes of the system (on average, 2 classes). As a consequence, the entropy of changes is low. On the other hand, the poor quality of the class was a relevant indicator of its change-proneness. Furthermore, it is important to note that the evolution metrics also have nice complementarities with the other models. For instance, when comparing EM and BCCM, we observed that in 26% of the cases the change-proneness of classes can be correctly identified by the EM model only. At the same time, the contribution provided by the EM model is even more valuable in comparison to the CM model, where 31% of the change-prone classes are identified by using only the evolution metrics.
An interesting example of a change-prone class correctly classified by EM and missed by CM is present in the ARGOUML project. During the time period between October and December 2006, the class ui.ProjectBrowser underwent 19 changes, even though it had been introduced at the beginning of the project. Although the structural metrics do not indicate issues in the maintainability of the class (i.e., LCOM=6, CBO=2, DIT=2, RFC=4), it tends to change frequently, making it a class to keep under control. In this case, the CM model does not recognize the change-proneness of the class, while the evolution metrics are able to better characterize its future maintainability. Conversely, an example of a class identified by CM and missed by EM in the same ARGOUML project is generator.GeneratorJava. This class was introduced during the time window between March and May 2006 (i.e., in the middle of the observed history), where it underwent 10 changes. Since the class was not introduced in the early stages of software development, the EM model was not able to correctly mark it as change-prone. On the other hand, the class contains a well-known design issue, i.e., it is affected by a Complex Class code smell. Thus, the code metrics are particularly high (e.g., LCOM=49) and effective in capturing the change-proneness of the class.
All in all, the analyses conducted show that the problem of change prediction cannot be solved by relying on only a subset of the metrics considered. More importantly, different models are able to capture different change-prone classes: from a practical point of view, this means that the investigated developer-based metrics can nicely complement evolution metrics, possibly providing additional performance improvements when combined. At the same time, the CM model can provide further insights, being able to correctly recognize the change-proneness of a good portion of classes missed by the other models (e.g., CM identified 22% of classes that the EM model was not able to identify).
**Summary for RQ3.** All the investigated models show nice complementarities, being able to correctly capture the change-proneness of different classes. As a consequence, our findings reveal the possibility of achieving better performance by combining the predictors considered in this study.
### V. Threats to Validity
This section describes the threats that can affect the validity of our study.
**Construct Validity.** Threats to construct validity concern the relationship between theory and observation. We exploited the guidelines provided by Romano et al. [33] in order to build a golden set reporting the actual change-prone classes present in each of the analyzed time windows. This strategy has been widely used in the past to assess the change-proneness of classes [3], [17], [34], and it is recognized as an effective way to distinguish change-prone and non-change-prone classes [33].
**Internal Validity.** A factor that could possibly have affected the investigated variables regards the evaluation procedure we exploited to test the different prediction models. In particular, since we needed to exploit change history information to compute the metrics composing the experimented developer-based models, the evaluation design adopted in our study differs from the ten-fold cross validation [53] generally exploited in the context of change prediction. Specifically, we split the change history of the subject systems into three-month time periods and adopted a three-month sliding window to train and test the experimented change prediction models. This type of validation is typically adopted when using process metrics as predictors [18], although it might be penalizing when using code metrics, which are typically assessed using a ten-fold cross validation.
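The sliding-window evaluation described above can be sketched as: split the history into consecutive three-month windows, train on window k, test on window k+1, then slide. The dates below are illustrative, and three months is approximated as 91 days:

```python
from datetime import date, timedelta

WINDOW = timedelta(days=91)  # roughly three months

def sliding_windows(start, end):
    """Split the project history [start, end) into consecutive
    three-month windows; each model is trained on window k and tested
    on window k+1 (a sliding-window evaluation rather than a ten-fold
    cross validation)."""
    windows, cur = [], start
    while cur + WINDOW <= end:
        windows.append((cur, cur + WINDOW))
        cur += WINDOW
    return windows

wins = sliding_windows(date(2009, 1, 1), date(2010, 1, 1))
train_test_pairs = list(zip(wins, wins[1:]))  # (train, test) window pairs
```

Unlike a random cross-validation split, this scheme never trains on data that postdates the test window, which is the property that makes it suitable for history-based predictors.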
Another threat is related to the use of developer-based and evolution metrics as predictors of the change-proneness of classes. Indeed, they somehow encapsulate the concept of change, possibly producing an “interplay” between the independent and dependent variables of a prediction model. While the model proposed by Elish et al. [17] directly uses the number of changes a class underwent in a previous time window as a predictor of the future change-proneness of that class, we carefully verified whether this possible interplay produced unreliable results, finding that the usefulness of the model is limited to the cases where a class has a central role in the system. As for the BCCM, DCBM, and DM models, it is important to note that all of them rely on metrics able to capture the complexity of the development process under different perspectives (e.g., the number of developers who worked on a code component). Thus, they provide a higher abstraction level and do not directly measure the change-proneness of a class.
**Conclusion Validity.** Threats to conclusion validity refer to the relation between treatment and outcome. In order to evaluate the change prediction models we used metrics such as accuracy, precision, recall, F-Measure, and AUC-ROC, which are widely used in the evaluation of the performance of prediction models. Moreover, we applied appropriate statistical procedures, i.e., the Wilcoxon test [50] and Cliff’s delta [51], to understand whether the differences in the performance of the experimented models were significant.
**External Validity.** As for the generalizability of the results, we analyzed ten different systems from different application domains and with different characteristics (size, number of classes, etc.). However, we are aware that our study is based on systems developed in Java only, and therefore future investigations aimed at corroborating our findings on a different set of systems would be worthwhile.
**VI. Conclusion**
Predicting the classes most likely to change in the future is an effective way to focus preventive maintenance activities on specific parts of a software system. While several researchers relied on code or evolution metrics to build change prediction models, little knowledge is available on the actual usefulness of developer-related factors in this context. This paper aimed at bridging this gap by providing an empirical analysis of the performance achieved by three developer-based change prediction models on a set of ten software systems. Specifically, the contributions made by this paper are:
1) **An empirical investigation into the role of developer-related factors in change prediction.** To this aim, we analyzed the performance attained by three prediction models relying on metrics able to capture the complexity of the development process under different perspectives [19], [18], [20].
2) **A comparison between developer-based and state-of-the-art change prediction models.** We compared the prediction capabilities of developer-based models with two baseline approaches, i.e., the Evolution Model [17] and the Code Metric model [3].
3) **An analysis of the complementarity between the investigated models.** We evaluated the orthogonality of the different experimented models by computing overlap metrics and providing qualitative examples to understand under which situations a given model performs better than others.
The achieved results provide several findings:
- Developer-based change prediction models generally show good performance. Among them, the DCBM proposed by Di Nucci et al. [19] shows the best performance, reaching an overall F-Measure of 66% and an accuracy of 78%.
- Developer-based change prediction models work better than a model built using code metrics. In particular, when developers apply focused modifications in a given time period they are able to keep the complexity of the source code under control even in the cases where the code metrics highlight design issues.
- The studied models show interesting complementarities, indicating that different metrics are suitable for predicting the change-proneness of different classes.
Our observation of complementarity of models using different sources of information is our main input for future research in this field. Indeed, we plan to define a change prediction model that efficiently combines different sources of information. We also plan to corroborate our results on a larger set of software systems. Finally, a very important next step that we envision is to perform an extensive analysis of a wide range of maintainability problems and how they are impacted by developer-related factors. Part of this analysis is to study the relationship between these developer-related factors and the interplay between change-proneness and fault-proneness.
**References**
# Contents
- 1 Introduction
- 2 Product Contents
  - 2.1 Additional Information for Intel-provided Debug Solutions
  - 2.2 Additional Information for Microsoft Visual Studio Shell* for Intel® Visual Fortran
  - 2.3 Intel® Software Manager
- 3 What's New
  - 3.1 Intel® Xeon Phi™ Product Family Updates
- 4 System Requirements
  - 4.1 Processor Requirements
  - 4.2 Disk Space Requirements
  - 4.3 Operating System Requirements
  - 4.4 Memory Requirements
  - 4.5 Additional Software Requirements
- 5 Installation Notes
  - 5.1 Installation on macOS*
  - 5.2 Some Features Require Installing as Root
  - 5.3 Online Installation
  - 5.4 Silent Install
  - 5.5 Using a License Server
- 6 Documentation
- 7 Issues and Limitations
- 8 Technical Support
- 9 Attributions for Intel® Math Kernel Library
- 10 Legal Information
## 1 Introduction
On completing the Intel® Parallel Studio XE installation process, locate the getstart*.htm file in the documentation_2019/en/ps2019 folder under the target installation path. This file is a documentation map to navigate to various information resources of Intel® Parallel Studio XE.
When you install Intel® Parallel Studio XE, we collect information that helps us understand your installation status and environment. Information collected is anonymous and is not shared outside of Intel. See https://software.intel.com/en-us/articles/data-collection for more information on what is collected and how to opt-out.
## 2 Product Contents
The following table shows which Intel® Software Development Tools are present in each edition of Intel® Parallel Studio XE 2019.
<table>
<thead>
<tr>
<th>Tool</th>
<th>Composer Edition(^1)</th>
<th>Professional Edition</th>
<th>Cluster Edition</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intel® C++ Compiler</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Fortran Compiler / Intel® Visual Fortran</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Distribution for Python*</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Integrated Performance Primitives (Intel® IPP)</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Math Kernel Library (Intel® MKL)</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Data Analytics Acceleration Library (Intel® DAAL)(^2)</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Threading Building Blocks (Intel® TBB)</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel-provided Debug Solutions</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Microsoft Visual Studio Shell* for Intel® Visual Fortran (for Windows* OS only)</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Advisor</td>
<td></td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Inspector</td>
<td></td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® VTune™ Amplifier</td>
<td></td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Intel® Cluster Checker (For Linux* OS only)</td>
<td></td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>Intel® MPI Benchmarks</td>
<td></td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>Intel® MPI Library</td>
<td></td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>Intel® Trace Analyzer and Collector</td>
<td></td>
<td></td>
<td>X</td>
</tr>
</tbody>
</table>
\(^1\) Intel® Parallel Studio XE is only available in Composer Edition for macOS*.
\(^2\) Intel® Integrated Performance Primitives, Intel® Data Analytics Acceleration Library, and Intel® Threading Building Blocks are not included in Fortran language only editions.
The table below lists the product tools and related documentation.
<table>
<thead>
<tr>
<th>Tool</th>
<th>Version</th>
<th>Documentation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intel® Advisor</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® C++ Compiler</td>
<td>19.0 Update 2</td>
<td>get_started_wc.htm for Windows* OS<br>get_started_lc.htm for Linux* OS<br>get_started_mc.htm for macOS*</td>
</tr>
<tr>
<td>Intel® Cluster Checker (For Linux* OS only)</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® Data Analytics Acceleration Library (Intel® DAAL)</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® Distribution for Python*</td>
<td>2019 Update 2</td>
<td></td>
</tr>
<tr>
<td>Intel® Fortran Compiler / Intel® Visual Fortran Compiler</td>
<td>19.0 Update 2</td>
<td>get_started_wf.htm for Windows* OS<br>get_started_lf.htm for Linux* OS<br>get_started_mf.htm for macOS*</td>
</tr>
<tr>
<td>Intel® Inspector</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® Integrated Performance Primitives (Intel® IPP)</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® Math Kernel Library (Intel® MKL)</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® MPI Benchmarks</td>
<td>2019 Update 1</td>
<td>ReadMe_IMB.txt<br>IMB_Users_Guide.htm</td>
</tr>
<tr>
<td>Intel® MPI Library</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® Threading Building Blocks (Intel® TBB)</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® Trace Analyzer and Collector</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel® VTune™ Amplifier</td>
<td>2019 Update 2</td>
<td>get_started.htm</td>
</tr>
<tr>
<td>Intel-provided Debug Solutions</td>
<td></td>
<td>See below for additional information.</td>
</tr>
<tr>
<td>Microsoft Visual Studio Shell* for Intel® Visual Fortran (For Windows* OS; installs only on the master node)</td>
<td>2019 Update 2</td>
<td>See below for additional information.</td>
</tr>
</tbody>
</table>
### 2.1 Additional Information for Intel-provided Debug Solutions
### 2.2 Additional Information for Microsoft Visual Studio Shell* for Intel® Visual Fortran
A Fortran-only Integrated Development Environment (IDE) based on Microsoft Visual Studio Shell 2015* is provided for systems that do not have a supported Microsoft Visual Studio installed. Installation of the Fortran IDE has the following additional requirements:
- Microsoft Windows 7 SP1* or newer, or Microsoft Windows Server 2012* or newer operating system
- On Windows 8.1* and Windows Server 2012 R2*, KB2883200 is required
- Microsoft Windows 10 SDK*
#### 2.2.1 Microsoft Visual Studio Shell Deprecation
Microsoft* has announced the stand-alone Microsoft Visual Studio Shell* will not be available for Visual Studio 2017. As such, starting with Intel® Parallel Studio XE 2019 U3 (all editions), we will no longer be providing a standalone shell. An integrated shell is available as part of the full Microsoft Visual Studio bundle. Please refer to https://visualstudio.microsoft.com/vs/ for further information on the Microsoft Visual Studio product offerings.
### 2.3 Intel® Software Manager
On Windows* OS only, the installation includes the Intel® Software Manager, which provides a simplified delivery mechanism for product updates as well as current license status and news for all installed Intel® Software Development Products.
Intel® Software Manager has been removed from the Linux* and macOS* versions of Intel® Parallel Studio XE.
## 3 What's New
This section highlights important changes from previous product versions. For more information on what is new in each tool, see the individual tool release notes. Documentation for all tools is online at https://software.intel.com/en-us/intel-software-technical-documentation.
Changes in Intel® Parallel Studio XE 2019 Update 2:
- All tools updated to the latest version.
- Intel® Parallel Studio XE 2019 Update 2 includes functional and security updates. Users should update to the latest version.
- Support for the following operating systems is being deprecated:
  - Red Hat Enterprise Linux* 6
  - Ubuntu* 14.04 LTS, 18.10
  - Fedora* 27, 28, 29
  - SUSE Linux Enterprise Server* 11
  - Debian* 8
  - Microsoft Windows* 7, Server 2012
  - macOS* 10.13
- Support for the following IDEs is being deprecated:
  - Microsoft Visual Studio* 2013, 2015
  - Xcode* 9.x
- Support for the Microsoft Visual Studio Shell* is being deprecated.
Changes in Intel® Parallel Studio XE 2019 Update 1:
- All tools updated to the latest version.
- Japanese localization support.
- Removed 32-bit content for macOS*.
- Intel® Advisor:
  - Added ability to switch between "all integer operations" and "pure compute integer operations" in the Survey Grid column settings.
  - Added ability to export Integer and INT+FLOAT operations Roofline HTML report via the command line interface.
  - Added ability in the Integrated Roofline preview to select the mode of memory-related metrics by cache level and memory operations type in the Survey Grid column settings.
- Intel® Data Analytics Acceleration Library:
  - Added support for Apache Maven*.
  - Introduced support for MT2203 random number generators.
  - Decision forest API changes.
  - The LBFGS algorithm now supports automatic step-length selection on each iteration.
- Intel® Distribution for Python*:
  - Added new method for installing and upgrading.
  - Introduced a new high-level Python* API for Intel® DAAL (daal4py), replacing pydaal. PyDAAL support will be deprecated in the Intel® Parallel Studio XE 2021 release.
  - Added access to Intel® MKL runtime settings through an easy-to-use Python control package (mkl-service).
- Intel® Inspector:
  - Bug fixes.
- Intel® Integrated Performance Primitives:
  - Added Custom Library Tool for Python*.
  - Optimized ippsFIRMR32f_32fc for Intel® Advanced Vector Extensions 2 and Intel® Advanced Vector Extensions 512.
  - Added example of pipeline in Intel® IPP TL.
- Intel® Math Kernel Library:
  - Introduced Universal Windows* Driver support.
  - Improved performance of specific BLAS, LAPACK, and FFT functions.
- Intel® MPI Library:
  - Improved performance.
  - Added I_MPI_* environment variables spell checker.
  - Customized libfabric-1.7.0 alpha sources and binaries are updated; internal OFI is now used by default.
- Intel® Threading Building Blocks:
  - Doxygen documentation can now be built with the 'make doxygen' command.
  - Enforced 8-byte alignment for tbb::atomic<long long> and tbb::atomic<double>.
  - Added constructors with HashCompare argument to concurrent_hash_map.
- Intel® Trace Analyzer and Collector:
  - Bug fixes.
- Intel® VTune™ Amplifier:
  - Extended threading analysis with the lower-overhead hardware event-based sampling mode.
  - Added metrics and Top 5 Hotspots table to the Hotspots command line report.
  - Added a sample matrix project to the Project Navigator.
Changes in Intel® Parallel Studio XE 2019:
- All tools updated to the latest version.
- Intel® Distribution for Python* integrated into Intel® Parallel Studio XE.
- Added support for Conda packaging.
- Installation statistics are GDPR compliant.
- Added native method to elevate privileges on Linux* and macOS*.
- Added support for tbb4py
- Added support for Xcode* 9.4 on macOS*.
- Deprecated support for Microsoft Windows* 7.
- Deprecated support for Microsoft Visual Studio* 2013.
- Removed support for IA-32 targets in macOS*.
- Added required digital certificates on Microsoft Windows*.
- Updated Intel® Parallel Studio XE Getting Started documentation format and structure.
- Removed Intel® Xeon Phi™ related components.
- Removed support for Intel® Graphics Technology compiler.
- Removed Intel® Debugger for Heterogeneous Compute.
- Added support for GDB 8.0.1 in Intel® C/C++ Compiler and Intel® Fortran Compiler.
- Intel® Advisor:
  - Preview feature: Integrated Roofline showing which exact memory layer is the bottleneck for each loop.
  - Added Advisor macOS* interface to view and analyze data collected on Linux* or Microsoft Windows*.
  - Flow Graph Analyzer: New rapid visual prototyping environment to interactively build, validate, and visualize algorithms.
- Intel® C/C++ Compiler:
  - The option openmp-simd is now set by default.
  - Added support for exclusive scan SIMD and user-defined induction for OpenMP* parallel pragmas.
  - Added support for more C++17 features.
- Intel® Cluster Checker:
  - New output format with overall summary and extended output containing simplified scheme to assess issues.
  - Simplified execution of Intel® Cluster Checker with a single command.
  - Added auto-node discovery when using Slurm*.
- Intel® Data Analytics Acceleration Library:
  - Enabled support for user-defined data modification procedure in CSV and ODBC data sources.
- Intel® Distribution for Python*:
  - Intel® Distribution for Python* now integrated into the Intel® Parallel Studio XE 2019 installer. Also available as an easy command line standalone install.
  - Faster machine learning with Scikit-learn: Support Vector Machine (SVM) and K-means prediction accelerated with Intel® Data Analytics Acceleration Library.
  - Introduced new XGBoost package with Python* interface to the library (available on Linux* only).
- Intel® Fortran Compiler:
  - Added support for Microsoft Visual Studio* 2017 Build Tools.
  - The option openmp-simd is now set by default.
  - Added support for more Fortran 2018 features.
- Intel® Inspector:
  - Introduced Intel® Inspector – Persistence Inspector feature.
  - Added analysis of potential deadlocks on Read-Write locks.
  - Deprecated support for Microsoft .NET* software.
- Intel® Integrated Performance Primitives:
  - Extended optimization for CLX, CNL in some functions.
  - Initial optimizations for ICX, ICL of Crypto functionality.
  - Developed patch and required API to support ZFP Data Compression.
- Intel® Math Kernel Library:
  - Aligned Intel® Math Kernel Library LAPACK functionality with Netlib LAPACK 3.7.1 and 3.8.0.
  - Significantly (up to 2.5x) reduced memory footprint of ScaLAPACK Eigensolvers P?SY|HE|EV[D|X|R].
  - Improved performance of multiple routines.
- Intel® MPI Library:
  - Added Intel® Omni-Path Architecture PSM2 Multiple-Endpoints (Multi-EP) support.
  - Consolidated all network interfaces into OFI.
  - Added new impi_info utility.
- Intel® Threading Building Blocks:
  - More algorithms in Parallel STL support parallel and/or vector execution policies.
  - Binaries for Universal Windows Driver (vc14_uwd) now link with static Microsoft* runtime libraries, and are only available in commercial releases.
  - Fixed static_partitioner to assign tasks properly in case of nested parallelism.
- Intel® Trace Analyzer and Collector:
  - Removed support of Intel® Trace Collector static libraries on Windows*.
  - GDPR compliance bug fix in installer.
- Intel® VTune™ Amplifier:
  - Introduced Intel® VTune™ Amplifier Platform Profiler tool for low overhead system-wide analysis and insights.
  - Improved workflow for analysis types and configuration.
  - Input and Output analysis on Linux* extended to profile DPDK and SPDK IO API.
Changes in Intel® Parallel Studio XE 2018 Update 3:
- All components updated to current versions.
- Intel® Advisor:
  - Enhanced roofline analysis usability.
  - Added ability to stop MAP analysis by condition to reduce collection overhead.
  - Added ability to specify a number of top hot innermost loops in batch mode.
- Intel® C/C++ Compiler:
  - Added support for parallel and/or vector execution policies in more algorithms.
  - Added specialization of parallel_transform_scan pattern for better performance with floating point types.
- Intel® Math Kernel Library:
  - Improved performance for small problem sizes in certain routines.
  - Improved performance of LAPACK inverse routines.
  - Added optimizations in certain routines for Intel® Advanced Vector Extensions 2 and 512 (Intel® AVX2 and Intel® AVX-512).
- Intel® Threading Building Blocks:
  - Improved support for Flow Graph Analyzer and Intel® VTune™ Amplifier in the task scheduler and generic parallel algorithms.
  - Default device set for opencl_node now includes all the devices from the first available OpenCL* platform.
  - Added template class blocked_rangeNd for a generic multi-dimensional range (requires C++11).
- Intel® VTune™ Amplifier:
  - Added support for SUSE* Linux* Enterprise Server 12 SP3, Red Hat Enterprise Linux* 7 Update 5, Ubuntu* 18.04, and Microsoft Windows* 10 RS4 (user-mode sampling and tracing collection only).
Changes in Intel® Parallel Studio XE 2018 Update 2:
- All components updated to current versions.
- Added support for Xcode* 9.2.
- Intel® Advisor:
  - Improved recommendations: new navigation, parameters for peel/remainder recommendations, and more.
  - Roofline chart improvements: benchmarks on 1 MPI rank per node, guidance on chart, recalculation of roofs for number of threads.
  - Refinement analysis improvements: analyze limited amount of loop iterations to reduce overhead, new footprint metric with precise analytics for loop's first iteration.
- Intel® Data Analytics Acceleration Library:
  - Host application interface has been added to DAAL. Example code is provided.
  - Published experimental DAAL and DAAL extension library technical preview.
  - Gradient boosted trees training algorithm has been extended with inexact splits calculation mode.
- Intel® Integrated Performance Primitives:
  - Extended optimization for Intel® AVX-512 and for Intel® SSE4.2 instruction set.
  - Fixed a problem with incorrect code dispatching for some systems.
- Intel® Inspector:
  - Added support for Ubuntu* 17.10 and Windows* 10 RS3.
- Intel® Math Kernel Library:
  - Improved performance of BLAS level 3 functions and SGEMM/DGEMM on certain instruction sets.
  - Introduced Intel® TBB support of triangular solvers and converters routines.
  - Introduced new capabilities in Intel® Pardiso functionality.
- Intel® MPI Library:
  - Improved shm performance with collective operations.
  - I_MPI_SCHED_YIELD and I_MPI_SCHED_YIELD_MT_OPTIMIZATION are replaced by I_MPI_THREAD_YIELD. See Intel® MPI Library documentation for values.
  - Intel® MPI Library is available to install now in YUM and APT repositories.
- Intel® Threading Building Blocks:
  - Binaries for Universal Windows Driver (vc14_uwd) now link with static Microsoft* runtime libraries, and are only available in commercial releases.
  - Extended flow graph documentation with more code samples.
- Intel® Trace Analyzer and Collector:
  - User interface improvements.
  - Deprecated ITC static libraries on Windows*.
- Intel® VTune™ Amplifier:
  - Preview of CPU/FPGA Interaction analysis for systems with a discrete Intel® Arria® 10 FPGA.
  - HPC workload profiling improvements.
  - Managed runtime analysis improvements.
### 3.1 Intel® Xeon Phi™ Product Family Updates
#### 3.1.1 Intel® Xeon Phi™ 7200 Coprocessor (codenamed Knights Landing coprocessor)
Intel continually evaluates the markets for our products in order to provide the best possible solutions to our customer’s challenges. As part of this on-going evaluation process Intel has
decided to not offer Intel® Xeon Phi™ 7200 Coprocessor (codenamed Knights Landing Coprocessor) products to the market.
- Given the rapid adoption of Intel® Xeon Phi™ 7200 processors, Intel has decided to not deploy the Knights Landing Coprocessor to the general market.
- Intel® Xeon Phi™ Processors remain a key element of our solution portfolio for providing customers the most compelling and competitive solutions possible.
#### 3.1.2 Support for the Intel® Xeon Phi™ x100 product family coprocessor (formerly code name Knights Corner) is removed in this release
The Intel® Xeon Phi™ x100 product family coprocessor (former code name Knights Corner) was officially announced end of life in January 2017. As part of the end of life process, the support for this family will only be available in the Intel® Parallel Studio XE 2017 version. Intel® Parallel Studio XE 2017 will be supported for a period of 3 years ending in January 2020 for the Intel® Xeon Phi™ x100 product family. Support will be provided for those customers with active support.
## 4 System Requirements
### 4.1 Processor Requirements
Systems based on IA-32 architecture are supported as target platforms on Windows* and Linux*. Systems based on the following Intel® 64 architectures are supported both as host and target platforms:
- Intel® Core™ processor family or higher
- Intel® Xeon® E5 v5 processor families recommended
- Intel® Xeon® E7 v5 processor families recommended
NOTE: It is assumed that the processors listed above are configured into homogeneous clusters.
### 4.2 Disk Space Requirements
12 GB of disk space (minimum) on a standard installation. Cluster installations require an additional 4 GB of disk space.
NOTE: During the installation process, the installer may need up to 12 GB of additional temporary disk storage to manage the intermediate installation files.
### 4.3 Operating System Requirements
The operating systems listed below are supported by all tools on Intel® 64 Architecture. Individual tools may support additional operating systems and architecture configurations. See the individual tool release notes for full details.
- Debian* 8 (deprecated), 9
- Fedora* 27 (deprecated), 28 (deprecated)
- Red Hat Enterprise Linux* 6 (deprecated), 7 (equivalent CentOS versions supported, but not separately tested)
- SUSE Linux Enterprise Server* 12, 15
- Ubuntu* 16.04, 18.04
- Microsoft* Windows* 7 (deprecated), 10
- Microsoft* Windows* Server 2012 (deprecated), 2012 R2 (deprecated), 2016
- macOS* 10.13 (deprecated), 10.14
The Intel® MPI Library and Intel® Trace Analyzer and Collector are supported on Intel® Cluster Ready systems and HPC versions of the listed versions of Microsoft* Windows* Server. These tools are not supported on Ubuntu non-LTS systems.
Installation on IA-32 hosts is no longer supported by any tools.
### 4.4 Memory Requirements
2 GB RAM (minimum)
### 4.5 Additional Software Requirements
Development for a 32-bit target on a 64-bit host may require optional library components (ia32-libs, lib32gcc1, lib32stdc++6, libc6-dev-i386, gcc-multilib, g++-multilib) to be installed from your Linux distribution.
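On a Debian- or Ubuntu-based host, the optional packages listed above could be pulled in with a single command. This is only a sketch (package names are taken from the list above, but availability varies by distribution release, and ia32-libs in particular is absent from newer releases), so the command is composed and echoed rather than executed:

```bash
# Hypothetical one-liner for a Debian/Ubuntu host; adjust the package set to
# what your distribution actually provides before running it.
cmd="sudo apt-get install ia32-libs lib32gcc1 lib32stdc++6 libc6-dev-i386 gcc-multilib g++-multilib"
echo "$cmd"   # echoed only -- run manually after checking your distribution
```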
On Microsoft Windows* OS, the Intel® C/C++ Compiler and Intel® Visual Fortran Compiler require a version of Microsoft Visual Studio* to be installed. The following versions are currently supported:
- Microsoft Visual Studio* 2013 (deprecated), 2015 (deprecated), 2017
- Microsoft Visual Studio Express* (only for command line compilation)
On macOS*, the Intel® C/C++ Compiler and Intel® Fortran Compiler require a version of Xcode* to be installed. The following versions are currently supported:
- Xcode* 9 (deprecated), 10
## 5 Installation Notes
For instructions on installing and uninstalling the Intel® Parallel Studio XE see the Installation Guide for your operating system. These are available from the Intel® Software Development Products Registration Center page for Intel® Parallel Studio XE for your operating system. The installation of the product requires a valid license file or serial number.
### 5.1 Installation on macOS*
If you will be using Xcode*, please make sure that a supported version of Xcode is installed. If you install a new version of Xcode in the future, you must reinstall Intel® Parallel Studio XE afterwards.
The **Command Line Tools** component, required for command-line development, is not installed by default. It can be installed using the Components tab of the Downloads preferences panel.
You will need to have administrative or “sudo” privileges to install, change or uninstall the product.
Follow the prompts to complete installation.
Note that there are several different downloadable files available, each providing different combinations of tools. Please read the download web page carefully to determine which file is appropriate for you.
You do not need to uninstall previous versions or updates before installing a newer version – the new version will coexist with the older versions.
### 5.2 Some Features Require Installing as Root
Most Intel® VTune™ Amplifier profiling features work with a non-root install. Many work on either a genuine Intel processor or a compatible processor.
Some advanced features that use event-based sampling require the latest OS kernel or sampling driver to be installed. Intel® Atom™ processors also require this driver for analysis.
To install the driver on a system with a genuine Intel processor, launch the installer as root or ask your system administrator to install the driver later. For information on building and setting up the drivers, see [https://software.intel.com/en-us/sep_driver](https://software.intel.com/en-us/sep_driver).
### 5.3 Online Installation
The electronic installation package for Intel® Parallel Studio XE now offers, as an alternative, a smaller installation package that dynamically downloads and then installs the packages selected for installation. This requires a working internet connection and, if you are behind an internet proxy, a proxy setting. If a working internet connection is not available, full packages are provided alongside the online install package at the download location. The online installer may be downloaded and saved as an executable file, which can then be launched from the command line.
### 5.4 Silent Install
For information on automated or “silent” install capability, please see [http://intel.ly/nKrzhv](http://intel.ly/nKrzhv).
#### 5.4.1 Support of Non-Interactive Custom Installation
Intel® Parallel Studio XE supports the saving of user install choices during an ‘interactive’ install in a configuration file that can then be used for silent installs. This configuration file is created when the following option is used from the command line install:
- `--duplicate=config_file_name`: specifies the configuration file name. If a full path file name is specified, `--download-dir` is ignored and the installable package is created in the directory containing the configuration file.
- `--download-dir=dir_name`: optional; specifies where the configuration file will be created. If this option is omitted, the installation package and the configuration file are created under the default download directory:
  - Windows: `%Program Files%\Intel\Download\<package_id>`
  - Linux: `/tmp/<UID>/<package_id>`
  - macOS: `/Volumes/<package_id>/<package_id>.app/Contents/MacOS/`

For example, `parallel_studio_xe_<version>_setup.exe --duplicate=ic16_install_config.ini --download-dir="C:\temp\custom_pkg_ic16"` creates the configuration file and the installable package under `C:\temp\custom_pkg_ic16`.
### 5.5 Using a License Server
If you have purchased a "floating" license, see [http://intel.ly/pjGfwC](http://intel.ly/pjGfwC) for information on how to install using a license file or license server. This article also provides a source for the Intel® License Server that can be installed on any of a wide variety of systems.
## 6 Documentation
The documentation index file getstart*.htm provides more information about Intel® Parallel Studio XE.
Note: Some hyperlinks in HTML documents may not work when you use Internet Explorer*. Try using another browser, such as Chrome* or Firefox*, or right-click the link, select **Copy shortcut**, and paste the link into a new Internet Explorer* window.
## 7 Issues and Limitations
2. There have been situations where during the installation process, /tmp has been filled up. We recommend that you have **at least 12 GB of free space** in /tmp when installing the Intel® Parallel Studio XE. Also, the installer script install.sh has the command-line options:
```bash
-t [FOLDER]
```
or
```bash
--tmp-dir [FOLDER]
```
where `[FOLDER]` is a directory path that redirects intermediate storage to another disk partition. `[FOLDER]` should be a non-shared storage location on each node of the cluster and should also contain at least 12 GB of free space.
3. On Linux* OS, if any software tool of the Intel® Parallel Studio XE is detected as pre-installed on the head node, that software tool will not be processed by the installer. There is a similar problem on Windows* OS in the 'Modify' mode: if some software tool of the Intel® Parallel Studio XE was pre-installed on the head node using the installer, that software tool will not be installed on the compute nodes of the cluster. For either OS, if you already installed some of the software tools only on the head node and want to install them on the other nodes using the installer, you need to uninstall those tools from the head node manually before starting the installer.
4. Intel® Parallel Studio XE for Windows* OS requires the creation and use of symbolic links for installation of the Intel® software product tools. If you have a File Allocation Table (FAT32) file system deployed on your Windows* OS platform, these symbolic links cannot be created and the integrity of the Intel® Parallel Studio XE installation is compromised.
5. In some situations, if a Windows OS computer has been updated but not restarted and the Visual Studio Shell is to be installed, Intel® Parallel Studio XE installation will fail with the error message “Intel(R) Parallel Studio XE 2019 Cluster Edition for Windows* Setup Wizard ended prematurely because of an error(s).” The failing module is vs_isoshell.exe. To work around this issue, restart your computer and repeat the installation process.
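The temporary-space recommendation in item 2 can be checked before launching the installer. This is only a sketch using standard `df` output, not part of the product:

```bash
# Warn if /tmp (or whatever folder you plan to pass via --tmp-dir) has less
# than the recommended 12 GB of free space.
req_kb=$((12 * 1024 * 1024))                      # 12 GB expressed in KB
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')  # free KB on /tmp
if [ "${avail_kb:-0}" -ge "$req_kb" ]; then
  msg="OK: /tmp has at least 12 GB free"
else
  msg="WARNING: low space in /tmp; consider install.sh --tmp-dir <other-partition>"
fi
echo "$msg"
```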
## 8 Technical Support
Your feedback is very important to us. To receive technical support for the tools provided in this product and technical information including FAQ’s and product updates, you are encouraged to register your product at the Intel® Software Development Products Registration Center.
NOTE: Registering for support differs between released products and pre-release products (alpha, beta, etc.) – only released software products have support web pages at http://software.intel.com/sites/support/.
To register for an account, please visit the Intel® Software Development Products Registration Center website at http://www.intel.com/software/products/registrationcenter/index.htm. If you have forgotten your password, please follow the instructions on the login page for a forgotten password.
Each purchase of Intel® Parallel Studio XE includes a year of support services, which includes priority support at Online Service Center. For more information on Online Service Center please see http://software.intel.com/en-us/support/online-service-center. When submitting a support request, please select the appropriate tool unless your request is related to the entire suite.
## 9 Attributions for Intel® Math Kernel Library
As referenced in the End User License Agreement, attribution requires, at a minimum, prominently displaying the full Intel product name (e.g. "Intel® Math Kernel Library") and providing a link/URL to the Intel® MKL homepage (http://www.intel.com/software/products/mkl) in both the product documentation and website.
The original versions of the BLAS from which that part of Intel® MKL was derived can be obtained from http://www.netlib.org/blas/index.html.
The original versions of LAPACK from which that part of Intel® MKL was derived can be obtained from http://www.netlib.org/lapack/index.html. The authors of LAPACK are E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. Our FORTRAN 90/95 interfaces to LAPACK are similar to those in the LAPACK95 package at http://www.netlib.org/lapack95/index.html. All interfaces are provided for pure procedures.
The original versions of ScaLAPACK from which that part of Intel® MKL was derived can be obtained from http://www.netlib.org/scalapack/index.html. The authors of ScaLAPACK are L. S. Blackford, J. Choi, A. Cleary, E. D'Azvedo, J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley.
The Intel® MKL Extended Eigensolver functionality is based on the Feast Eigenvalue Solver 2.0 http://www.ecs.umass.edu/~polizzi/feast/.
PARDISO in Intel® MKL is compliant with the 3.2 release of PARDISO that is freely distributed by the University of Basel. It can be obtained at http://www.pardiso-project.org.
Some FFT functions in this release of Intel® MKL have been generated by the SPIRAL software generation system (http://www.spiral.net/) under license from Carnegie Mellon University. The Authors of SPIRAL are Markus Puschel, Jose Moura, Jeremy Johnson, David Padua, Manuela Veloso, Bryan Singer, Jianxin Xiong, Franz Franchetti, Aca Gacic, Yevgen Voronenko, Kang Chen, Robert W. Johnson, and Nick Rizzolo.
10 Legal Information
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
*Other names and brands may be claimed as the property of others.
Microsoft, Windows, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.
Java is a registered trademark of Oracle and/or its affiliates.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
Copyright (C) 2011-2019, Intel Corporation. All rights reserved.
This software and the related documents are Intel copyrighted materials, and your use of them is governed by the express license under which they were provided to you (License). Unless the License provides otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit this software or the related documents without Intel's prior written permission.
This software and the related documents are provided as is, with no express or implied warranties, other than those that are expressly stated in the License.
**Optimization Notice**
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804
METHODS AND SYSTEM FOR PROGRAM EXECUTION INTEGRITY MEASUREMENT
Inventors: Perry W. Wilson, Laurel, MD (US); J. Aaron Pendergrass, Silver Spring, MD (US); C. Durward McDonell, III, Olney, MD (US); Peter A. Loscocco, Glenwood, MD (US); David J. Heine, Columbia, MD (US); Bessee Y. Lewis, Owings Mills, MD (US)
Assignee: The Johns Hopkins University, Baltimore, MD (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 613 days.
Appl. No.: 11/743,284
Filed: May 2, 2007
Prior Publication Data
Related U.S. Application Data
Provisional application No. 60/796,694, filed on May 2, 2006.
Int. Cl.
G06F 19/00 (2006.01)
U.S. Cl. 702/186; 702/123; 717/124
Field of Classification Search 702/108, 702/123, 127, 186; 714/37, 38; 726/22; 717/124, 126
See application file for complete search history.
ABSTRACT
The present disclosure is directed towards methods and systems for measuring the integrity of an operating system's execution and ensuring that the system's code is performing its intended functionality. This includes examining the integrity of the code that the operating system is executing as well as the data that the operating system accesses. Integrity violations can be detected in the dynamic portions of the code being executed.
4 Claims, 8 Drawing Sheets
FIGURE 2 (file data structures inspected during measurement): the kernel struct file, with fields such as f_list, f_dentry, f_vfsmnt, f_op, f_count, and f_flags, and the struct file_operations function-pointer table it references, with entries such as owner, lseek, read, write, readdir, poll, ioctl, mmap, open, flush, release, and get_unmapped_area.
FIGURE 4
FIGURE 6(a) (excerpt of a measurement-result tree): static memory regions (stext at c0102000 with its SHA-1 hash, cpu_gdt_table, sys_call_table), system call table inspection, virtual file system inspection (super_blocks, inode_in_use, block IO), binary file formats (the formats list and elf_format, whose entries include load_elf_binary, load_elf_library, and elf_core_dump), networking, and SELinux.
FIGURE 6(b)
FIGURE 7(a)
FIGURE 7(b)
METHODS AND SYSTEM FOR PROGRAM EXECUTION INTEGRITY MEASUREMENT
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of prior filed, co-pending U.S. provisional application: Ser. No. 60/796,694, filed on May 2, 2006, which is incorporated herein by reference in its entirety.
STATEMENT OF GOVERNMENTAL INTEREST
This invention was made with Government support under Department of Defense contract DAAH04-02-D-0302. The Government has certain rights in the invention.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to integrity measurement and, more particularly, to methods and system to verify the integrity of a software-based computer system.
2. Description of the Related Art
The computer industry has shown increased interest in leveraging integrity measurements to gain more confidence in general-purpose computing platforms. The concern regarding this trend is that the approach to integrity measurement promoted by new security technologies has yet to sufficiently mature for realization of integrity measurement’s potential security impact.
In the general sense, measurement is a process of characterizing software. There are any number of ways in which the same piece of software could be characterized, each potentially resulting in a different measurement technique. The reasons for measuring a piece of software are varied, with some measurement techniques being more appropriate than others.
One common technique is hashing. A hash is computed over static regions of the software and used as the characterization. Although hashes are easily computed, stored, and used, hashing is by no means the only possible measurement technique. Existing measurement systems tend to rely on hashes of security-relevant objects such as the BIOS, the executable code of an operating system, or the contents of configuration files. Hashing is extremely effective as a measurement technique in certain circumstances. However, hashing does not always produce results that allow a complete determination of integrity.
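The hashing technique described here can be sketched in a few lines. This is an illustrative toy, not part of the patent: the region name, the baseline contents, and the choice of SHA-256 are all assumptions.

```python
import hashlib

def measure(data: bytes) -> str:
    """Characterize a static region by hashing its contents."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical baseline of known-good values, e.g. recorded at install time.
baseline = {"kernel_text": measure(b"\x55\x89\xe5\xc3")}

def appraise(region: str, data: bytes) -> bool:
    """Decision step: a fresh measurement must match the recorded baseline."""
    return measure(data) == baseline.get(region)
```

Any change to the measured bytes flips the appraisal to False, which is exactly why hashing works well for targets not expected to change and poorly for dynamic state.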
A fundamental property of an Integrity Measurement System (IMS) is the use of measurement data as supporting evidence in decisions about the integrity of a target piece of software. An ability to produce accurate assessments of software integrity allows an IMS to contribute significantly to security in many scenarios. Without measurement techniques appropriate to the decision for a given scenario, an IMS cannot correctly determine integrity.
For example, to the user of a system, an IMS could help determine if the system is in a sufficiently safe state to adequately protect data. It could help determine the pedigree of the provider of a service or software, as well as the software itself. An Information Technology (IT) department could benefit from an IMS to help ensure that systems connected to its network are indeed in some approved configuration. To a service provider, an IMS enables decisions about granting a particular service to include statements about the integrity of the requesting system and/or application. In each of these scenarios, the reasons for needing an integrity decision, as well as the type of measurement data suitable for that decision might be different.
There are multiple ways in which an IMS architecture could be implemented, four of which are shown in FIG. 1. They share several common elements: a measurement agent (MA), a target of measurement (T), and a decision maker (DM). An MA collects measurement data about T using some appropriate measurement technique. The MA needs to have access to T’s resources and be able to hold the measurement data until needed. The DM acts as a validator or appraiser responsible for interpreting measurement data in support of integrity decisions. In an IMS that uses hashing for measurement, this component would likely be responsible for comparing hashes to known good values. Lastly, an IMS must have a means of presenting collected data to a DM. Depending on the implementation, this could be as simple as displaying measurements to a user or administrator, but more complex systems require protocols for communicating the authenticity and integrity of measurement data to the DM.
One common notion of an IMS has the MA and T co-resident on the user’s platform, while the DM runs on a separate machine controlled by the owner. Measurement data is transferred to the DM using an attestation protocol. However, it should be noted that many other possible layouts for an IMS are also appropriate.
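A minimal sketch of this component split, with the MA and DM as separate objects exchanging measurement records, may help fix the vocabulary. The class and field names are illustrative, not taken from any real IMS, and the attestation protocol is reduced to passing a record.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Measurement:
    """Record produced by the MA and later appraised by the DM."""
    target: str
    digest: str

class MeasurementAgent:
    """Collects measurement data about a target of measurement (T)."""
    def measure(self, target: str, contents: bytes) -> Measurement:
        return Measurement(target, hashlib.sha256(contents).hexdigest())

class DecisionMaker:
    """Interprets measurement data, here by comparison to known-good values."""
    def __init__(self, known_good: dict):
        self.known_good = known_good

    def appraise(self, m: Measurement) -> bool:
        return self.known_good.get(m.target) == m.digest
```

In a deployed IMS the two classes would run in separate protection domains (or on separate machines), and the record would travel over an attestation protocol that also conveys its authenticity.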
When designing an IMS to meet the needs for any given scenario, how the above-mentioned components are integrated into the system, as well as properties of each of them, can greatly impact the effectiveness of the system’s ability to provide the quality of measurement data necessary for the DM to provide desired security benefits. These design choices also impact the ability of a given IMS to support multiple scenarios for the same platform. The design of an IMS tailored to a specific scenario is likely to differ greatly from one intended to serve a more general purpose. Considering an IMS in terms of these component pieces yields different dimensions by which an IMS can be evaluated.
The use of an IMS raises privacy concerns. Owners of measurement targets may be hesitant to release certain types of measurements to a DM for a variety of valid reasons. IMS component design impacts an IMS’s ability to adequately address privacy concerns.
Measurement deals with what might be described as expectedness. Eventually a decision will be needed to determine if the software relied upon for a critical function is indeed the expected version that was previously determined to be trustworthy to perform the function and is either in a known good state or perhaps not in a known bad state. A suitable measurement process must produce data sufficient for an IMS to make this determination.
In order to assess the sufficiency of any measurement process, the measurement data’s intended purpose must be understood. A technique deemed sufficient for one measurement scenario might prove completely inadequate for another. An IMS’s DM and how it relies on integrity evidence for security will ultimately determine if a given measurement technique is suitable.
Integrity measurements are evidence to be used in decisions relevant to the execution of a piece of software or some other software that depends on it. These decisions require an assessment of software state and perhaps its environment. Since any such decision’s validity will rely on the quality of the evidence, where quality is reflected in terms of how accurately the measurement data characterizes those portions of the software relevant to the pending decision, it is useful to consider integrity measurement techniques based on their
potential to completely characterize a target irrespective of scenario or system. Techniques with greater potential for complete characterization should be considered better suited for decision processes requiring a true measure of integrity.
Besides understanding a measurement process' ability to characterize the target, there are other characteristics of an IMS’s MA useful for examining its sufficiency for producing adequate evidence of a target’s expectedness. Among them are a MA’s ability to produce all evidence required by the IMS’s DM and to reflect in that evidence the current state of the potentially executing target.
In order to discuss integrity measurement systems, it is necessary to have a common measurement vocabulary. With a suitable vocabulary, it becomes possible to assess and compare measurement techniques to determine their suitability in a given IMS for particular measurement scenarios. It would also be useful for describing how the different components of an IMS have been integrated to meet functional and security requirements.
Six properties of the measurement component of an IMS serve as the beginning of such a vocabulary. They provide several dimensions that have proven useful not only to assess and compare existing IMS but have also helped motivate the design of new IMS. These are not the only dimensions in which an IMS could be discussed, and these properties are not intended to be canonical. They do, however, form a good framework for discussions about important aspects of IMS. The measurement component of an IMS should:
- Produce Complete results. A MA should be capable of producing measurement data that is sufficient for the DM to determine if the target is the expected target as required for all of the measurement scenarios supported by the IMS.
- Produce Fresh results. A MA should be capable of producing measurement data that reflects the target’s state recently enough for the DM to be satisfied that the measured state is sufficiently close to the current state as required for all of the measurement scenarios supported by the IMS.
- Produce Flexible results. A MA should be capable of producing measurement data with enough variability to satisfy potentially differing requirements of the DM for the different measurement scenarios supported by the IMS.
- Produce Usable results. A MA should be capable of producing measurement data in a format that enables the DM to easily evaluate the expectedness of the target as required for all of the measurement scenarios supported by the IMS.
- Be Protected from the target. An MA should be protected from the target of measurement to prevent the target from corrupting the measurement process or data in any way that the DM cannot detect.
- Minimize impact on the target. An MA should not require modifications to that target nor should its execution negatively impact the target’s performance.
Tripwire (G. Kim and E. Spafford, The Design and Implementation of Tripwire: A File System Integrity Checker, Purdue University, November 1993) was an early integrity monitoring tool. It allowed administrators to statically measure systems against a baseline. Using Tripwire enables complete integrity measurement of file system objects such as executable images or configuration files. These measurements, however, cannot be considered complete for the runtime image of processes. Tripwire provides no indication that a particular file is associated with an executing process, nor can it detect the subversion of a process.
Tripwire performs well with respect to freshness of measurement data, and the impact on the target of measurement. Remeasurement is possible on demand, enabling the window for attack between measurement collection and decision making to be quite small. Since Tripwire is an application, installation is simple and its execution has little impact on the system. But because it is an application, the only protection available is that provided by the target system, making Tripwire’s runtime process and results vulnerable to corruption or spoofing.
Tripwire is also limited with respect to flexibility and usability. Decision makers may only base decisions on whether or not a file has changed, not on the way in which that file has changed. Tripwire cannot generate usable results for files which may take on a wide variety of values. These limitations are generally characteristic of measurement systems that rely on hashes, making them most effective on targets not expected to change.
IMA (R. Sailer, X. Zhang, et al., Design and implementation of a TCG-based integrity measurement architecture, Proceedings of the 13th Usenix Security Symposium, pages 223-238, August 2004) and systems like Prima (T. Jaeger, R. Sailer, and U. Shankar, Prima: Policy-reduced integrity measurement architecture, SACMAT’06: Proceedings of the Eleventh ACM Symposium on Access Control Models and Technologies, 2006) which build upon its concepts appear very similar to Tripwire when considered with respect to the described properties, but they do offer significant improvements. IMA’s biggest advance is the protection of the measurement system and its data. Because it is a kernel module rather than user-land process, it is immune to many purely user-space attacks that might subvert the Tripwire process. However, it is still vulnerable to many kernel-level attacks. Subversion of IMA’s measurement results is detectable by comparing a hash value stored in the TPM with the expected value generated from the measurement system’s audit log.
IMA makes more complete measurements of running processes than Tripwire because IMA is able to associate running processes with the recorded hash values. However, results only reflect static portions of processes before execution begins. Because no attempt is made to capture the current state of running processes, fresh measurements cannot be provided to any decision process requiring updated measurements of the running process.
PRIMA extends the IMA concept to better minimize the performance impact on the system. By coupling IMA to SELinux policy (P. Loscocco and S. Smalley, Integrating flexible support for security policies into the Linux operating system, Proceedings of the FREENIXTrack, June 2001) the number of measurement targets can be reduced to those that have information flows to trusted objects. This may also aid completeness in that measurement targets can be determined by policy analysis. The requirement for trusted applications to be PRIMA aware and required modifications to the operating system are development impacts on the target.
CoPilot (N. Petroni, Jr., T. Fraser, et al., CoPilot—a coprocessor-based kernel runtime integrity monitor, Proceedings of the 13th Usenix Security Symposium, pages 179-194, August 2004) pushes the bar with respect to completeness, freshness and protection. Cryptographic hashes are still used to detect changes in measured objects, but unlike other systems, CoPilot’s target of measurement is not the static image of a program and configuration files but the memory image of a running system. It also attempts to verify the possible execution paths of the measured kernel. The ability to inspect the runtime memory of the target is an improvement over file system hashes because it enables decisions about runtime state. Protection from the target is achieved by using a physically separate processing environment in the form of a PCI expansion card with a dedicated processor.
Although a considerable advance, CoPilot fails as a complete runtime IMS in two key ways. It cannot convincingly associate hashed memory regions with those actually in use by the target. It can only measure static data in predefined locations; dynamic state of the target is not reflected. The requirement of additional hardware in the target environment also impacts the target.
Other measurement systems have been developed. Unlike those discussed so far, some use computations on or about the target system rather than employing a more traditional notion of measurement such as hashing. One such system is Pioneer (A. Seshadri, M. Luk, et al., Pioneer: Verifying code integrity and enforcing untampered code execution on legacy systems, ACM Symposium on Operating Systems Principles, October 2005). It attempts to establish a dynamic root of trust for measurement without the need for a TPM or other hardware enhancements. The measurement agent is carefully designed to have a predictable run time and an ability to detect preemption. The measurement results can be fresh but are far from a complete characterization of the system. In theory, though, this approach could support more complete measurement as long as the property of preemption detection is preserved.
Pioneer was designed to detect attempts of the target to interfere with the measurement agent, but it requires the difficult condition that the verifier be able to predict the amount of time elapsed during measurement. The impact on the target system can also be great because, in order to achieve the preemption detection property, all other processing on the target has to be suspended for the entire measurement period.
Semantic integrity is a measurement approach targeting the dynamic state of the software during execution, therefore providing fresh measurement results. Similar to the use of language-based virtual machines for remote attestation of dynamic program properties (N. Haldar, D. Chandra, and M. Franz, Semantic remote attestation—a virtual machine directed approach to trusted computing, Proceedings of the 3rd USENIX Virtual Machine Research & Technology Symposium, May 2004), this approach can provide increased flexibility for the challenger. If the software is well understood, then semantic specifications can be written to allow the integrity monitor to examine the current state and detect semantic integrity violations. This technique alone will not produce complete results as it does not attempt to characterize the entire system, but it does offer a way in which integrity evidence about portions of the target not suitable for measurement by hashing can be produced.
Such an approach has been shown effective in detecting both hidden processes and SELinux access vector cache inconsistencies in Linux (N. Petroni Jr, T. Fraser, et al., An architecture for specification-based detection of semantic integrity violations in kernel dynamic data, Security ’06: 15th USENIX Security Symposium, 2006). A very flexible system was produced that can be run at any time to produce fresh results and that is easily extended to add new specifications. Better completeness than is possible from just hashing is achieved since kernel dynamic data is measured, but no attempt was made to completely measure the kernel. Completeness can only come with many additional specifications. Like CoPilot, a separate hardware environment was used to protect the measurement system from the target and to minimize the impact on the target at the cost of having extra hardware installed. However, it is subject to the same limitations as CoPilot.
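As a concrete illustration, the hidden-process check mentioned above amounts to a cross-view invariant: every scheduled task must also be visible in the system's task list. A toy version of such a semantic specification, with the kernel state simulated as plain sets of task IDs:

```python
def hidden_processes(all_tasks: set, run_queue: set) -> list:
    """Semantic integrity specification: report tasks that are runnable
    but absent from the task list an administrator would see."""
    return sorted(run_queue - all_tasks)
```

A real monitor would walk the kernel's task list and run queues in memory; the point is only that the specification is a predicate over dynamic state, not a hash of static bytes.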
SUMMARY OF THE INVENTION
Therefore, the present invention has been made in view of the above problems, and it is an objective of the present invention to provide methods and system for verifying the integrity of a software-based computer system.
In accordance with one aspect of the present invention, the above-mentioned objective is achieved by providing a method for measuring and verifying the integrity of a computer program, the computer program comprising a plurality of modules, each module comprising a plurality of data objects comprising static and dynamic objects, the method comprising the steps of:
- identifying the plurality of data objects using a plurality of attributes relevant to the computer program integrity to produce a baseline of the plurality of data objects from a stored image of the computer program;
- measuring an image of the computer program in a memory without modifying the computer program to produce a measurement manifest comprising the steps of:
- inspecting the identified plurality of data objects;
- generating an object graph for each data object; and
- using the object graphs to produce the measurement manifest; and
- comparing the baseline and the measurement manifest to verify the integrity of the computer program.
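The steps above can be sketched as follows. The flat address-to-object store and the hash-based characterization are simplifying assumptions for illustration, not details fixed by the claims.

```python
import hashlib

def build_manifest(store, static_roots):
    """Inspect objects starting from static (compile-time) addresses,
    following references to discover dynamic objects, and record a
    measurement for each node of the resulting object graph."""
    manifest = {}
    worklist = list(static_roots)
    while worklist:
        addr = worklist.pop()
        if addr in manifest:
            continue
        state, refs = store[addr]   # store: addr -> (state bytes, refs)
        manifest[addr] = hashlib.sha256(state).hexdigest()
        worklist.extend(refs)       # references reveal dynamic objects
    return manifest

def verify(baseline, manifest):
    """Compare baseline and manifest; return addresses that differ."""
    return sorted(a for a, h in manifest.items() if baseline.get(a) != h)
```

Because each object is measured individually, a mismatch pinpoints the corrupted object, which is what makes partial re-measurement and fine-grained policy enforcement possible.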
In accordance with another aspect of the present invention, the above-mentioned objective is achieved by inserting an alert to trigger the computer program whose integrity has been verified to independently measure and verify the integrity of a new module before the new module is loaded into a memory.
In accordance with another aspect of the present invention, the above-mentioned objective is achieved by providing a method for measuring and verifying the integrity of a computer program and modules being loaded from a stored location into a memory comprising the steps of:
- calculating an image of the computer program in the memory using an image of the computer program in the stored location, the relevant runtime information, and knowledge of how the computer program will be loaded into the memory;
- comparing the image of the computer program in the memory with the calculated image of the computer program in the memory; and
- using the comparison to verify the integrity of the computer program in the memory.
In accordance with another aspect of the present invention, the above-mentioned objective is achieved by providing a method for measuring and verifying the integrity of a computer program, the method comprising the steps of decomposing the integrity measurement into a plurality of distinct measurement classes, each measurement class representing a semantically related grouping of variables which have been examined to produce a characterization of an isolated subset of the computer program’s state; and connecting to each measurement class a structured representation of the measurement of those objects which contribute to the overall measurement of that class.
Program execution integrity is an inventive approach for measurement and verification of computer program integrity. The unique features include: dynamic data inspection, event triggers, and a manifest of results. Data objects are inspected during runtime to provide an increased level of confidence in the integrity of the running program. False integrity failures due to dynamic changes at runtime are prevented via runtime monitoring and triggers inserted into program code. Measurement results are time-stamped and stored in a manifest.
Data objects are identified by security-relevant attributes: state values, function pointers, and references to other objects. Static objects are located by the address assigned at compile time. Measurement begins by inspecting the static objects of interest, which include containers of dynamic objects. References found in objects being inspected reveal other dynamic objects. The object graph, with state information and function pointers for each node, is captured in the measurement manifest. The fine granularity of results facilitates partial re-measurement and flexible policy enforcement during verification.
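The inspection process described above, which starts from static roots and follows references to dynamic objects, can be sketched as a graph traversal. The heap layout and object fields below are invented for illustration; real objects would be decoded from target memory.

```python
# Illustrative sketch of the inspection step: starting from static root
# objects at known (compile-time) addresses, follow references to discover
# dynamic objects, recording state values and function pointers for each
# node of the object graph.

def inspect(roots, heap):
    """Walk the object graph and build a measurement manifest keyed by address."""
    manifest = {}
    worklist = list(roots)
    while worklist:
        addr = worklist.pop()
        if addr in manifest:
            continue  # already measured; object graphs may contain cycles
        obj = heap[addr]
        manifest[addr] = {"state": obj.get("state"),
                          "fptrs": obj.get("fptrs", [])}
        # References found in the object reveal further dynamic objects.
        worklist.extend(obj.get("refs", []))
    return manifest

heap = {
    0x100: {"state": "sb", "fptrs": [0xc017], "refs": [0x200]},    # static root
    0x200: {"state": "inode", "fptrs": [0xc018], "refs": [0x100]}, # cycle back
}
m = inspect([0x100], heap)
print(sorted(m))  # both objects are measured exactly once
```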
To support dynamic module loading, a trusted computer program responds to triggers by verifying modules and updating the measurement baseline. This fail-safe approach is designed to prevent false failures of integrity checks without impacting the capability of detecting the insertion of foreign program code. The program triggers the monitor before loading a new module. The monitor independently verifies the integrity of the new code before it is loaded into memory. The security relevant attributes of the module are computed based upon the image stored on disk and its target location in memory. These trusted attributes are entered in the baseline and the static objects of the new module are added to the list of items being measured.
The effectiveness of this approach has been demonstrated in a Linux Kernel Integrity Monitor (LKIM) embodiment. LKIM baselines the built-in operation structures from a kernel image on disk. It measures a kernel image in memory without the need to modify the existing kernel. The baseline and measurement processes each produce a textual form that can be used to verify a runtime measurement with the baseline. LKIM extends measurement to modules by introducing small modifications to the Linux kernel in order to provide triggering events for the measurement process. The module baseline is produced dynamically by computing the hash of the module text in memory from the module file on disk and the location in memory where the module is being loaded. The module file on disk is also hashed and recorded. When later measurements are performed, any operational structures introduced by the module will not cause a false failure of integrity.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other objects, features and advantages of the invention will be apparent from the following Detailed Description Of The Invention considered in conjunction with the drawing Figures, in which:
FIG. 1, comprising FIGS. 1(a)-1(d), illustrates four possible IMS layouts: 1(a) the MA shares the Target's execution environment and the DM is on a physically distinct host; 1(b) the MA runs on the same host as its target but is isolated using, e.g., dedicated hardware or virtualization and the DM remains on a physically separate host; 1(c) the MA, DM, and Target are each on a dedicated host; and 1(d) the MA and DM share a single host execution environment and the Target runs on a dedicated host.
FIG. 2 illustrates a file data structure that includes the f_op field which points to a file_operations object. The file_operations data structure contains pointers to functions that operate on the file object.
FIG. 3 illustrates that for a given file object, the measurement module checks its f_op field to see if it contains the address for a known file operations structure.
FIG. 4 illustrates the process for measuring dynamically loaded program code.
FIG. 5 is a diagram depicting a portion of the measurement graph for the VFS.
FIG. 6, comprising FIGS. 6(a) and 6(b), illustrate one embodiment of the LKIM invention: 6(a): architecture and dataflow and 6(b): example Measurement Data Template.
FIG. 7, comprising FIGS. 7(a) and 7(b), illustrate two LKIM performance perspectives: 7(a): processing timeline and 7(b): impact on target.
DETAILED DESCRIPTION
The overall objective of the invention is to provide the ability to verify the integrity of a software-based computer system. In particular, the invention is a solution for measuring the integrity of an operating system's execution and ensuring that the system's code is performing its intended functionality. This includes examining the integrity of the code that the operating system is executing as well as the data that the operating system accesses. Note that the objective of the invention is not to provide a secure operating system or to prevent malicious behavior from occurring. Rather, it is, in one embodiment, to provide a measurement capability to detect unauthorized changes to code executing in Ring 0 (also known as kernel mode).
The integrity of the system's execution will be measured at boot time and may be measured subsequently, either on a periodic basis or in response to a request. Re-measuring the kernel's integrity is necessary due to the dynamic nature of the kernel's execution and the fact that the state of the kernel's data, and possibly even code, will change over time. Remote computer systems may retrieve the measurement results for a particular computer system and use those results to determine whether or not the system's integrity is intact. In this manner, a computer system may determine which remote systems it can trust and interact with. In some cases, a remote computer system may simply want to retrieve the most recent measurement results for a particular system rather than trigger a complete re-measurement of the kernel's integrity. Therefore, in addition to measuring the integrity of a system's execution, the invention is responsible for providing an attestation service that stores measurement results in a secure location. These results are then made available to remote systems.
In one embodiment, measurement of the integrity of a computer system is based on utilizing layers of trust. A first layer can be a set of hardware enhancements that allow programs to run in isolated execution environments by protecting memory, input/output, and other system resources from being accessed by other applications; an example of such enhancements is Intel's LaGrande Technology (LT). An LT-based platform includes a Trusted Platform Module (TPM), a device bound to the platform that provides a hardware-based mechanism for protecting access to cryptographic keys and other secret data. The TPM can be used to securely store system measurement results for attestations. In addition, LT provides hardware mechanisms for protecting the launch of a system's Domain Manager (DM), which in turn is responsible for launching separate domains for executing applications. In this way, LT provides a trusted hardware platform, or the "root of trust," on which the operating system and low-level code can run. Since LT ensures the integrity of the hardware, the hardware can then be used to measure the trustworthiness of the operating system running on top of it. Verifying the integrity of this operating system layer is the area of focus for the invention. The concept of using layers of trust can be extended all the way up to the application level; once the operating system can be trusted, it can be used to measure the integrity of higher level software components such as user processes and network communications.
The idea of building on increasing layers of trust can potentially be applied to the operating system itself. For example, if we can separate the operating system into clearly defined components and first measure the integrity of its most basic
functions, then we can use those basic functions as building blocks for measuring additional components of the operating system which in turn can be trusted once they have been verified. This process can be repeated until the entire operating system has been measured and is trusted. This approach provides the benefit of using the functions provided by the operating system to measure itself, rather than having to re-write functionality such as loading a page from disk into memory. Separating an operating system into layers of trust can be an effective approach for a microkernel architecture, which defines a very small set of functions within the kernel, with other system processes such as memory allocators and system call handlers running on top of the microkernel. In contrast, the Linux kernel has a monolithic architecture, with data structures and functions that are highly interconnected with each other.
The central approach herein for measuring the integrity of the Linux kernel focuses on examining the integrity of the kernel’s execution path, since malicious code introduced into a kernel cannot cause harm unless it is executed. The measurement of the integrity of the kernel’s execution path is broken down further into two components: measuring the integrity of the kernel’s code and measuring the integrity of the kernel’s data.
The “integrity” of the kernel code is defined as being intact if that code has been installed or approved by a system administrator or some trusted authority. Detecting bugs and security vulnerabilities inherent in the approved code is outside the scope of the invention; instead, the goal is to ensure that the code that is installed or approved by the system administrator is the only code that is allowed to execute. This is achieved by generating a cryptographic hash of the code for the kernel using the MD5 hash algorithm and comparing it against a securely stored “golden hash” which is generated from a trusted version of the kernel.
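The golden-hash check described above can be sketched directly. MD5 is used here because the text specifies it; the kernel text bytes and the stored golden hash below are stand-ins for the real, securely stored values.

```python
# A minimal sketch of the golden-hash check: hash the kernel code and
# compare the digest against a securely stored hash generated from a
# trusted version of the kernel.

import hashlib

def kernel_text_intact(kernel_text: bytes, golden_hash: str) -> bool:
    """Compare the MD5 digest of the kernel code against the golden hash."""
    return hashlib.md5(kernel_text).hexdigest() == golden_hash

trusted = b"...approved kernel text..."
golden = hashlib.md5(trusted).hexdigest()    # generated from a trusted image
print(kernel_text_intact(trusted, golden))              # True: code unchanged
print(kernel_text_intact(trusted + b"\x90", golden))    # False: code changed
```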
In addition to verifying the integrity of the kernel code, we want to ensure the integrity of kernel data structures or objects that may affect the kernel’s execution path: for example, if an attacker is somehow able to change the address in a file object’s f_op field, which contains the address of its file operations, then future calls to the file’s operations may result in the execution of arbitrary code instead of the standard kernel-defined file operations. Therefore, it is important to ensure that the data structures accessed by the kernel contain values that fall within a set of values that are acceptable and approved for that particular data structure.
Compared to measuring the integrity of the kernel code, measuring kernel data involves a larger problem space and poses some new challenges. Kernel data structures that are contiguous and whose values are not expected to change, such as the interrupt descriptor table and global descriptor table, can be hashed and measured in the same manner as the kernel code. However, most kernel data structures contain values that can change quite frequently and may or may not be correct depending on the overall state of the kernel. Measuring the integrity of the kernel data structures requires an understanding of how the data structures and individual fields within those data structures are used by the kernel. Furthermore, the Linux kernel can potentially use thousands of instances of data structures. Some data structures, such as task_struct and inode, can be accessed frequently in the course of normal kernel execution, while other data structures might be used less frequently or have a lesser impact on the kernel’s execution path. Therefore, in order to effectively measure the integrity of the kernel’s data, we must prioritize which data structures and fields are most likely to affect the kernel’s execution. Table 1 below contains an outline of measured Linux kernel data structures.
<table>
<thead>
<tr>
<th>TABLE 1</th>
</tr>
</thead>
<tbody>
<tr>
<td>System call table</td>
</tr>
<tr>
<td>Checked system call dispatch table (sys_call_table) - 256 entries</td>
</tr>
<tr>
<td>Superblocks</td>
</tr>
<tr>
<td>Iterated through global list of superblocks (super_blocks)</td>
</tr>
<tr>
<td>Checked superblock operations (superblock->s_op)</td>
</tr>
<tr>
<td>Checked disk quota operations (superblock->dq_op)</td>
</tr>
<tr>
<td>Checked list of dirty inodes (superblock->s_dirty)</td>
</tr>
<tr>
<td>Checked inode operations</td>
</tr>
<tr>
<td>Checked files assigned to superblock (superblock->s_files)</td>
</tr>
<tr>
<td>Checked file operations</td>
</tr>
<tr>
<td>Iterated through inode_in_use and inode_unused lists (all inodes will be stored in either of these 2 lists or the list of dirty inodes associated with a superblock)</td>
</tr>
<tr>
<td>Checked inode operations (inode->i_op)</td>
</tr>
<tr>
<td>Checked default file operations (inode->i_fop)</td>
</tr>
<tr>
<td>Checked dentries assigned to inode (inode->i_dentry)</td>
</tr>
<tr>
<td>Checked dentry operations</td>
</tr>
<tr>
<td>Checked address_space assigned to inode (inode->i_mapping)</td>
</tr>
<tr>
<td>Checked address space operations</td>
</tr>
<tr>
<td>Checked inodes in use - see superblock</td>
</tr>
<tr>
<td>Checked superblock</td>
</tr>
<tr>
<td>Checked memory regions (vm_operations_struct)</td>
</tr>
<tr>
<td>Iterated through list of memory descriptors starting with init_mm and looked at the mm_struct's mmap field, which contains a list of memory regions (vm_area_struct)</td>
</tr>
<tr>
<td>Checked memory region operations (vm_area_struct->vm_ops)</td>
</tr>
<tr>
<td>Address space</td>
</tr>
<tr>
<td>Block devices</td>
</tr>
<tr>
<td>Checked the hash table of lists of block device descriptors (bdev_hash)</td>
</tr>
<tr>
<td>Checked the block device operations (block_device->bd_operations)</td>
</tr>
</tbody>
</table>
In terms of determining which kernel data structures and fields can have a significant impact on the kernel’s execution path, we have identified function pointers, system calls, and modules as primary areas of concern. Function pointers, such as the file operations field of the file object mentioned above, point to functions that operate on the parent object (see FIG. 2).
System calls are a special type of function pointer, and intercepting system calls is a common technique in compromising the Linux kernel. A brief search of the Web resulted in tutorials describing how to intercept system calls for hiding the existence of files, processes and modules, changing file permissions, and implementing backdoors in the Linux kernel. Modules play an important part in determining the integrity of the Linux kernel since they execute in kernel mode on behalf of a user process and can alter the contents of kernel data structures. Modules are often used to implement device drivers but can also be used to implement file systems, executable formats, network layers, and other higher-level components in the Linux kernel. The ability to load modules into the Linux kernel provides a useful way for both measuring the kernel’s integrity and introducing malicious code into the kernel. Modules introduce some additional considerations when measuring the integrity of the kernel’s data structures; these are described in more detail below.
The measurement capability of the invention has, in one embodiment, been implemented as a Linux kernel module, although in the future the measurement code may run on an LT-based platform. The module verifies the execution of the Linux kernel running on a standard Intel x86 computer. For the purposes of implementing a prototype, the integrity of the Linux kernel version 2.4.18 was measured; however, the general concepts used herein to measure the Linux kernel can be applied to other versions of Linux, other Unix variants, and potentially to other operating systems. It is assumed that the measurement code, along with the golden hash and the set of approved values for kernel data structures, will be stored in a secure location, such as the one provided by the TPM.
In addition to implementing the measurement capability as a module, we also needed to instrument the Linux kernel itself in order to support the ability to update the "approved" set of values for kernel data structures. In order to compare the values of kernel data structures against a set of approved values, we first needed to determine what the set of approved values were for a particular data structure. For example, the f_op field of a file object should only point to file operation structures defined in the Linux kernel, such as ext3_file_operations or shm_file_operations, or to file operation structures defined in an approved loaded module (see FIG. 3).
Since we know that the Linux kernel in its original state cannot make any changes to its own code during runtime, the only legitimate way to add new code and new approved values for kernel data structures is through modules. Therefore, in order to detect when a loaded module has introduced new approved values for data structures, we added "trigger points" throughout the kernel to trigger re-measurements of the kernel whenever a module is loaded, unloaded, or when a module operation has been called. These triggered measurements occur immediately before and after a potential kernel change; the first measurement is necessary to verify the kernel's integrity before any new changes are introduced, and the second measurement is necessary to determine whether or not the list of approved values for kernel data structures should be updated. In this way, authorized changes that occur at expected points in the kernel's execution are recognized and accounted for while unauthorized changes to the kernel will be detected the next time it is measured.
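The trigger-point protocol just described, which measures immediately before and after a potential kernel change and then updates the approved set, can be sketched as follows. All names here are invented; the approved set is modeled as a set of structure identifiers rather than real kernel addresses.

```python
# Hedged sketch of the trigger-point protocol: measure immediately before a
# module event to confirm integrity, apply the change, measure again, and
# fold any newly observed values into the approved set.

def handle_module_event(measure, approved, apply_change):
    """Bracket a potential kernel change with two triggered measurements."""
    before = measure()
    if not before <= approved:           # unexpected values before the event
        raise RuntimeError("kernel integrity violated before module event")
    apply_change()
    after = measure()
    approved |= after - before           # authorize the values the module added
    return approved

state = {"ext3_file_operations"}
approved = {"ext3_file_operations"}
approved = handle_module_event(lambda: set(state), approved,
                               lambda: state.add("shm_file_operations"))
print(sorted(approved))
```

Values that appear outside such a bracketed event are not added to the approved set and are therefore flagged the next time the kernel is measured.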
Unauthorized changes to the kernel that can be detected using the strategy outlined above include:
- Modules that are loaded into the kernel but bypass the insmod user process;
- Changes to the system call table made by entities other than approved modules;
- Alterations to the kernel execution via changes to function pointers; and
- Any changes to the kernel and module code.
Unauthorized changes to the kernel that would not be detected include:
- Changes to the kernel that take place between measurements in which the kernel is restored to its previous state before the next measurement; and
- Changes to data that do not result in a change in the kernel's execution content or execution path; new function pointers may be introduced, but they are not called anywhere in the kernel and therefore will not be executed.
While, as noted, detecting bugs and security vulnerabilities inherent in the approved kernel code is outside the scope of this project, if an attacker were to exploit an existing security vulnerability to gain access to the kernel and change its code or certain data structures, those changes would be detected.
The Linux kernel in its original state should not be able to modify any of its own code during execution. However, loaded modules have the potential of modifying kernel code and data and are a means for updating kernel functionality without having to re-compile the kernel. The code and data structures introduced by a loaded module can have a significant impact on the kernel's execution path; for example, a module may introduce a new system call handler function that replaces the one normally executed by the kernel. Therefore, we want to ensure that only modules approved by a system administrator or some trusted authority are allowed to be linked to the Linux kernel.
In order for modules to be loaded into the kernel, they must be accompanied with a registration "form" that is filled out and signed by a system administrator. The form includes a golden hash of the module's code as well as a list of changes that the module is allowed to make. These changes could include making changes to the system call table or interrupt descriptor table or updating function pointers (for example, pointing them to a new inode operations structure for a new file system). Whenever a module performs an allowed change, the system will be re-measured and the set of approved values will be updated to accept any new values that have been set by the module. If a module performs a change that is not allowed, the change will be detected the next time the system is measured.
Another consideration with loading approved modules is the need to ensure that the module that was approved is the same as the module that has been loaded into the kernel. The hash of the module stored in the registration form is generated while the module is still stored on disk, before it has been loaded into the kernel. We cannot compare the hash of the module stored on disk against the hash of the module loaded in memory since they will not match. Furthermore, we cannot even compare against a golden hash of the module previously loaded in memory because the hash of the module will vary depending on the order in which it was loaded with other modules. However, if we first measure the integrity of the kernel as well as the insmod user process which is responsible for linking the module into the kernel, then we can be reasonably certain that the approved module was properly loaded into the kernel.
As a program is loaded from a stored location into memory, it may be altered based on information that is only available at runtime, such as the address in memory to which it is being loaded. The invention, in another embodiment, reproduces the memory image of the program, given the stored image, the relevant runtime information, and knowledge of how the program will be loaded into memory. It does not rely on an operating system kernel or any other mechanism to load it properly, since it mimics the process externally. An external entity can then compare the in-memory image of the program to the calculated image, and determine whether the in-memory image has been corrupted. This is illustrated in FIG. 4.
In the Linux kernel, there are two parts of executable kernel code that may change when loaded into memory: the text of any kernel modules that the kernel loads, and the text of the kernel itself. Loadable modules, which the kernel loads after it boots, contain "relocation" sections that describe how to alter the module's text depending on where in memory the kernel loads it. The notification process the Linux kernel uses when it loads or unloads a module has been modified. When a module is loaded, the kernel notifies some entity (e.g., a user-space process or a process in another virtual machine) that a module is being loaded, and indicates which module it is loading and where in memory it is loading it. This is sufficient information for the external entity to reproduce the in-memory image of the kernel module.
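The external reproduction of a module's in-memory image can be sketched as applying relocations to the stored image. The relocation format below (a list of offset/addend pairs patched as absolute 32-bit addresses) is deliberately simplified for illustration; real ELF relocation sections are considerably richer.

```python
# Illustrative sketch of externally reproducing a module's in-memory image:
# given the stored image, its load address, and a simplified relocation
# table, compute the expected bytes and compare against memory contents.

import struct

def relocate(stored: bytes, load_addr: int, relocs):
    """Apply absolute 32-bit relocations: patch each offset with load_addr + addend."""
    img = bytearray(stored)
    for offset, addend in relocs:
        struct.pack_into("<I", img, offset, (load_addr + addend) & 0xFFFFFFFF)
    return bytes(img)

stored = bytes(8)                    # module text as stored on disk
relocs = [(0, 0x10), (4, 0x20)]      # patch two absolute addresses
expected = relocate(stored, 0xC0100000, relocs)
in_memory = expected                 # what the kernel actually loaded
print(in_memory == expected)         # integrity of the loaded module holds
```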
Unlike most executable programs, the Linux kernel itself contains only absolute addresses, and thus does not need to be relocated. However, the Linux kernel provides for the opportunity to replace individual program instructions based on bugs or extra capabilities in the currently executing processor. Information detailing which instructions to replace and how
to replace them is stored in the "altinstructions" section of any program, including the kernel. The kernel applies these changes to itself early in the boot process, and it applies them to any program it loads, including kernel modules. The "altinstructions" information is contained inside the stored image of a program; for the invention to work, it must be known which features of the processor cause the kernel to apply altinstructions.
The Linux kernel makes similar alterations to itself and to programs it loads to adjust for computers with multiple processors. The same technique described above applies in this scenario as well.
Representation of integrity measurement data as a measurement graph provides a direct reflection of a process's internal object graph for analysis by a challenger. Measurement graphs decompose a single complex measurement into several distinct measurement classes. Each measurement class represents a semantically related grouping of variables which have been examined to produce a characterization of some isolated subsystem of the target. Connected to each measurement class is a structured representation of the measurement of those objects which contribute to the overall measurement of that class. This representation is derived from the measurement target's in-memory object graph and indicates not only the measurements of atomic objects, but also the way in which these low-level objects are connected into compound structures. For transmission, the measurement graph can be encoded using a descriptive markup language which supports cross references (e.g., XML), as entries in a relational database, or in any other format for encoding directed graphs with valued nodes.
By inspecting the measurement graph, a challenger may determine the answers to questions concerning not only the values of certain key structures, but also the linkages between interesting components. For example the challenger may determine if the statement "for all instances, I, of structure X, if I references the object O, then I also references the object M" is true.
For example, a measurement of the Linux kernel's process image may include a traversal of structures in the virtual file system (VFS). In the VFS, super block structures reference lists of inodes. All of these structures also reference tables containing function pointers for file-system-specific implementations of standard file-related operations (such as open, close, read, and write). A portion of the measurement graph for the VFS is depicted in the diagram in FIG. 5. Here, the VFS is a measurement class which includes measurements of two different super blocks. Each super block measurement comprises a measurement of its operations table, and measurements of each inode referenced by that super block. Similarly, inode measurements are composed of a measurement of their operations tables, and may also include measurements of other structures such as files or dentries. The values of the terminal measurements (those of the tables of function pointers) would likely be the actual values of each function pointer in the table.
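The nesting just described, where super block measurements contain an operations table plus per-inode measurements, can be sketched as follows. The record layout and all identifiers and addresses are fabricated for illustration.

```python
# A sketch of the VFS portion of the measurement graph: compound
# measurements (super blocks, inodes) nest, terminating in operations
# tables whose values are the raw function-pointer addresses.

def measure_superblock(sb):
    """Produce a compound measurement node for one super block."""
    return {"type": "superblock", "id": sb["id"],
            "ops": sb["s_op"],                      # terminal: function pointers
            "inodes": [{"type": "inode", "id": i["id"], "ops": i["i_op"]}
                       for i in sb["inodes"]]}

vfs = [{"id": "c041da00", "s_op": [0xc017ce1b, 0xc017c3e0],
        "inodes": [{"id": "c0420000", "i_op": [0xc041979c]}]}]
graph = {"class": "VFS", "measurements": [measure_superblock(s) for s in vfs]}
print(graph["measurements"][0]["inodes"][0]["ops"])
```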
For transmission to a challenger, one could encode this measurement graph as an XML document similar to:
```
<MeasurementClass name="VFS">
<CompoundMeasurement type="superblock" id="c041da00">
<TerminalMeasurement type="super_ops" id="c031c3c0">
c017ce1b, c017506, c017c3e0, 0, 0, 0, c016869b, c017e250, 0, 0, 0, 0, 0, c016f525, c017e491, 0, 0, 0, 0
</TerminalMeasurement>
</CompoundMeasurement>
</MeasurementClass>
```
This encoding allows the challenger to recreate the original measurement graph structure, which can then be analyzed for compliance with certain expected properties. A simple example may be that the operations tables should all be identical to tables listed in a predefined set of expected values. A more involved criterion may be that all of the inodes referenced by a super block with operations table c031c3c0 should reference the inode operations table c041979c.
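The more involved linkage criterion mentioned above can be sketched as a challenger-side check over a decoded measurement graph. The decoded record format is hypothetical; in practice it would be recovered from the XML encoding shown earlier.

```python
# Sketch of a challenger-side check over a decoded measurement graph: every
# inode under a super block with operations table "c031c3c0" must reference
# inode operations table "c041979c".

def check_inode_ops(graph, sb_ops, required_inode_ops):
    """Verify the linkage property for one decoded measurement graph."""
    for sb in graph:
        if sb["super_ops"] == sb_ops:
            if any(i["inode_ops"] != required_inode_ops for i in sb["inodes"]):
                return False
    return True

graph = [{"super_ops": "c031c3c0",
          "inodes": [{"inode_ops": "c041979c"}, {"inode_ops": "c041979c"}]}]
print(check_inode_ops(graph, "c031c3c0", "c041979c"))   # property holds
```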
In another embodiment of the invention, the Linux Kernel Integrity Monitor (LKIM) serves as a measurement agent (MA) for a general-purpose IMS designed to support many different measurement scenarios. The system is capable of selecting from an extensible set of attestation scenarios governed by a system policy, taking into account requirements for security and privacy. In performing particular attestations, this IMS is capable of selecting appropriate measurement techniques by initiating appropriate MAs.
LKIM was designed for a complex IMS environment and is intended to meet the needs of several measurement scenarios. The measurement properties have greatly influenced LKIM's design. Although LKIM's implementation is specific to measuring the Linux kernel, the techniques it employs are general and will apply equally well to other operating systems and complex software systems needing measurement. This technique has been applied to the Xen (P. Barham, B. Dragovic, et al., Xen and the art of virtualization, Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, pages 164-177, 2003) hypervisor.
LKIM uses contextual inspection to more completely characterize the Linux kernel. It produces detailed records of the state of security-relevant structures within the Linux kernel, and can easily be extended to include additional structures. LKIM can not only produce measurements at system boot time but also in response to system events or on demand from the IMS. Measurement data is stored in a useful format that allows an IMS to retrieve any or all of the raw data. The IMS can use the data, or any transformation of any portion of it, according to the requirements of particular measurement scenarios and under the control of system policy.
A simplified view of LKIM's architecture is depicted in FIG. 6(a). LKIM has been developed to support two deployment scenarios: Native and Xen. There are significant differences in the measurement inspection mechanisms for each case. In the native scenario, LKIM is a user process within the target space and access to kernel memory is through /dev/kmem. On Xen, LKIM runs in a Xen domain distinct from the target's domain. The Xen hypervisor maps the target kernel's memory into the address space of LKIM's domain.
LKIM measures the Linux kernel using contextual inspection, striving for maximal completeness. This technique attempts to overcome many of the limitations of hash-based measurements, specifically the inability of hash-based measurements to uniquely identify systems with a large number of expected states and the inflexibility of the results generated by a hash-based system. Inspection uses detailed information about the layout of key data structures to traverse portions of a running process' object graph. This traversal is used to produce a detailed report which describes the structure of the explored subsystems and the current state of identifying variables within those systems.
Contextual inspection is a powerful technique that enables measurement systems to achieve better completeness than would be possible with hashing alone. It produces rich results that can reflect unpredictable structures. However, this richness of detail typically leads to a substantial increase in the size of the results produced, which may be far less usable than a hash-based measurement of a system that could have been effectively measured by either technique. A combination of hashing and contextual inspection allows measurement systems to locate and succinctly identify attributes of targets. The results can represent the structural information gathered by the contextual inspection portion of the system and the concise fingerprints generated by hashing. This combination requires more processing than a single hash of a system with only a few possible states, but results can be analyzed by a challenger in a reasonable period of time.
LKIM combines traditional hash-based measurement with contextual inspection. It uses contextual inspection to provide identifying measurements of the execution path of a running Linux kernel. It not only hashes static regions of the kernel such as its text section, system call table, interrupt descriptor table (IDT), and global descriptor table (GDT) but also traverses relevant portions of the target's object graph and the layout of various kernel structures. This traversal produces evidence that indicates the location referenced by function pointers stored in dynamic data structures and the context in which they were detected. This allows a challenger to verify not only that the execution path is entirely within the hashed text section but also to perform sanity checking based on expected groupings of function pointers.
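The sanity check described above, verifying that the execution path stays within the hashed text section, can be sketched directly. The address range and pointer values below are illustrative, not real kernel values.

```python
# Hedged sketch of the execution-path sanity check: every function pointer
# recovered from dynamic structures must land inside the hashed kernel text
# section, otherwise execution could escape the measured code.

def pointers_in_text(fptrs, text_start, text_end):
    """Return the pointers that fall outside [text_start, text_end)."""
    return [p for p in fptrs if not (text_start <= p < text_end)]

TEXT_START, TEXT_END = 0xC0100000, 0xC0300000
measured = [0xC017CE1B, 0xC016869B, 0xDEADBEEF]   # last one escapes the text
print([hex(p) for p in pointers_in_text(measured, TEXT_START, TEXT_END)])
```

Pointers returned by this check are candidates for further sanity checking against the expected groupings of function pointers.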
LKIM breaks up the measurement process into a series of discrete measurements according to a set of measurement variables. These variables identify those portions of the target that LKIM can individually inspect. They are arranged hierarchically to enable LKIM to perform increasingly complete measurements of each piece of the kernel that LKIM is able to measure.
LKIM is governed by a set of measurement instructions indicating which measurement variables are of interest during a given run. A local configuration file defines the measurement instructions, giving the address and type information of top-level measurement variables. Alternatively, LKIM can receive its measurement instructions directly from an IMS. This greatly enhances the flexibility of the IMS by enabling it to selectively vary the measurement data produced according to the requirements of a particular attestation scenario.
Measurement variables are grouped into measurement classes, each a vertical slice of the measurement variable hierarchy with successive levels providing LKIM with additional contextual information for measuring a particular part of the kernel. Top-level variables are just starting points from which many kernel variables will be examined. To measure a portion of the kernel, LKIM uses the corresponding top-level variable to find the appropriate location in its target's address space. According to the specific technique associated with the variable, LKIM then performs the measurement, recording any relevant properties detected. As prescribed by the measurement instructions, measurement proceeds recursively with increasingly lower levels of the class being inspected until the indicated degree of completeness is attained.
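A minimal sketch of this recursive, depth-bounded traversal, with an invented VFS-like class hierarchy standing in for real measurement variables:

```python
# Invented VFS-like measurement class: each node names a variable,
# a measurement thunk, and child variables providing more context.
VFS_CLASS = {
    "var": "super_blocks", "measure": lambda: "hash:sb",
    "children": [
        {"var": "inode_in_use", "measure": lambda: "hash:inuse",
         "children": [
             {"var": "inode_ops", "measure": lambda: "hash:ops",
              "children": []}]},
    ],
}

def measure(variable, depth):
    """Measure a variable, then recurse into its children down to
    `depth` levels, as the measurement instructions prescribe."""
    result = {"var": variable["var"], "value": variable["measure"]()}
    if depth > 0:
        result["children"] = [measure(c, depth - 1)
                              for c in variable["children"]]
    return result

shallow = measure(VFS_CLASS, 0)  # least complete measurement
deep = measure(VFS_CLASS, 2)     # most complete measurement
```

The deeper the traversal prescribed by the measurement instructions, the more complete (and more expensive) the resulting measurement.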
For example, a measurement class for the Linux Virtual File System (VFS) has been defined to include the following measurement variables: inode in use, inode unused, and super blocks. Each of these variables references a linked list in the kernel containing the state of inodes dynamically created by the kernel. LKIM is capable of measuring the state kept in each list, including tracing the pointers to the VFS operations associated with each inode. LKIM's configuration file might include instructions to measure the VFS class, with the three measurement variables in it being used to select the exact portions of the VFS subsystem to be measured. Whenever LKIM runs, the data will include information about the linked lists referenced by these variables.

LKIM supports other measurement classes to selectively measure Linux. Included are classes for static portions like the kernel text and system call table, as well as dynamic portions like the executable file format handlers, the Linux Security Module (LSM) hooks, and parts of the block IO and networking subsystems. Parts of the kernel can be precisely measured with techniques such as hashing. In others, imprecise heuristics are the best known technique. Because LKIM uses measurement variables to control its operation, different measurement techniques can be assigned to different measurement variables. This enables each portion of the kernel to be measured using the most appropriate technique, yielding the best potential for completeness.
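This per-variable dispatch can be sketched as a table mapping each measurement variable to its technique; the variable names, techniques, and data below are all invented:

```python
import hashlib

# Illustrative binding of measurement variables to techniques.
techniques = {
    "hash": lambda data: hashlib.sha1(data).hexdigest(),
    "inspect": lambda data: {"len": len(data), "nonzero": any(data)},
}
variables = [("kernel_text", "hash", b"\x90" * 16),
             ("inode_in_use", "inspect", b"\x00\x01")]

# Each variable is measured with its own, most appropriate technique.
results = {name: techniques[tech](data) for name, tech, data in variables}
```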
Although the total set of measurement variables that LKIM understands does not provide complete coverage of the Linux kernel, LKIM can easily be extended to measure additional portions of the kernel. Where existing measurement techniques are appropriate, new measurement classes and/or variables simply need to be defined and included in measurement instructions. As new or improved techniques are developed and incorporated into LKIM, measurement variables can be redefined to enhance measurement data quality or new variables can be defined to augment the data already collected.
Baseline capabilities were introduced into LKIM to supplement contextual inspection. Baselines are generated to create the structure definitions that indicate how LKIM handles the measurement process for particular measurement variables. Baselines can also be used by an IMS decision process to help validate measurements provided in an attestation. FIG. 6(a) also shows the baselining process. There are two forms of baselining in LKIM: static and extensible.
Static baselining enables LKIM to generate baseline measurements using the on-disk ELF image of the target kernel. LKIM parses standard DWARF debugging information that can be generated at compile time (Tool Interface Standards Committee, DWARF Debugging Information Format Specification v.2.0, May 1995, and Tool Interface Standards Committee, Executable and Linking Format (ELF), v.1.2 edition, May 1995), yielding the necessary data to associate regions of memory with particular structure types. LKIM can then decode and measure variables initialized at compile time. Although not all relevant structures can be baselined in this way, many common subversions infect structures such as file or inode operations tables (J. Levine, J. Grizzard, and H. Owen, Detecting and categorizing kernel-level rootkits to aid future detection, IEEE Security and Privacy, 2006), which are typically initialized at compile time.
Static baselining addresses a major problem of runtime measurement systems: performing baseline measurements of a running image may not yield a representation of the true expected configuration, because the image may already have been subverted when the baseline is performed. This problem is specifically identified in (N. Petroni, Jr., T. Fraser, et al., Copilot: a coprocessor-based kernel runtime integrity monitor. Proceedings of the 13th Usenix Security Symposium, pages 179–194, August 2004) as a major shortcoming. Because LKIM uses a static baseline that is generated off-line in a safe environment, a system owner can be confident that integrity decisions using the baseline will be made relying on an accurate notion of the expected configuration. The dynamic nature of target systems makes static baselining alone insufficient, however. Extensible baselines solve this problem. When a change in the target is detected, the system can be re-baselined, changing the measurement instructions used by LKIM as necessary. The updated baseline can then be propagated to any relevant decision process, optionally allowing it to update its behavior.
Linux Kernel modules are difficult to accurately measure because they are relocated at load time. Hashing is unsuitable for modules because hash values will only be predictable for the original module image and not the relocated version that will execute. Addresses of key data structures cannot be known until relocation. For example, modules are commonly used to support additional file system types. Such modules include tables containing pointers to functions that provide file system-specific implementations of standard operations like read and write. Addresses of these functions are unpredictable because they depend on the relocation.
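One workaround is to hash a simulated relocation of the on-disk image rather than the image itself. A toy sketch with a single, invented relocation type (real ELF relocation has many types and a far richer format):

```python
import hashlib
import struct

def simulate_load(image: bytes, reloc_offsets, load_base: int) -> bytes:
    """Toy relocation: add the load base to each 4-byte slot named in
    the relocation list, mimicking how a loader rewrites addresses."""
    buf = bytearray(image)
    for off in reloc_offsets:
        (val,) = struct.unpack_from("<I", buf, off)
        struct.pack_into("<I", buf, off, (val + load_base) & 0xFFFFFFFF)
    return bytes(buf)

on_disk = struct.pack("<IIII", 0x10, 0x20, 0x30, 0x40)  # toy module image
relocs = [0, 8]         # offsets of slots that hold addresses
load_base = 0x1000      # where the loader will place the module

# Hash of the image as it will appear in memory vs. a naive disk hash.
expected = hashlib.sha1(simulate_load(on_disk, relocs, load_base)).hexdigest()
naive = hashlib.sha1(on_disk).hexdigest()
```

The naive disk hash never matches the relocated in-memory image; the simulated-load hash does, which is the approach the following paragraph attributes to LKIM's module handling.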
Linux has been modified to notify LKIM whenever modules are loaded or unloaded, making the module's name and the address of each of its sections available. On module load events, LKIM uses this information to simulate the loading process on a copy of the module. LKIM extends the current baseline file with data acquired by inspecting the module and adds directives to the measurement instructions to cause the module's data to be re-measured when handling subsequent measurement requests. On module unload events, LKIM reverses this process, removing the module's entries from the baseline and the measurement instructions.
It is not possible for LKIM's module handling capabilities to achieve complete measurements because there is no mechanism by which LKIM is able to generate a complete and reliable characterization of all modules which are or have been loaded into the kernel. This is not an issue for persistent components of the module, such as its primary text section and global data, because these sections are located by LKIM and added to the measurement instructions for future measurement requests. However, loadable modules may specify an initialization section which is executed at load time and then released. Such ephemeral module sections may introduce changes to the kernel which would not be connected to the module's main text body or the rest of the kernel's object graph. If measurement is not synchronized to module loading, the initialization section will go unmeasured.
Unfortunately, it is difficult to ascertain exactly which module is being loaded because the cooperation of the measured kernel would be required. Clearly, the kernel’s notification could be instrumented to additionally provide a hash of the on-disk image of the module. Careful reasoning must be applied to verify either that the measured kernel cannot be lying and thus the hash must really correspond to the module being loaded, or that the measured kernel can only lie in a way that will be detected by later measurements. An alternate scheme may be to force the kernel to consult a trusted module server or validator before it is able to load a new module. This approach would require a similar argument to be made which ensures that the kernel is unable to surreptitiously bypass the new component when loading modules.
Remeasurement for an IMS is a means to help achieve freshness of measurement data. LKIM supports measurement of a running Linux kernel on demand. Remeasurement can be simply achieved by running LKIM again. Remeasurement might be necessary as a response to requests from an IMS trying to satisfy the freshness requirements of some attestation scenario. As an example where this might be useful, consider a requirement that measurement data be produced within a certain time period prior to attestation. The IMS can satisfy that scenario by requesting that LKIM produce fresh measurements prior to responding to the attestation request.
LKIM’s design also has provisions to attempt to identify conditions which will cause the most recent measurement data collected to no longer reflect the current state of the system, and hence limit the effectiveness of future integrity decisions based on that data. By recognizing such conditions LKIM would be able to anticipate that a remeasurement is necessary prior to being asked by the IMS. LKIM has been designed to respond to external events such as timers indicating that the measurement data is stale and a remeasurement needs to be scheduled. The design also allows for the possibility that the target system be instrumented with triggers that will allow a cooperating operating system to notify LKIM that some known event has occurred that will invalidate some or all of the most recent measurement data. Although triggers are useful to reduce response times to requests for measurement data, they are not necessary for correct operation, and LKIM still works when it is not possible to modify the system. To date, the only triggers that have been implemented in LKIM are those that indicate a change in the set of loaded kernel modules. However, the triggering mechanism is present, making it straightforward to add additional triggers as needed.
LKIM was designed for flexibility and usability in the way that data is collected and subsequently reported. It achieves this through its Measurement Data Template (MDT). Whenever LKIM runs, collected raw measurement data is stored in the MDT. The MDT has been custom-designed for the target system to enable LKIM to store enough data to meet the maximum possible requirements for completeness. The MDT is formatted to add meaningful structure to measurement data. LKIM stores measurements for different parts of the system in whatever way is appropriate for the measurement technique being used for that part of the system. If a hash is suitable for one section, the MDT would contain the hash value at the appropriate location. If some section warrants a more complex measurement strategy, the corresponding section of the MDT would contain whatever data was produced.
As new measurement strategies are developed making more complete measurements possible, it is a simple matter to extend the definition of the MDT to allow the new form of measurement data to be reflected in the results.
FIG. 6(b) shows a partial MDT customized for Linux and rendered in HTML. The data are hierarchically arranged by measurement class as prescribed in the measurement instructions, forming a tree from a specified top level variable to the leaf object of concern (e.g., a function pointer). The MDT is stored in XML. It contains hashes of the static regions and detailed information regarding which collections of function pointers are active in the kernel, how many objects reference those collections, and the target address of each function pointer.
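A hypothetical MDT fragment built with Python's ElementTree; the tag and attribute names are invented to show the hierarchical per-class layout described above, not LKIM's actual schema:

```python
import xml.etree.ElementTree as ET

# Invented MDT fragment: one subtree per measurement class, from the
# top-level variable down to the leaf objects of concern.
mdt = ET.Element("mdt")
text = ET.SubElement(mdt, "class", name="kernel_text")
ET.SubElement(text, "hash").text = "a9993e364706816aba3e25717850c26c9cd0d89d"
vfs = ET.SubElement(mdt, "class", name="vfs")
inode = ET.SubElement(vfs, "variable", name="inode_in_use")
ET.SubElement(inode, "fnptr", member="read", target="0xc0100400")
ET.SubElement(inode, "fnptr", member="write", target="0xc0100800")

xml_bytes = ET.tostring(mdt)  # serialized form, ready for reporting
```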
LKIM's use of the MDT supports flexibility and usability in the way measurement data can be reported. Since the different portions of the MDT characterize only pieces of the entire system, LKIM is able to support varying degrees of completeness requirements by selectively reporting those portions of the MDT required by the IMS. Reporting can be customized for different scenarios to report all of the data or only the portions required. It is possible to customize even further by reporting functions of all or part of the MDT. This can be useful in situations where, for example, only a hash of the MDT is deemed necessary by the IMS. The degree of flexibility made possible by the MDT would be very difficult to achieve using a system that only captures a single measurement result for the entire system.
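This selective reporting can be sketched as filtering a toy MDT by requested class, with an optional hash-only mode; the class names and values are illustrative:

```python
import hashlib
import json

# Toy MDT: one entry per measurement class (all contents illustrative).
mdt = {"kernel_text": {"hash": "ab12cd34"},
       "vfs": {"inode_in_use": {"read": "0xc0100400"}},
       "selinux": {"hooks": "hash:lsm"}}

def report(mdt, classes=None, digest_only=False):
    """Select only the MDT portions a scenario requires; optionally
    report a hash of the selection instead of the data itself."""
    selected = {k: v for k, v in mdt.items()
                if classes is None or k in classes}
    if digest_only:
        blob = json.dumps(selected, sort_keys=True).encode()
        return hashlib.sha1(blob).hexdigest()
    return selected

full = report(mdt)                                    # everything
minimal = report(mdt, classes={"kernel_text"})        # one class only
fingerprint = report(mdt, classes={"kernel_text"}, digest_only=True)
```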
The use of the MDT also supports freshness. When remeasurement is necessary, only those portions of the MDT for which the IMS needs fresher measurements need to be recalculated. This should reduce the impact of remeasurement by avoiding wasteful remeasurement of unaffected portions of the system.
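A sketch of timestamp-driven selective remeasurement over a toy two-section MDT; the section names and staleness threshold are assumptions for illustration:

```python
import time

# Toy MDT sections with per-section timestamps (names illustrative).
now = time.time()
mdt = {"kernel_text": {"value": "hash:text", "ts": now - 10},
       "vfs": {"value": "hash:vfs", "ts": now - 300}}

def remeasure_stale(mdt, max_age, measure_fn, now):
    """Recalculate only the sections older than max_age seconds,
    leaving still-fresh sections untouched."""
    redone = []
    for name, sec in mdt.items():
        if now - sec["ts"] > max_age:
            sec["value"] = measure_fn(name)
            sec["ts"] = now
            redone.append(name)
    return redone

redone = remeasure_stale(mdt, max_age=60,
                         measure_fn=lambda n: "hash:" + n + ":fresh",
                         now=now)
```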
LKIM's use of an MDT enhances an IMS's ability to meet privacy requirements. When measurement produces data that should not be released in all attestation scenarios, an IMS can dynamically filter the MDT depending on the concerns of the current scenario. Using an MDT supports privacy by enabling an IMS to allow sensitive portions of the measurement data to be sent to trusted third parties so that they may perform the attestations to entities that are not entitled to see the private data. This has the secondary benefit of relieving the burden of integrity decisions at the systems that initiated attestations by allowing specialized systems to be used.
This hierarchical structure of the MDT allows selective reporting of measurement data on any or all of the kernel subsystems. The MDT includes freshness information in the form of time stamps. Depending on the completeness requirements for the current situation, LKIM can select different portions of the MDT, pruning the tree as required. Remeasurement can be selectively performed on sections as needed. For example, a simple scenario may require only the hash of the kernel text, but from the same MDT, more complex scenarios can also be supported. Along with the kernel text's hash, a report on function pointers might be required so that it can be verified that all function pointers refer to locations in the text section that are represented in the baseline. As an even more complex scenario, the report might require additional information about function pointer groupings (i.e., pointers stored in the same structure) so that it can be determined that they are similarly represented in the baseline. Using the MDT, LKIM is able to support each of these scenarios without modification.
To investigate the feasibility of contextual inspection, LKIM was initially implemented as a Linux kernel module. It executed out of the same address space as the target Linux system, using the kernel to report the measurement results. Although this initial system produced encouraging results with respect to completeness and freshness, there was a noticeable impact on the target kernel. To address this, LKIM was moved into a user-space process, accessing kernel data through the /dev/kmem interface. Moving to the richer user-space environment had the additional benefit of enabling LKIM's data reporting to be enhanced.
Although LKIM could be deployed like this today, it is not recommended. There is no way to protect LKIM from Linux. In fact, Linux must cooperate with LKIM if any measurement data is to be produced at all, as the LKIM process is totally dependent on Linux for all resources that it requires. The quality of the data collected will always be questionable, since Linux would be free to misrepresent the true kernel state to LKIM.
To address protection concerns, LKIM was ported to the Xen hypervisor. The Xen architecture allows functionality to be isolated in virtual machines (VMs). LKIM was placed in a separate VM from the target Linux system and uses Xen memory mapping functionality to gain the necessary access for measurement. By separating LKIM in this way, LKIM's operation can be protected from Linux, allowing it to perform its measurements and store results without fear of interference from Linux.
This approach succeeds in removing the measurement system from the direct control of the target operating system. However, more is required. With LKIM running in a separate Xen VM, an ability to produce measurements about LKIM and the Xen hypervisor might be necessary to satisfy completeness requirements. Linking all measurements to a hardware root of trust using a TPM could also be required. An IMS designed to use LKIM running in a VM should address these issues.
The contextual inspection approach used by LKIM comes at a significant cost in terms of impact on the target and complexity for the decision process. However, the gains in flexibility and completeness can justify this expense, especially if the target is vulnerable to compromises that cannot be detected by hashing. This is the value proposition for LKIM.
The ability to detect rootkits that only infect dynamic data has been demonstrated by LKIM. Detecting modifications to the kernel text area and static data areas can be accomplished with a hash. However, the adore-ng rootkit targets the Linux Virtual File System (VFS) data structures that are dynamically created in kernel data (J. Levine, J. Grizzard, and H. Owen, Detecting and categorizing kernel-level rootkits to aid future detection, IEEE Security and Privacy, 2006). It redirects the reference to file system operations on the /proc file system to new operations introduced by the module. By traversing the list of active inodes, LKIM reports the existence of the reference to the adore-ng code. A verification check against the baseline of allowable operations then detects its presence. This allows a challenger to detect many redirection attacks by comparing the measurement of a running system to a baseline generated from the static image of an approved kernel.
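The baseline check can be sketched as a set-membership test per function pointer; the structure names and addresses below are made up for illustration:

```python
# Baseline of allowable /proc operation targets, as would be produced
# off-line from a static image (structure names and addresses made up).
baseline = {"proc_ops.read": {0xC0102000},
            "proc_ops.lookup": {0xC0102400}}

def check_against_baseline(measured, baseline):
    """Flag any measured function pointer whose target is not among
    the baseline's allowable values: the adore-ng pattern of
    redirecting VFS operations to module-introduced code."""
    return [(name, tgt) for name, tgt in measured
            if tgt not in baseline.get(name, set())]

measured = [("proc_ops.read", 0xC0102000),
            ("proc_ops.lookup", 0xD0000000)]  # redirected by a rootkit
violations = check_against_baseline(measured, baseline)
```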
Performance considerations of an integrity measurement system design include the impact on target performance, the response of the measurement agent to requests for measurement data, and the time it takes the decision maker to process measurement data. The initial analysis of LKIM's performance focuses only on the first two concerns. Since the measurement agent and the target operating system share computing resources, reducing the impact on the target may come at the cost of a longer response to measurement requests, and vice versa. This trade-off assumes that the number of data structures inspected is fixed; in practice, the workload of the target operating system determines the number of data structures LKIM inspects.
Testing was performed using a standard desktop platform (a Dell Optiplex GX400 with 1 GB of RAM) under two simulated workloads. The same hardware was used for both the Xen and native configurations, each running a Linux 2.6.16.13 kernel with 256 MB of RAM. Resource contention between LKIM and the target workload is managed by a scheduler, and consequently the measurement duration is determined by the priority given to the measurement agent; default scheduling algorithms were used in both configurations. Target kernel workloads were simulated by the Webstone benchmark and a build of the Linux kernel. Webstone (Mindcraft, Inc., http://www.mindcraft.com, WebStone 2.x Benchmark Description) performs mostly I/O processing, and the number of measurement variables is a function of the number of clients hitting the server. The kernel build workload provides a combination of I/O and CPU utilization while creating a large number of variables for LKIM to measure. In all cases, the set of measurement instructions included the full set of classes described above.
FIG. 7(a) shows the processing timeline for LKIM under each workload configuration. Using the Xen control mechanisms, LKIM is able to suspend the target during measurement: the timeline shown represents LKIM processing during peak activity for each workload. Without any workload on the target kernel, LKIM takes just under 1 second to inspect nearly 18,000 kernel variables. Inspection of VFS accounts for the majority of this time, with SELinux and other variable inspections taking approximately 260 ms. Under Webstone, the number of variables increases only slightly, but with Linux build the number increases to just over 36,000 variables. In each case, the increase in variables is due to an increase in dynamically created data structures within VFS.
The impact of measurement on the target kernel can be regulated by adjusting the measurement frequency. FIG. 7(b) shows how target performance is affected by LKIM processing with the measurement interval fixed at 2 minutes. For each workload, the relative performance is shown for both Native and Xen configurations.
The performance results show where improvements in efficiency would make the best gains for the Xen architecture. The biggest improvement would be to reduce the number of variables measured. Currently, LKIM assumes all objects need to be inspected for each measurement run. A better approach would be to recognize which objects have been modified and only measure those. Xen provides a way to detect which pages have been dirtied by the target but the largest set of objects, the VFS nodes, are in a linked list. A more sophisticated algorithm would be needed to locate only the entries that have changed.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
What is claimed is:
1. A computer program product, comprising a non-transitory computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for measuring and verifying the integrity of a running computer program, the method comprising the step of examining the integrity of the running computer program's execution state comprising the steps of:
a. measuring the integrity of the running computer program's code during runtime;
b. measuring the integrity of the running computer program's data comprising a plurality of data objects accessed by the computer program during runtime, the measuring the integrity of the running computer program's data step comprising the steps of:
i. identifying the plurality of data objects using a plurality of attributes relevant to the running computer program's integrity to produce a baseline of the plurality of data objects from a stored image of the running computer program;
ii. measuring an image of the running computer program in a memory without modifying the running computer program to produce a measurement manifest comprising the steps of:
a. inspecting the identified plurality of data objects;
b. generating an abstract of an object graph for each data object; and
c. using the abstracts of the object graphs to produce the measurement manifest; and
iii. comparing the baseline and the measurement manifest to verify the integrity of the running computer program's data; and
c. inserting a trigger in the running computer program whose integrity has been measured to independently measure and verify the integrity of a new module before the new module is loaded into the memory.
2. The method of claim 1, further comprising the steps of:
a. computing the security relevant attributes of the new module based on a stored image and the new module's target location in memory; and
b. entering the computed attributes in a baseline and adding the static objects of the new module to a list of items being measured.
3. The method according to claim 1, further comprising the step of producing a baseline for the new module comprising the step of:
a. computing a hash of a text of the new module in the memory from the stored image of the new module and the location in the memory where the new module is being loaded.
4. A computer program product, comprising a non-transitory computer usable medium having a computer readable program code embodied therein, said computer readable program code instructing a microprocessor to implement a method for measuring and verifying the integrity of a computer program and modules being loaded from a stored location into a memory comprising the steps of:
a. calculating an image of the computer program in the memory using an image of the computer program in the stored location, the relevant runtime information and knowledge of how the computer program will be loaded into the memory;
b. comparing, using the microprocessor, an image of the computer program in the memory with the calculated image of the computer program in the memory; and
c. using the comparison to verify the integrity of the computer program in the memory.
Abstract—Automated program repair has been used to provide feedback for incorrect student programming assignments, since program repair captures the code modification needed to make a given buggy program pass a given test-suite. Existing student feedback generation techniques are limited because they either require manual effort in the form of providing an error model, or require a large number of correct student submissions to learn from, or suffer from lack of scalability and accuracy.
In this work, we propose a fully automated approach for generating student program repairs in real-time. This is achieved by first re-factoring all available correct solutions to semantically equivalent solutions. Given an incorrect program, we match the program with the closest matching refactored program based on its control flow structure. Subsequently, we infer the input-output specifications of the incorrect program’s basic blocks from the executions of the correct program’s aligned basic blocks. Finally, these specifications are used to modify the blocks of the incorrect program via search-based synthesis.
Our dataset consists of almost 1,800 real-life incorrect Python program submissions from 361 students for an introductory programming course at a large public university. Our experimental results suggest that our method is more effective and efficient than recently proposed feedback generation approaches. About 30% of the patches produced by our tool Refactory are smaller than those produced by the state-of-the-art tool Clara, and can be produced given fewer correct solutions (often a single correct solution) and in a shorter time. We opine that our method is applicable not only to programming assignments, but could also be seen as a general-purpose program repair method that can achieve good results with just a single correct reference solution.
Index Terms—Program Repair, Programming Education, Software Refactoring
I. INTRODUCTION
Program repair is an emerging technology that seeks to rectify program errors automatically, thereby meeting a correctness criterion, such as passing a test-suite. Besides improving programmer productivity, this technique can be applied to programming education. Particularly, program repair has been applied to automated grading [1], and providing hints about program errors [2]. In this work, we propose a repair method where an incorrect program can be repaired with the help of one or more correct reference solutions. While our approach is general-purpose, in our experiments, we focus on generating repair-based feedback for incorrect programming assignments.
Program repair has previously been used to provide feedback on incorrect student submissions for programming assignments [1]–[7]. The programming assignments are usually segments of code, so the limited scalability of existing program repair techniques is not a concern. However, it has been observed from a corpus of programming assignments that student submissions are often severely incorrect [1]. This is in stark contrast to the “competent programmer hypothesis” that assumes code bases are largely correct. Since programming assignments are written by novice programmers and can be substantially erroneous, they are a testbed to validate the effectiveness of program repair techniques. Since the submissions for programming assignments are often incorrect, the search space of edits to be navigated for program repair can be very large, even though the program might be small.
Existing systems that repair incorrect programming assignments have significant drawbacks because of the manual effort involved, underlying assumptions about the availability of correct solutions, and scalability or accuracy concerns. Approaches like Autograder [5] assume the availability of an error model that has to be provided manually. Efforts like skp [8] rely on neural networks to correct programs and suffer from low precision; a recent work has extended neural reasoning with symbolic analysis [6]. However, the accuracy of repairs typically remains low in such efforts. Refazer [7] learns program transformation schema from past submissions and its performance critically depends on the quality and quantity of corpus available. The recent works of Clara [3] and Sarfgen [4] compare an incorrect assignment with an available correct assignment. Such approaches assume the availability of a large number and diversity of correct solutions. However, this assumption often does not hold in practice, e.g. when a newly crafted assignment is given by an instructor.
Technical Contribution: The main technical contribution of this paper is a fully automated program repair method for repairing incorrect student submissions for programming assignments. While our technique can exploit the availability
of a large number of correct solutions to perform better, we only assume and require one correct reference solution. Our approach is to use re-factoring rules to generate a correct solution with the same control flow as the incorrect program. Since the buggy program and the re-factored correct program possess the same (or similar) control flow, we compare their basic blocks and generate candidate variable mappings between the two programs based on dynamic observations over test executions and static analysis. Given such a variable mapping, we formulate the program repair problem as judiciously synthesizing expressions at selected basic blocks to meet the given correctness criterion (such as passing a test-suite). This synthesis problem is solved by efficient search-based synthesis where a large space of expressions is efficiently navigated to construct minimal repairs. The expressions considered for repairs of the basic blocks are obtained from expression templates or by mutating existing expressions. Our Refactory tool implementation of the above approach has been made available at https://github.com/githubuyang/refactory.
Conceptual Contribution and Results: If we envision the feedback generation problem through means of automated program repair as one of search space construction and traversal (with the search space capturing the possible edits of the buggy program), our solution enables a novel way to present and understand this search space. This is the main conceptual contribution of the work, and we believe this also leads to superior experimental results, as evidenced by our repair tool for actual Python programs from a real student submission data set. By separating the control flow matching (obtained via refactoring) from data-flow matching (achieved via search-based synthesis), we can construct small legible program repairs to be used as feedback to the students. We evaluate our approach on a large data set of 1,783 buggy student programs, curated from five different Python assignments offered during a first-year university course credited by 361 students. Our tool Refactory achieves a higher repair rate, smaller patch size and less overfitting when compared to state-of-the-art tools such as Clara [3]. To verify the generality of our approach and crafted refactoring rules, we randomly sample an additional six assignments containing 7,290 buggy student programs and observe similar results (Section VI). In addition to the practical utility of our technique in feedback generation, we believe that our viewpoint of cleanly partitioning the search-space of edits by separating control flow matching from expression synthesis can be useful for automated program repair.
II. OVERVIEW
Fig. 1 gives a high-level overview of our approach. Our approach takes three inputs: a test-suite \( T \), a buggy program \( P_b \) and (one or more) correct programs \( C \). Our approach includes three phases, which are elaborated in the following.
Phase 1. Refactoring: Given a set of refactoring rules, we conduct software refactoring on correct programs \( (C) \) to generate additional correct programs with new control flow structures. For example, Fig. 2a shows a correct program for the programming assignment sequential search, which
outputs how many numbers in a sorted number sequence \( \text{seq} \) are smaller than \( x \). To generate a correct program with new control flow, we mutate the control flow of the correct program by adding an empty else branch to an if branch. The refactored correct program is shown in Fig. 2b.
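The refactoring step above can be sketched as a small AST transformation. The following is a minimal illustration (not the tool's actual implementation), assuming Python 3.9+ for `ast.unparse`:

```python
import ast

class AddEmptyElse(ast.NodeTransformer):
    """Sketch of a refactoring rule that adds an empty 'else: pass' branch
    to every if statement that lacks an else branch (mirroring the
    Fig. 2a -> Fig. 2b transformation described in the text)."""

    def visit_If(self, node):
        self.generic_visit(node)        # refactor nested ifs first
        if not node.orelse:             # no else branch present
            node.orelse = [ast.Pass()]  # add 'else: pass'
        return node

def refactor(source: str) -> str:
    tree = AddEmptyElse().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)            # Python 3.9+

correct = (
    "def search(x, seq):\n"
    "    for i in range(len(seq)):\n"
    "        if x <= seq[i]:\n"
    "            return i\n"
    "    return len(seq)\n"
)
print(refactor(correct))
```

Both versions are semantically equivalent; the refactored variant merely exposes an additional control flow structure to match against.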
Phase 2. Structure Alignment: We perform structure matching to find refactored correct programs that have the same control flow structure as the buggy program \( P_b \). If we cannot find such programs, \( P_b \) may have bugs in its control flow. To fix such bugs, we conduct structure mutation, which edits the control flow structure of \( P_b \) to that of the closest refactored correct program in terms of tree edit distance.
Phase 3. Block Repair: Among all correct programs that have the same control flow structure as the buggy program, we search for the correct programs that are the top-k closest to the buggy program \( P_b \) (we set \( k = 5 \) in our experimental evaluation). For any of these top-k closest programs, if we can construct a patch passing the given test-suite \( T \), we have succeeded in repairing, and hence in generating feedback.
Phase 3.1 Block Mapping: We build a mapping between basic blocks in a correct program \( P_c \) and those of \( P_b \) based on the graph isomorphism of the control flow graphs of \( P_c \) and \( P_b \). For example, consider the buggy program in Fig. 2c and the refactored correct program in Fig. 2b, where lines 2, 3, 4, 6, and 7 form different basic blocks (although line 7 in the buggy program is empty, we regard it as an empty basic block). Assume that \( B^c_i \) is the basic block in line \( i \) in the refactored correct program, and \( B^b_i \) is the basic block in line \( i \) in the buggy program. Based on graph isomorphism, we can get \( \{ B^c_i \mapsto B^b_i \}, i \in \{2,3,4,6,7\} \).
Phase 3.2 Variable Mapping: We build a variable mapping between the correct program \( P_c \) and the buggy program \( P_b \) using dynamic equivalence analysis (DEA) [3] and define/use analysis (DUA). In DEA, we collect the trace of each variable when running the correct and the buggy programs, and then map two variables if they take the same values in the same order when running the same test. For variables that are not mapped by DEA, we apply DUA, which maps two variables if the blocks where the first variable is defined/used correspond
to the blocks where the second variable is defined/used. To illustrate these approaches, consider building a variable mapping between the buggy program in Fig. 2c and the refactored correct program in Fig. 2b using the tests search(2,[1,2,3]) and search(3,[4,5,6]). Table Ia and Table Ib show all the variable traces collected using DEA. Since the traces of e and x are the same, and the traces of lst and seq are the same, we get a variable mapping \{e \mapsto x, lst \mapsto seq\}. Note that j and i are not mapped by DEA because their traces are different. Then, we execute DUA, which identifies that j and i are defined in line 2 and used in lines 3 and 4, and the basic blocks in lines 2, 3, and 4 in the buggy program correspond to the basic blocks in lines 2, 3, and 4 in the correct program. Thus, we map j to i, and finally obtain the variable mapping \{e \mapsto x, lst \mapsto seq, j \mapsto i\}.
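The DEA step above can be sketched as follows; the trace dictionaries are hypothetical values for the running example, standing in for traces that would be collected by instrumenting the two test executions:

```python
def dea_mapping(buggy_traces, correct_traces):
    """Dynamic Equivalence Analysis sketch: map a buggy-program variable
    to a correct-program variable iff their per-test value traces agree."""
    mapping = {}
    for b, tb in buggy_traces.items():
        for c, tc in correct_traces.items():
            if tb == tc:
                mapping[b] = c
                break
    return mapping

# Hypothetical traces for the tests search(2,[1,2,3]) and search(3,[4,5,6])
correct_traces = {
    "x":   [[2], [3]],
    "seq": [[[1, 2, 3]], [[4, 5, 6]]],
    "i":   [[0, 1], [0]],
}
buggy_traces = {
    "e":   [[2], [3]],
    "lst": [[[1, 2, 3]], [[4, 5, 6]]],
    "j":   [[0], [0]],   # the bug alters control flow, so j's trace differs from i's
}
print(dea_mapping(buggy_traces, correct_traces))   # {'e': 'x', 'lst': 'seq'}
```

As in the text, j is left unmapped by DEA and is handed over to DUA.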
**Phase 3.3 Specification Inference**: We generate a specification for each basic block in the buggy program \(P_b\) by (1) collecting inputs and outputs of each basic block in the correct program and (2) using the variable mapping to translate them into the inputs and expected outputs of each basic block in the buggy program. For instance, consider the buggy basic block \(e < lst[j]\) in the buggy program. We collect the inputs and outputs of its corresponding basic block \(x <= seq[i]\) in the refactored correct program (Table Ic). Then, we replace the variables using the variable mapping \{e \mapsto x, lst \mapsto seq, j \mapsto i\} to generate the specification of \(e < lst[j]\).
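Translating a specification through the variable mapping amounts to renaming state bindings; a minimal sketch, with hypothetical input/output states:

```python
def translate_spec(spec_c, var_map):
    """Specification-inference sketch: rewrite each input/output state of
    the correct block into the buggy program's variable names, using the
    (buggy -> correct) variable mapping in reverse."""
    inv = {c: b for b, c in var_map.items()}   # correct -> buggy names
    rename = lambda state: {inv.get(v, v): val for v, val in state.items()}
    return [(rename(i), rename(o)) for i, o in spec_c]

var_map = {"e": "x", "lst": "seq", "j": "i"}   # mapping from Phase 3.2
spec_c = [({"x": 2, "seq": [1, 2, 3], "i": 1},    # input state of the correct block
           {"x": 2, "seq": [1, 2, 3], "i": 1})]   # expected output state
print(translate_spec(spec_c, var_map))
```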
**Phase 3.4 Block Patch Synthesis**: We use the input-output specification derived in the previous step to check the correctness of each basic block in the buggy program. In the buggy program shown in Fig. 2c, the basic blocks in lines 3, 6 and 7 do not satisfy their input-output specifications, and hence we deem them to be in need of repair. We attempt to generate a patch for each incorrect basic block in the buggy program.
If the basic block in \(P_b\) is empty, we fix it based on the variable mapping and its corresponding basic block in the correct program \(P_c\). For example, consider the buggy basic block in line 7, which is an empty basic block. Its corresponding basic block in the refactored correct program is \(return len(seq)\). Based on the variable mapping \(\{e \mapsto x, lst \mapsto seq, j \mapsto i\}\), we replace the empty basic block in line 7 of the buggy program with a fixed basic block \(return len(lst)\).
If the basic block in \(P_b\) is not empty, while its corresponding basic block in \(P_c\) is empty, we fix it by making it empty. For example, consider the incorrect basic block in line 6. Its corresponding basic block in the refactored correct program is \(pass\), which is a key word to show it is an empty basic block. We fix the basic block in line 6 of the buggy program to \(pass\).
If the basic block in \(P_b\) and its corresponding basic block in \(P_c\) are both non-empty, then we synthesize a patch for the buggy basic block using its specification. Given a set of suspicious lines in a buggy basic block, we insert holes to produce a partial program. Then, we perform enumerative synthesis with test-equivalence analysis [9] to fill the holes in the partial program. We use two heuristics to generate expression candidates. First, we utilize expression templates (i.e., syntax patterns [10] of expressions) in correct programs. For example, given the expression \(x <= seq[i]\) in the refactored correct program, we can extract an expression template \(v_0 <= v_1[v_2]\) where \(v_0, v_1, v_2\) are free variables. Using this template, we can generate a candidate \(e <= lst[j]\). We also generate expression candidates by mutating operators or variables of the expressions in the buggy program. For example, given the expression \(e < lst[j]\) in the buggy program, we generate candidates such as \(e <= lst[j]\), \(e < lst[i]\), and \(j < lst[j]\).
Once the search space of candidate expressions is constructed, we traverse them efficiently using an approach based on test-equivalence analysis [9]. In this approach, the candidate expressions are grouped together if they behave identically on the given input-output examples (these are the specification we derived earlier). Such an approach greatly contributes to the scalability of our technique, since it helps to avoid traversing and checking the candidate patches one by one.
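Grouping candidates by test-equivalence can be sketched as follows; the candidate expressions and input states are illustrative, not the tool's actual search space:

```python
def partition_by_test_equivalence(candidates, inputs):
    """Group candidate condition expressions whose outcomes agree on every
    given input state; only one representative per class is then validated."""
    classes = {}
    for expr in candidates:
        # The signature is the tuple of outcomes over all input states
        signature = tuple(bool(eval(expr, {}, dict(env))) for env in inputs)
        classes.setdefault(signature, []).append(expr)
    return classes

inputs = [{"e": 2, "lst": [1, 2, 3], "j": 1},
          {"e": 3, "lst": [4, 5, 6], "j": 0}]
candidates = ["e <= lst[j]", "e < lst[j]", "e < lst[j] + 1", "e > lst[j]"]
for sig, exprs in partition_by_test_equivalence(candidates, inputs).items():
    print(sig, exprs)
```

Here `e <= lst[j]` and `e < lst[j] + 1` land in the same class, so only one of the two needs to be executed against the tests.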
After generating patches for each basic block, we combine them into a global patch and validate its correctness via the test-suite. Fig. 2d shows the fixed program.
III. REFACTORING AND STRUCTURE MUTATION
In this section, we introduce refactoring rules for mutating the control flow structure of existing correct solutions to generate new semantically equivalent correct solutions with different control flow structures. This step is necessary since the accuracy of repairing a given buggy program depends on finding a correct program with similar control flow structure.
We designed generic rules based on the observation that the same algorithm can have syntactically different implementations. For example, although the two programs in Fig. 3 behave equivalently and contain the same basic blocks, the control flow structure of
```python
# (a) A correct program
1 def search(x, seq):
2     for i in range(len(seq)):
3         if x <= seq[i]:
4             return i
5     return len(seq)

# (b) A refactored correct program
1 def search(x, seq):
2     for i in range(len(seq)):
3         if x <= seq[i]:
4             return i
5         else:
6             pass
7     return len(seq)

# (c) A buggy program (line 7 is empty)
1 def search(e, lst):
2     for j in range(len(lst)):
3         if e < lst[j]:
4             return j
5         else:
6             return len(lst)
7

# (d) A fixed program
1 def search(e, lst):
2     for j in range(len(lst)):
3         if e <= lst[j]:
4             return j
5         else:
6             pass
7     return len(lst)
```
**Fig. 2**: Example programs of the sequential search programming assignment from our dataset.
A. Existing conditional transformations

1) Successor statements to a conditional jump: Rule $R_{A_1}$ can transform one of these programs into the other, since the control flow of program $CF_x$ (resp. $CF_y$) matches the control flow of rule $CF_a$ (resp. $CF_b$).
2) Conditional statement with conjunction: An if statement with the condition being a conjunction of $C_1$ and $C_2$ (Fig. 4c) can be rewritten as a nested if structure, containing the conditions $C_1$ and $C_2$ individually (Fig. 4d), using rule $R_{A_2}$.
B. New conditional transformations
This set of rules introduces additional guards, either around arbitrary statements or around existing conditionals.
1) Introduce new if conditionals: In this rule $R_{B_1}$, we introduce three types of if conditional blocks. Fig. 4f adds a trivially true conditional guard around an arbitrary node $S$. Fig. 4g introduces a trivially false conditional guard around an arbitrary block $B^*_1$. Fig. 4h introduces an arbitrary condition $C^*_1$ around a pass (no-op) statement.
The arbitrary block $B^*_1$ (respectively, condition $C^*_1$) is a placeholder which can match and copy any corresponding block (resp. condition) of the incorrect program during the block mapping phase of our approach, described later in Section IV-A.
2) Introduce new Elif/Else branch: The rule $R_{B_2}$ introduces Elif and Else branching statements to an existing if conditional statement. Fig. 4j adds a trivially false Elif branch containing arbitrary block $B^*_1$. Fig. 4k introduces an arbitrary $C^*_1$ conditional Elif branch, around a pass (no-op) statement. Fig. 4l adds an Else branch containing a pass statement.
C. Loop guards
This set of rules deals with introducing additional guards surrounding an existing loop structure.
1) Introduce guard around For loop: Programs containing a for statement that loops over an iterator (such as a list) can be mutated into a new program structure by introducing guards around the loop, targeting the case when the iterator is empty (Fig. 4m to Fig. 4n) or non-empty (Fig. 4m to Fig. 4o).
2) Introduce guard around while loop: Similar to the previous rule, guards can be introduced in programs that loop over an iterator using a while loop, targeting the case when the iterator is empty (Fig. 4p to Fig. 4q) or non-empty (Fig. 4p to Fig. 4r).
D. While loop transformations
This set of rules replaces a while loop structure with an equivalent conditional jump statement, or vice-versa.
1) Conditional break inside while loop: A program which loops until a condition $C_1$ is satisfied (Fig. 4s) can be refactored into another program which loops indefinitely, with a $C_1$ conditional break instruction inside the loop’s body (Fig. 4t).
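As an illustration (the functions below are hypothetical, not taken from the dataset), rule $R_{D_1}$ rewrites a guarded loop into an infinite loop with a conditional break:

```python
def count_up(n):
    i = 0
    while i < n:          # original guarded loop (Fig. 4s shape)
        i += 1
    return i

def count_up_refactored(n):
    i = 0
    while True:           # rule R_D1 sketch: loop indefinitely ...
        if not (i < n):   # ... and break once the guard fails (Fig. 4t shape)
            break
        i += 1
    return i

assert count_up(5) == count_up_refactored(5) == 5
```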
[Fig. 4: List of refactoring rules, shown as control-flow schemas in five categories: (A) existing conditional transformations, (B) new conditional transformations, (C) loop guards, (D) while loop transformations, and (E) loop unrolling.]
2) Unconditional return inside while loop: A while loop which contains an unconditional return jump (Fig. 4u) can be replaced with an equivalent if conditional statement (Fig. 4v), since the block of statements inside the loop will be executed only once on successful satisfaction of the loop guard.
3) Unconditional break inside while loop: Similar to the previous rule, a while loop which contains an unconditional break jump (Fig. 4w) can be replaced with an equivalent if conditional statement without the break statement (Fig. 4x).
E. Loop unrolling
1) Iterator slicing: A for loop iterating over a sequence (Fig. 4y) can be split into two loops that iterate over two distinct sets of consecutive elements (Fig. 4z). The operator Slice(I, i1, i2) returns the subsequence of elements of I starting at index i1 until index i2.
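A hypothetical example of rule $R_{E_1}$, where Slice corresponds to Python's sequence slicing:

```python
def total(seq):
    s = 0
    for x in seq:         # original single loop
        s += x
    return s

def total_split(seq):
    s = 0
    mid = len(seq) // 2
    for x in seq[:mid]:   # Slice(seq, 0, len(seq)//2)
        s += x
    for x in seq[mid:]:   # Slice(seq, len(seq)//2, len(seq))
        s += x
    return s

assert total([1, 2, 3, 4, 5]) == total_split([1, 2, 3, 4, 5]) == 15
```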
F. Structure Mutation
Given a buggy program $P_b$, we first search for a program $P_c$ from the set of correct student submissions and their refactored variants, such that $P_c$ has the same control-flow structure as $P_b$. If no such match is found, we attempt structure mutation, which modifies the control-flow structure of the buggy program $P_b$. First, it searches for the program $P'_c$ closest with respect to control-flow structure among the set of correct programs and their refactored variants. Then, it borrows a minimal number of control-flow nodes (such as if-conditional or loop statements) from $P'_c$ into $P_b$, in order to make their structures isomorphic.
Unlike refactoring rules, which mutate the control-flow structure of correct programs while preserving semantic equivalence, structure mutation does not offer such a guarantee.
IV. BLOCK REPAIR
Given a correct program $P_c$ that has the same control flow structure as the buggy program $P_b$, we execute the block repair algorithm to repair $P_b$. The algorithm consists of four stages. First, we construct a block mapping based on the isomorphism of the two control flow graphs (CFG). Second, we find a mapping between the variables of $P_b$ and $P_c$. Third, we infer a correct specification for each basic block in $P_b$ from $P_c$. Finally, we synthesize a patch for each basic block of $P_b$, and combine all block patches into a global patch.
A. Block Mapping
The goal of this stage is to find a mapping between the basic blocks of $P_b$ and those of $P_c$. Since $P_b$ and $P_c$ have the same control-flow structure, their control flow graphs are isomorphic. Thus, a block mapping is effectively an isomorphism between the two control flow graphs.
Definition 1. Block Mapping. Let $\mathcal{G}(P_c)$ be the CFG of $P_c$ with nodes $\{B^c_i\}_{i \in 1..n}$ and $\mathcal{G}(P_b)$ be the CFG of $P_b$ with nodes $\{B^b_i\}_{i \in 1..n}$. We define a block mapping $\mathcal{B}(P_c, P_b)$ as a CFG isomorphism $\{B^c_1 \mapsto B^b_{j_1}, \ldots, B^c_n \mapsto B^b_{j_n}\}$ between $\mathcal{G}(P_c)$ and $\mathcal{G}(P_b)$, where $j_1, \ldots, j_n$ are different indexes from 1 to n.
B. Variable Mapping
The purpose of variable mapping is to identify how variables of $P_c$ correspond to the variables of $P_b$.
Definition 2. Variable Mapping. Let $x_1, ..., x_m$ be the variables of the correct program $P_c$, and $y_1, ..., y_n$ be the variables of the buggy program $P_b$. Then, $\{x_{i_1} \mapsto y_{j_1}, ..., x_{i_s} \mapsto y_{j_s}\}$ is a mapping of variables if $i_1, ..., i_s \in 1..m$ are different indices and $j_1, ..., j_s \in 1..n$ are different indices.
Since there exist many possible mappings, we apply Dynamic Equivalence Analysis (DEA) and Define/Use Analysis (DUA) to filter out irrelevant mappings. In DEA, we collect the variable traces on $P_b$ and $P_c$. The trace of a variable in a test refers to the sequence of values that the variable takes during the test execution.
The intuition behind DEA is that if a variable $x$ in $P_c$ takes the same values in the same order as a variable $y$ in $P_b$ during each test execution, then they represent the same user intent. In this case, we say $y$ is a variable candidate of $x$.
Definition 3. Mapped Variable Candidates in DEA. Let $x$ be a variable in $P_c$, $y$ be a variable in $P_b$, and $T$ be a set of tests. $M_{DEA}(x)$ represents a set of variable candidates in $P_b$ that $x$ can be mapped to. We define $y \in M_{DEA}(x)$ iff for each test $t \in T$, the sequence of values that $y$ takes during the execution of $P_b$ with $t$ is the same as the sequence of values that $x$ takes during the execution of $P_c$ with $t$.
In DUA, we assume that variables that are defined and used in the same manner are more likely to have the same user intent. We get a set of variable candidates in $P_b$ which a variable in $P_c$ can be mapped to as follows.
Definition 4. Mapped Variable Candidates in DUA. Let $\mathcal{D}(P, x)$ be the set of basic blocks in the program $P$ where the variable $x$ is defined, $\mathcal{U}(P, x)$ be the set of basic blocks in the program $P$ where the variable $x$ is used, $r$ be a variable in $P_c$ and $s$ be a variable in $P_b$. $M_{DUA}(r)$ represents a set of variable candidates in $P_b$ that $r$ can be mapped to in DUA. We define $s \in M_{DUA}(r)$ iff (1) there exists a one-one block mapping from $\mathcal{D}(P_c, r)$ to $\mathcal{D}(P_b, s)$ and (2) there exists a one-one block mapping from $\mathcal{U}(P_c, r)$ to $\mathcal{U}(P_b, s)$.
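Definition 4 can be sketched directly in code; the def/use sets below encode the running example, where j (buggy) and i (correct) are defined in block 2 and used in blocks 3 and 4:

```python
def dua_candidate(r, s, D_c, U_c, D_b, U_b, block_map):
    """Define/Use Analysis sketch: s (in P_b) is a candidate for r (in P_c)
    iff the block mapping carries r's def blocks onto s's def blocks and
    r's use blocks onto s's use blocks."""
    return ({block_map[b] for b in D_c[r]} == D_b[s] and
            {block_map[b] for b in U_c[r]} == U_b[s])

block_map = {2: 2, 3: 3, 4: 4, 6: 6, 7: 7}   # B^c_i -> B^b_i, as in Phase 3.1
D_c, U_c = {"i": {2}}, {"i": {3, 4}}         # def/use blocks of i in P_c
D_b, U_b = {"j": {2}}, {"j": {3, 4}}         # def/use blocks of j in P_b
print(dua_candidate("i", "j", D_c, U_c, D_b, U_b, block_map))   # True
```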
Finally, we rule out all invalid variable mappings that do not map the variable $r$ in $P_c$ to any variable candidates in $P_b$.
Definition 5. Valid Variable Mapping. Let $\{c_{i_1} \mapsto b_{j_1}, \ldots, c_{i_n} \mapsto b_{j_n}\}$ be a variable mapping between $P_c$ and $P_b$. We say that the variable mapping is invalid if and only if $\exists r \in 1..n : b_{j_r} \not\in M_{DEA}(c_{i_r}) \cup M_{DUA}(c_{i_r})$.
If no variable mapping is valid, the block repair algorithm will report a repair failure on $P_b$. Otherwise, we enumerate all valid variable mappings one-by-one until the algorithm successfully infers a specification and synthesizes a patch.
C. Specification Inference
First, we analyze the correct program to extract a specification for each basic block. This is done by running $P_c$
on our test-suite \( T \) to collect input-output state pairs of each basic block in \( P_c \). Here, input state refers to the values of all variables before executing the basic block, and output state refers to these values after executing the basic block.
**Definition 6. Specification.** Let \( B \) be a basic block in a program \( P \), and \( T \) be the test-suite. The specification of \( B \) is defined as a set of input-output state pairs \( \{ (I_j, O_j) \}_{j \in 1, \ldots, r} \), where \( I_j \) denotes the input state and \( O_j \) denotes the expected output state of \( B \) given \( I_j \) as the input state.
Note that in the above definition we use a set of state pairs because each basic block can be executed multiple times during a test execution. Our algorithm infers a specification of a basic block \( B_b \) in the buggy program based on that of its corresponding basic block \( B_c \) in the correct program and the valid variable mapping \( M \).
**Definition 7. Specification Inference.** Let \( B_c \) be a basic block in the correct program \( P_c \), \( \{ (I^c_j, O^c_j) \}_{j \in 1..r} \) be a specification of \( B_c \), \( B_b \) be the corresponding basic block in the buggy program \( P_b \), and \( M \) be a valid variable mapping between \( P_c \) and \( P_b \). A specification of \( B_b \) is inferred as a set of input-output state pairs \( \{ (I^b_j, O^b_j) \}_{j \in 1..r} \) such that
\[
\forall j.\ \forall (x \mapsto y) \in M:\ (x \mapsto u) \in I^c_j \implies (y \mapsto u) \in I^b_j,
\]
\[
\forall j.\ \forall (x \mapsto y) \in M:\ (x \mapsto u) \in O^c_j \implies (y \mapsto u) \in O^b_j,
\]
where a state is viewed as a set of bindings of variables to values.
D. Patch Synthesis
Before repairing a basic block \( B_b \) in \( P_b \), we verify the correctness of \( B_b \) by collecting the inputs and their corresponding outputs of \( B_b \) and comparing them with the inputs and expected outputs in its inferred specification. Formally speaking, we run \( P_b \) on the test-suite \( T \) to collect a set of input-output pairs of \( B_b \). We say \( B_b \) is incorrect if there exist a collected pair \( (I, O) \) and a pair \( (I', O') \) in the inferred specification such that \( I = I' \) and \( O \neq O' \).
If \( B_b \) is incorrect, we attempt to repair it. If either \( B_c \) or \( B_b \) is an empty basic block, we fix \( B_b \) either by generating an empty block as its patch (when \( B_c \) is empty), or by using the valid variable mapping to translate \( B_c \) into a patch of \( B_b \) (when \( B_b \) is empty). In other words, we replace all variable names in \( B_c \) with their corresponding variable names according to the valid variable mapping.
If \( B_c \) and \( B_b \) are not empty, we use a program synthesis technique to generate a patch for \( B_b \) based on its specification. Given a set of suspicious lines, we produce a partial program with holes inserted in buggy lines. We generate expression candidates for each hole. Our goal is to fill holes with expressions that enable the block to satisfy the specification.
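A minimal sketch of such template-based enumerative synthesis for a single condition hole; the templates and specification below are illustrative, not the tool's actual search space:

```python
from itertools import product

def synthesize_condition(templates, variables, spec):
    """Enumerative synthesis sketch: instantiate condition templates with
    variable names and return the first candidate whose outcome matches
    the expected outcome on every input state of the specification."""
    for tmpl in templates:
        holes = tmpl.count("{}")
        for vs in product(variables, repeat=holes):
            cand = tmpl.format(*vs)
            try:
                if all(bool(eval(cand, {}, dict(env))) == expected
                       for env, expected in spec):
                    return cand
            except Exception:   # ill-typed candidate, e.g. indexing an int
                continue
    return None

# Input states and the expected outcome of the repaired condition
spec = [({"e": 2, "lst": [1, 2, 3], "j": 1}, True),    # 2 <= 2
        ({"e": 3, "lst": [4, 5, 6], "j": 0}, True),    # 3 <= 4
        ({"e": 2, "lst": [1, 2, 3], "j": 0}, False)]   # 2 <= 1
templates = ["{} < {}[{}]", "{} <= {}[{}]"]            # mined syntax patterns
print(synthesize_condition(templates, ["e", "lst", "j"], spec))
```

On this toy specification the search rules out every instantiation of the first template and returns the repaired condition `e <= lst[j]`.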
**Definition 8. Block Patch Synthesis.** Let \( B \) be an incorrect basic block and \( L = \{ l_1, \ldots, l_n \} \) be a set of suspicious buggy lines in \( B \). Let \( \mathcal{P}(B, L) \) be the partial block obtained by inserting a hole at every line in \( L \). Let \( S_{l} \) be a set of expression candidates for the hole at line \( l \in L \), and let \( C : S_{l_1} \times \cdots \times S_{l_n} \times L \rightarrow \mathbb{R} \) be a cost function. Our aim is to find a repair \( (s_1, \ldots, s_n) \in S_{l_1} \times \cdots \times S_{l_n} \) that (i) fills \( \mathcal{P}(B, L) \) so that the resulting block satisfies the specification, and (ii) minimizes \( C(s_1, \ldots, s_n, L) \) among all such blocks.
Typically, program repair techniques identify suspicious lines via statistical fault localization. Since such techniques may not be accurate on students' submissions, which are often severely incorrect, we instead enumerate all subsets of lines in \( B_b \) as candidate sets of suspicious buggy lines, in ascending order of size, until we find a patch.
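Assuming lines are identified by line numbers, the size-ordered enumeration can be sketched with `itertools.combinations`:

```python
from itertools import combinations

def suspicious_line_sets(lines):
    """Enumerate all non-empty subsets of lines in ascending order of
    size, so that smaller (cheaper) candidate fixes are tried first."""
    for k in range(1, len(lines) + 1):
        for subset in combinations(lines, k):
            yield set(subset)
```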
A simple approach to generate a patch is to enumerate all block candidates by filling in holes in the partial block with all combinations of expressions. However, the search space might be huge, suffering from a combinatorial explosion as the number of holes grows. To mitigate this issue, we perform test-equivalence analysis [9] when searching for a patch. In a nutshell, we partition candidates into test-equivalence classes. For each class, only one representative patch is executed and verified, thereby reducing the number of test executions.
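A simplified sketch of the partitioning idea (grouping over all specification inputs at once rather than per input-output pair; `evaluate` is an assumed interpreter hook, not part of the paper's implementation):

```python
def partition_by_test_equivalence(candidates, inputs, evaluate):
    """Group candidate blocks whose evaluation agrees on every
    specification input; only one representative per class then needs
    a full test run, reducing the number of test executions."""
    classes = {}
    for cand in candidates:
        # The observable behavior on all inputs acts as the signature.
        signature = tuple(evaluate(cand, inp) for inp in inputs)
        classes.setdefault(signature, []).append(cand)
    return list(classes.values())
```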
**Definition 9. Test-Equivalence Relation.** Let \( \mathbb{B} \) be a set of block candidates, and \( \alpha \) be an input-output pair in the correct specification of \( B \). A test-equivalence relation on \( \alpha \) is an equivalence relation \( \leftrightarrow_{\alpha} \subseteq \mathbb{B} \times \mathbb{B} \) such that if \( B_1 \leftrightarrow_{\alpha} B_2 \), then \( B_1 \) and \( B_2 \) both pass or both fail \( \alpha \).
The search space of expression candidates for each hole is constructed based on expression templates and operator/variable mutation. An expression template is a syntax pattern [10] in which variable names are abstracted away (i.e., replaced by wildcards). Expression templates are extracted from expressions in correct programs. Formally, let \( e = (e_1, \ldots, e_n) \) be an expression, where \( e_i \) denotes the \( i \)-th token of \( e \), and let \( V \) be a set of variable names. The expression template of \( e \) is the sequence of tokens \( (e'_1, \ldots, e'_n) \), where \( e'_i = * \) iff \( e_i \in V \), and \( e'_i = e_i \) otherwise. Given the set of variable names of the buggy program, a space of candidate expressions is generated by assigning a variable name to each wildcard.
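A sketch of template extraction and instantiation over tokenized expressions (helper names are ours; unlike the paper's formulation, repeated variable names across wildcards are allowed here for simplicity):

```python
from itertools import product

def make_template(tokens, variables):
    """Abstract away variable names: replace each variable token of
    the expression with the wildcard '*'."""
    return ["*" if t in variables else t for t in tokens]

def instantiate(template, buggy_vars):
    """Generate candidate expressions by filling each wildcard with a
    variable name drawn from the buggy program."""
    holes = [i for i, t in enumerate(template) if t == "*"]
    for choice in product(buggy_vars, repeat=len(holes)):
        cand = list(template)
        for i, name in zip(holes, choice):
            cand[i] = name
        yield cand
```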
We also generate a space of candidate expressions by mutating operators or variable names of the suspicious expressions in the buggy program. Let \( e = (e_1, \ldots, e_n) \) be an expression, where \( e_i \) denotes the \( i \)-th token of \( e \). We construct candidate expressions \( e' = (e'_1, \ldots, e'_n) \) that differ from \( e \) in exactly one token: \( e'_j = e_j \) for all \( j \neq k \), and \( e'_k \neq e_k \), where \( e'_k \) is another variable if \( e_k \) is a variable, and another operator if \( e_k \) is an operator.
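The one-token mutation can be sketched as follows (the operator set shown is illustrative, not the tool's actual set):

```python
# Illustrative operator pool; the real tool's set is not specified here.
OPERATORS = {"+", "-", "*", "//", "<", "<=", ">", ">="}

def mutate(tokens, variables):
    """Generate candidate expressions differing from the suspicious
    expression in exactly one token: a variable is replaced by another
    variable, an operator by another operator."""
    for k, tok in enumerate(tokens):
        if tok in variables:
            pool = variables - {tok}
        elif tok in OPERATORS:
            pool = OPERATORS - {tok}
        else:
            continue  # constants and other tokens are left unchanged
        for repl in sorted(pool):
            yield tokens[:k] + [repl] + tokens[k + 1:]
```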
V. DATASET AND EXPERIMENTAL SETUP
We choose Clara [3], one of the most recent and most closely related feedback generation approaches with a publicly available implementation1, as the baseline to compare our approach against. Clara [3] was evaluated on a dataset similar to that of AutoGrader [5], consisting of student attempts from an MITx MOOC [11]. However, this dataset is not publicly available.
Instead, we evaluate both Clara and our Refactory tool on real student submissions collected from an introductory Python programming course offered at the authors' university (National University of Singapore). The course was taken by 361 students, who attempted a large number of programming assignments throughout the semester.
1https://github.com/iradicek/CLARA
Students were allowed to submit multiple attempts, only the last of which was graded. On each attempt, students received the test-suite evaluation results as feedback.
From these assignments, we filter out attempts that contain syntax errors, contain a single basic block (trivial assignments), or use Python language features unsupported by the implementations of Refactory or Clara (such as lambda functions, exception handling, or object-oriented programming concepts). After filtering, 19 assignments remain, from which we selected 5 assignments for the initial evaluation and for crafting our refactoring rules. In total, 2,442 correct submissions and 1,783 incorrect student attempts form our dataset, along with the instructor-designed test-suite and reference program. The dataset is described in Table II. Our dataset and the Refactory repair tool are publicly released to aid further research2.
To test the generality of our refactoring rules and block repair algorithm, we report results on the remaining 14 assignments containing 6,448 correct submissions and 7,290 incorrect student attempts as well.
All experiments are conducted on an Intel® Core™ i7-4770 CPU with 8GB RAM, running Ubuntu 18.10. Clara has an offline phase for clustering the correct programs, for which we set a five-minute timeout per assignment. For the online phase, Clara and Refactory are configured to run in single-threaded mode with a one-minute timeout to repair each incorrect submission.
VI. EVALUATION
To evaluate the effectiveness of Refactory, we aim at answering the following research questions:
**RQ1** Given a large number of correct submissions, how effectively can Refactory repair incorrect submissions?
**RQ2** Given a small number of correct submissions, how effectively can Refactory repair incorrect submissions?
**RQ1 and RQ2** investigate the applicability of our approach to assignments with different numbers of correct submissions. Existing data-driven approaches such as Clara and Sarfgen are designed for assignments with a large number of correct submissions. We use refactoring rules to generate new correct submissions, which makes our approach applicable even when only a small number of correct submissions is available.
To answer RQ1, we evaluate Refactory and Clara on the entire dataset of correct programs. To answer RQ2, we evaluate them on downsampled datasets, where the fraction of correct submissions provided as input to the tools is varied from 100% to 0% (in the latter case, only the reference program is used).
**Explanation of Table II:** Table II shows the results on the 5 assignments selected for the initial evaluation. Clara generates repairs for 71.28% of the 1,783 incorrect submissions, taking 13.6 seconds on average per repair. In comparison, Refactory generates repairs for 90.8% of the incorrect submissions, requiring 5.5 seconds on average per repair. Refactory thus repairs a significantly larger fraction of incorrect submissions while requiring less time per repair.
This high repair rate of 90.8% is made possible by our refactoring and structure mutation phases. As seen from Table II, only 64.5% of incorrect programs have a matching correct submission with exactly the same control-flow structure. By applying our refactoring rules, we generate new correct programs, thereby increasing the CFG match rate to 81.4%. The remaining incorrect programs, which do not have a CFG match with any correct program, undergo structure mutation during the online phase, bringing our overall repair rate to 90.8%. In comparison, almost half of Clara's failures occur because the one-minute timeout is exceeded; the rest occur when Clara is unable to find a matching correct submission with the same looping structure as the incorrect submission.
We also report the Relative Patch Size (RPS) metric to further evaluate the generated patches. Patch size is defined as the tree edit distance (TED) between the abstract syntax tree of the given buggy program ($\mathrm{AST}_b$) and that of the repaired program ($\mathrm{AST}_r$) generated by the tool. Relative Patch Size, as defined by Clara [3], normalizes the patch size by the size of the original buggy program's AST: $\mathit{RPS} = \mathit{TED}(\mathrm{AST}_b, \mathrm{AST}_r) / \mathit{Size}(\mathrm{AST}_b)$. As shown in Table II, repairs generated by Refactory have a smaller average RPS than those generated by Clara (for the majority of incorrect attempts), which indicates that our repairs are smaller and hence more likely to help students rectify the bugs in their attempts.
**Explanation of Figure 5:** Fig. 5a shows the average repair rate achieved by both tools for various sampling rate of correct submissions provided as input to the tools. The repair rate of Refactory is relatively consistent when the sampling rate is reduced while Clara’s repair rate drops significantly with decrease in sampling rate. For example, when sampling
TABLE II: Results on five programming assignments. “% CFG Match” is the percentage of incorrect submissions for which correct submissions with matching control-flow structure are found without refactoring (W/O \( R \)) and with refactoring (W/ \( R \)). Repair rate, average time taken and relative patch size per assignment are shown for Refactory (and for Clara in brackets).
<table>
<thead>
<tr>
<th>ID</th>
<th>Description</th>
<th>Avg. #Lines of Code</th>
<th>#Correct Attempt</th>
<th>#Incorrect Attempt</th>
<th>%CFG Match W/O ( R )</th>
<th>%CFG Match W/ ( R )</th>
<th>Repair Rate</th>
<th>Avg. Time Taken (sec)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Sequential search</td>
<td>10</td>
<td>768</td>
<td>575</td>
<td>80.00%</td>
<td>86.78%</td>
<td>98.96% (81.91%)</td>
<td>3.5 (12.1)</td>
</tr>
<tr>
<td>2</td>
<td>Unique dates/months</td>
<td>28</td>
<td>291</td>
<td>435</td>
<td>33.33%</td>
<td>68.28%</td>
<td>78.16% (42.07%)</td>
<td>4.8 (17.4)</td>
</tr>
<tr>
<td>3</td>
<td>Duplicate elimination</td>
<td>7</td>
<td>546</td>
<td>308</td>
<td>87.34%</td>
<td>89.61%</td>
<td>97.40% (92.86%)</td>
<td>4.7 (8.5)</td>
</tr>
<tr>
<td>4</td>
<td>Sorting tuples</td>
<td>9</td>
<td>419</td>
<td>357</td>
<td>52.94%</td>
<td>81.23%</td>
<td>88.24% (64.43%)</td>
<td>8.7 (20.6)</td>
</tr>
<tr>
<td>5</td>
<td>Top-k elements</td>
<td>11</td>
<td>418</td>
<td>108</td>
<td>80.56%</td>
<td>83.33%</td>
<td>87.96% (93.52%)</td>
<td>13.1 (11.8)</td>
</tr>
<tr>
<td>1–5 overall</td>
<td></td>
<td>14</td>
<td>2442</td>
<td>1783</td>
<td>64.50%</td>
<td>81.44%</td>
<td>90.80% (71.28%)</td>
<td>5.5 (13.6)</td>
</tr>
</tbody>
</table>
2https://github.com/githubhuyang/refactory
Formally, let $x_i$ represent the RPS of the $i$-th patch generated by a tool across all assignments. A Gaussian kernel density estimator [12] is then used to generate the probability density function of RPS values from the individual observations. We estimate the density as $f(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right)$, where $n$ is the number of observations, $h$ is a smoothing parameter, and $K$ is the Gaussian kernel function. In Fig. 5c, the estimated density (y-axis) of the patches generated by Refactory is higher than Clara's when RPS (x-axis) is smaller than 0.9; in other words, the patches generated by our tool are concentrated towards small RPS values.
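A pure-Python sketch of this estimator (helper name is ours; in practice the bandwidth $h$ would be chosen by a bandwidth-selection rule):

```python
import math

def gaussian_kde(samples, h):
    """Return the estimated density f(x) = (1/(n*h)) * sum_i
    K((x - x_i) / h), with K the standard Gaussian kernel."""
    n = len(samples)

    def kernel(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

    def f(x):
        return sum(kernel((x - xi) / h) for xi in samples) / (n * h)

    return f
```

For example, with a single observation at 0 and $h = 1$, the density at 0 is simply $K(0) = 1/\sqrt{2\pi} \approx 0.399$.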
**Results on Full Dataset:** To demonstrate that our manually crafted refactoring rules do not over-fit the five initially selected assignments reported in Table II, we report additional results on the 14 held-out assignments (see Section V). On these 14 new assignments, Refactory achieves a repair rate of 71.65% on 7,290 incorrect submissions within 6.4 seconds on average, and the generated repairs have an average relative patch size of 0.44. In contrast, Clara repairs 30.4% of the 7,290 incorrect submissions in 15 seconds on average, with a relative patch size of 0.82. Furthermore, our refactoring rules improve the overall CFG match rate from 55.47% (W/O $R$) to 67.08% (W/ $R$). Overall, our tool achieves high accuracy with small patch sizes on the full set of 19 assignments.
VII. RELATED WORK
In this section, we briefly review existing state-of-the-art approaches targeting introductory programming assignments, and clarify the novelties provided by our approach.
A. Automated Program Repair
The field of automated program repair [13], where changes are suggested to the program source code for fixing observable errors and vulnerabilities, has witnessed an explosive growth in recent years. GenProg [14] uses search-based techniques to navigate the space of edits, so as to automatically find an edit where the edited program passes a given test-suite. Learning or pattern-based approaches have been successfully applied in program repair, e.g. finding patterns of human patches and using them in program repair [15], or using machine learning techniques to rank patch candidates [16].
SimFix [19] mines repairs from similar code and past patches. In principle, it could be applied to correct student assignments when a history of previous corrections and a sufficient number of similar solutions are available. In contrast, our approach is designed to work when only a few correct solutions are available, without relying on a history of previous corrections. From a single correct solution we can generate several correct solutions, one of which may match the control flow of the given buggy program, after which we resort to basic block synthesis. Thus, our approach...
```python
def swap(lst, i, j):
    tmp = lst[i]
    lst[i] = lst[j]
    lst[j] = tmp
```

(a) An incorrect program (b) A correct program

Fig. 6: Function to swap two elements in a list.
is more applicable in pedagogical scenarios, e.g. when a newly crafted assignment is given by an instructor.
Our work on using a reference correct solution may appear superficially similar to the recent paper [20]. However, [20] employs simultaneous symbolic analysis of both buggy and correct programs to produce provably correct repairs. Similar to other recent works on repair in education [3], [4], we do not give formal guarantees about the repairs generated by our approach. Instead we use refactoring and synthesis to efficiently represent/navigate the space of patches.
B. Feedback Generation
Automated program repair tools, originally designed to work on large codebases for experienced developers, have been used with limited success to provide feedback to students on introductory programming assignments [1]. Hence, new tools have been proposed in the literature that specifically target novice programmers and their mistakes. AutoGrader [5] proposes a program-synthesis-based approach which takes a reference solution and a manually provided error model to generate repairs for incorrect programs. Refazer [7] learns simple syntactic program transformations from historical edit examples, and applies AST rewrite rules to matching incorrect programs to repair them automatically.
Clara [3] and Sarfgen [4] are two recent approaches related to our work that automatically generate complex patches for incorrect student programs. Given an incorrect program attempt, Clara and Sarfgen rely on finding a correct solution with the same looping and control-flow structure, respectively. This assumption poses a serious challenge when there is no access to a diverse set of existing correct solutions, for example when a newly crafted assignment is given by an instructor. To address this issue, our approach refactors one or more correct solutions to generate new, semantically equivalent correct solutions with different looping/control-flow structures. In addition, as noted in our running-time experiments, Clara suffers from a scalability problem due to its use of Integer Linear Programming. We are unable to compare our run-time and accuracy with Sarfgen since its implementation has not been publicly released; moreover, Sarfgen targets C# while our tool works only for Python programs.
Consider the incorrect student attempt at the swap function in Fig. 6a. The student has made a mistake in swapping two elements of a list, \( \text{lst}[i] \) and \( \text{lst}[j] \), through the use of an intermediate \( \text{tmp} \) variable. Given the correct program shown in Fig. 6b as input, our Refactory approach generates the minimal repair of modifying the single line #2 from \( \text{tmp} = \text{lst}[i] \) to \( \text{tmp} = \text{lst}[j] \), by replacing each line with a hole and synthesizing expression candidates. Clara, in contrast, generates a sub-optimal repair at the block level by borrowing the two differing lines (3 and 4) from the correct program.
VIII. Threats to Validity
Our choice of refactoring rules is by no means exhaustive; it primarily targets conditionals, looping structures, and their combinations, which constitute the majority of control-flow mistakes made by students in introductory programming classes. While we report experimental results on 14 additional assignments unseen during the crafting of the refactoring rules, in the future we plan to rigorously test our tool on a larger variety of programs collated from other publicly available datasets.
Our implementation currently supports only structured-programming control-flow constructs. In the future, we plan to extend our approach to handle object-oriented programming concepts. Additional complex features available in Python, such as list comprehensions or lambda functions, are currently not handled, since novice students rarely use such advanced concepts.
The correctness of repairs is verified against the instructor-provided test-suite, which is a manually designed and hence incomplete specification; a repaired program that passes all tests may therefore still deviate from the intended behavior.
Finally, we note that while our implementation targets Python programs, our approach based on refactoring and block repair is not restricted to Python.
IX. Discussion
The recent past has witnessed an explosion of works on automated feedback generation for introductory programming assignments by means of program repair [1]–[7]. At a general level, most of these works search the space of program edits, either to generate feedback for students or to help grade assignments automatically. Due to the large variety of coding errors in programming assignments written by novice programmers, the search space of edits between a given incorrect program and a correct program tends to be huge [1]. Many past works have contributed immensely to the navigation of this search space of edits to enable feedback generation for students. In our work, we have focused first on the representation of the search space, which prompted our refactoring phase, and then systematized the navigation of possible patches of a basic block by partitioning the candidate patches using test-equivalence analysis. Such a representation and navigation of the search space also allows us to work in various set-ups, including those where many correct solutions are not available.
Our efforts are embodied in Refactory, a customized Python repair system. We have employed the repair system extensively on a large dataset of thousands of student submissions, collected from hundreds of students enrolled in an introductory programming course.
In future work, we plan to conduct detailed user studies where the feedback from our tool can be generated live during tutorial or recitation sessions, so as to gauge the possible improvement in meeting learning outcomes.
ACKNOWLEDGMENTS
This work was supported in part by Office of Naval Research grant ONRG-NICOP-N62909-18-1-2052. This work was partially supported by the National Satellite of Excellence in Trustworthy Software Systems, funded by NRF Singapore under National Cybersecurity R&D (NCR) programme. We would like to thank Tegawendé F. Bissyandé and the anonymous reviewers of ASE for their valuable feedback.
Studying Co-evolution of Production & Test Code Using Association Rule Mining
Zeeger Lubsen, Andy Zaidman, Martin Pinzger
Report TUD-SERG-2009-014
Zeeger Lubsen
Software Improvement Group
The Netherlands
z.lubsen@sig.nl
Andy Zaidman, Martin Pinzger
Delft University of Technology
The Netherlands
{a.e.zaidman, m.pinzger}@tudelft.nl
Abstract
Unit tests are generally acknowledged as an important aid to produce high quality code, as they provide quick feedback to developers on the correctness of their code. In order to achieve high quality, well-maintained tests are needed. Ideally, tests co-evolve with the production code to test changes as soon as possible. In this paper, we explore an approach to determine whether production and test code co-evolve synchronously. Our approach is based on applying association rule mining to the change history of product and test code classes. Based on these co-evolution rules, we introduce a number of measures to assess the co-evolution of product and test code classes. Through two case studies, one with an open source and another one with an industrial software system, we show that association rule mining and our set of measures allows one to assess the co-evolution of product and test code in a software project and, moreover, to uncover the distribution of programmer effort over pure coding, pure testing, or a more test-driven-like practice.
1 Introduction
The development of high quality software systems is a complex process; maintaining an existing system is often no less challenging, an insight which Lehman formulated in his Laws of Software Evolution [11]. Runeson on the other hand notes that automated unit testing can be an effective countermeasure for difficulties encountered during software maintenance [14]. Also Test-Driven Development (TDD) [3] and test-driven refactoring [13] can play an important role here.
The quality of the tests (and, by consequence, their added value for maintenance activities) greatly depends on the effort that developers put into writing and maintaining tests. Typically, the quality of a test suite is expressed by code coverage: the percentage of code that is exercised when the test suite is executed [4]. Code coverage, however, is a shallow measure of test quality, as it expresses that code is executed, but not how well something is tested. In this context, one should think of (1) different input values (boundary values) and (2) the number of assertions [4, 19]. Furthermore, code coverage does not provide a good indicator of the long-term quality or “test health” of a test suite. As such, we have no insight into (1) how well test code was adapted to previous changes in the production code, (2) the current structure of the test code, and (3) how easy it will be to perform maintenance on both the production and the test code in the future.
This missing insight has motivated us to investigate the co-evolution of production and test code. In our previous work, we introduced the Change History View [19] to observe and perform a qualitative analysis of the co-evolution of production and test code and, moreover, to uncover the distribution of programmer effort over pure coding, pure testing, or a more test-driven-like practice. In this paper, we address the following research questions:
RQ1: Can association rule mining be used to find evidence of co-evolution of production and test code?
RQ2: Following RQ1, can we find measures to assess the extent to which product and test code co-evolves?
RQ3: Can different patterns of co-evolution be observed in distinct settings, for example, open source versus industrial software systems?
We address these research questions by means of two case studies. The first case study is on Checkstyle, an open source system that checks whether code adheres to a coding standard. The second case study is on an industrial software system from the Software Improvement Group (SIG).
The structure of this paper is as follows: in Section 2 we introduce association rule mining and explain our specific approach. Sections 3.1 and 3.2 deal with our two case studies, respectively Checkstyle and the industrial system provided by SIG. Section 4 deals with threats to validity. Section 5 relates our work to other work in the field and we present our conclusions and future work in Section 6.
2 Production and test class co-evolution
The application of data mining techniques in software engineering research has become popular [17]. This can partly be explained by the fact that software engineers are looking at studying large sets of data for which efficient analysis methods are required. Within the realm of data mining, we have chosen to use association rule mining, because this technique allows us to identify instances of logical coupling between classes [20], in particular between production and test classes. For this paper, production code/classes refer to Java classes and test code/classes to JUnit test classes.

The basic idea of our approach is to use association rule mining to study the co-evolution of test and production code. The change history of test and production classes, in particular commit transactions, forms the input to our approach. Information about commit transactions is obtained from versioning repositories, such as the Concurrent Versions System (CVS) or Subversion (SVN). In the following, we provide background information on association rule mining and the set of metrics that we use to study the co-evolution of production and test classes.
2.1 Association rule mining
Formally, an association rule is a statistical description of the co-occurrence, in the change history, of the elements that constitute the rule. Agrawal et al. define it as [1]:

Definition 1 Given a set of items I = {I₁, I₂, ..., Iₘ} and a database of transactions D = {t₁, t₂, ..., tₙ}, where tᵢ = {Iᵢ₁, Iᵢ₂, ..., Iᵢₖ} and Iᵢⱼ ∈ I, an association rule is an implication of the form A ⇒ B, where A, B ⊂ I are sets of items called itemsets and A ∩ B = ∅.
The left-hand side of the implication is called the antecedent, and the right-hand side is called the consequent of the rule. An association rule expresses that the occurrence of A in a transaction statistically implies the presence of B in the same transaction with some probability. It is important to note that an association rule does not express a causal relation but merely a statistical one, as the rule does not describe a proven cause-effect relation.

In our approach, we consider association rules that express a binary relation between classes, as we are looking for relations between individual production classes (PC) and test classes (TC). For example, consider the SVN transaction {TC₁, PC₁, PC₂} committing changes to the test class TC₁ and the two production classes PC₁ and PC₂. Computing all pairs, we get the following binary association rules: {TC₁ ⇒ PC₁}, {PC₁ ⇒ TC₁}, {PC₂ ⇒ TC₁}, {TC₁ ⇒ PC₂}, {PC₁ ⇒ PC₂}, {PC₂ ⇒ PC₁}. Formally, for a transaction involving n classes we obtain n × (n − 1) binary association rules. We take inverse association rules into account because an inverse rule can have a different probability, as we explain below.
2.2 Co-evolution rules
In order to analyze the testing practices for an entire system, we need a high-level overview of the development and testing activities of the software system. For that, we classify binary association rules into rules that deal (1) solely with production code, (2) solely with test code, and (3) with both production and test code. Table 1 shows this classification in detail.
<table>
<thead>
<tr>
<th>Class</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>TOTAL</td>
<td>The collection of all found association rules.</td>
</tr>
<tr>
<td>PROD</td>
<td>{ProductionClass ⇒ ProductionClass}. Rules that only associate production classes.</td>
</tr>
<tr>
<td>TEST</td>
<td>{TestClass ⇒ TestClass}. Rules that only associate test classes.</td>
</tr>
<tr>
<td>PT</td>
<td>Rules that associate production-test pairs, which we can subdivide into:</td>
</tr>
<tr>
<td>P2T</td>
<td>{ProductionClass ⇒ TestClass}. These rules express that a change in a production class implies a change in a test class with some probability.</td>
</tr>
<tr>
<td>T2P</td>
<td>{TestClass ⇒ ProductionClass}. The inverse of P2T.</td>
</tr>
<tr>
<td>MP2T</td>
<td>Matching production-to-test rules; P2T rules where the antecedent and the consequent can be matched to belong together as class-under-test and unit test.</td>
</tr>
<tr>
<td>mT2P</td>
<td>The counterpart of MP2T.</td>
</tr>
</tbody>
</table>
Table 1. Classification of association rules.
While PT comprises association rules between production and test code, the sub-classes refine this set by taking the direction of the rules into account. The direction of rules comes into play when calculating the interestingness of an association rule. Furthermore, we introduce two categories (mP2T and mT2P) containing rules that denote commit transactions in which a test class has been matched to a production class.
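The classification of Table 1 can be sketched as follows; the `is_test` and `matches` predicates are assumptions standing in for the naming-convention heuristic described later (e.g., `StringTest` tests `String`):

```python
def classify(rule, is_test, matches):
    """Classify a binary rule (antecedent, consequent) according to Table 1.

    is_test(c):     whether class c is a test class (assumption, e.g. name suffix)
    matches(p, t):  whether test class t is the unit test of production class p
    Returns the set of rule-class labels the rule belongs to.
    """
    a, c = rule
    labels = {"TOTAL"}
    if not is_test(a) and not is_test(c):
        labels.add("PROD")
    elif is_test(a) and is_test(c):
        labels.add("TEST")
    else:
        labels.add("PT")
        if is_test(c):                      # production -> test
            labels.add("P2T")
            if matches(a, c):
                labels.add("mP2T")
        else:                               # test -> production
            labels.add("T2P")
            if matches(c, a):
                labels.add("mT2P")
    return labels

is_test = lambda c: c.endswith("Test")
matches = lambda p, t: t == p + "Test"
assert classify(("String", "StringTest"), is_test, matches) == {"TOTAL", "PT", "P2T", "mP2T"}
assert classify(("PC1", "PC2"), is_test, matches) == {"TOTAL", "PROD"}
```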
2.3 Co-evolution metrics
Typically, association rule mining is used to search for rules that are “interesting” or “surprising”. In our case, we seek a global view on the entire change history of the source files (i.e., top-level Java classes) of a software project. As such, we are mainly interested in the total number of rules that associate production and test classes and in how “interesting” these rules are, i.e., how strong their statistical certainty is. In the following we explore a number of standard rule significance and interest measures to quantify the co-evolution between production and test classes in a software system.
The metrics presented in Table 2 allow us to reason about the significance and interest of single association rules. To get an overall understanding of how production and test code co-evolve in a software system, we use straightforward descriptive statistics with boxplots. Boxplots provide a five-number summary of the distribution of the significance and interest metric values. The sample minimum and maximum define the range of the values, while the median designates the central tendency of the distribution. The lower and upper quartiles describe the spread of the distribution and, together with the median, its skewness.
These metric values help us in interpreting the interestingness of the association rule classes that we have defined in Section 2.2. If a rule appears in almost all commits, its support is close to 100%. While this is unlikely to happen for all commits, finding outliers that exhibit a support close to 100% is interesting, e.g., because they indicate a possible bad design choice if two classes have been changed together that often. The confidence metric is tightly related to the concept of co-evolution: it represents the certainty with which one can expect, for example, that when a production class is changed, the corresponding test class is changed as well. Confidence values higher than 0.5 give a clear indication of co-evolution between classes. The interest becomes higher the more frequently the rule holds. As for conviction, high-quality rules (those that hold 100% of the time) have a value of $\infty$, while less interesting rules have a value that approaches 1 (rules between completely unrelated items have a metric value of 1) [5].
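The four measures can be computed directly from the commit transactions. The sketch below uses the standard association-rule definitions (support, confidence, lift/interest, conviction), which match the behavior described above ($\infty$ for rules that always hold, 1 for unrelated items); it is an illustration, not the authors' implementation:

```python
def rule_metrics(transactions, rule):
    """Support, confidence, interest (lift) and conviction of rule X => Y,
    computed over a list of commit transactions (sets of class names)."""
    x, y = rule
    n = len(transactions)
    supp_x  = sum(x in t for t in transactions) / n
    supp_y  = sum(y in t for t in transactions) / n
    supp_xy = sum(x in t and y in t for t in transactions) / n
    confidence = supp_xy / supp_x
    interest   = supp_xy / (supp_x * supp_y)          # symmetric in X and Y
    conviction = (float("inf") if confidence == 1
                  else (1 - supp_y) / (1 - confidence))
    return supp_xy, confidence, interest, conviction

commits = [{"PC1", "TC1"}, {"PC1", "TC1"}, {"PC1"}, {"PC2"}]
supp, conf, lift, conv = rule_metrics(commits, ("PC1", "TC1"))
assert supp == 0.5        # PC1 and TC1 co-occur in 2 of 4 commits
assert conf == 2 / 3      # TC1 changes in 2 of the 3 commits touching PC1
```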
Co-evolution of production and test classes is indicated by rules in PT and its subclasses with significant support, high confidence, interest, and conviction. Separate evolution of product and test classes is indicated by rules in PROD and TEST with significant support, high confidence, interest, and conviction. If the majority of PROD, TEST, and PT rules has low support, we conclude that there is no structural co-evolution between classes.
In addition to the association rule interest measures, we introduce several metrics to measure the extent to which product classes are covered by test classes. The set of metrics is described in Table 3.
<table>
<thead>
<tr>
<th>Metric</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>PCC</td>
<td>Production class coverage. The average number of test classes that are changed per changed production code class. This number is calculated by $\frac{\#\text{P2T rules}}{\#\text{production classes}}$.</td>
</tr>
<tr>
<td>mPCC</td>
<td>Matching production class coverage. The percentage of production classes that co-evolve with their matched unit test class. This number is calculated by $\frac{\#\text{mP2T rules}}{\#\text{production classes}}$.</td>
</tr>
<tr>
<td>TCC</td>
<td>Test class coverage. The average number of production code classes that are changed per changed unit test class. This number is calculated by $\frac{\#\text{T2P rules}}{\#\text{test classes}}$.</td>
</tr>
<tr>
<td>mTCC</td>
<td>Matching test class coverage. The percentage of test classes that co-evolve with their matched production class-under-test. This number is calculated by $\frac{\#\text{mT2P rules}}{\#\text{test classes}}$.</td>
</tr>
</tbody>
</table>
Table 3. Product-test class coverage metrics.
<table>
<thead>
<tr>
<th>Metric</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Support</td>
<td>The fraction of all commit transactions that contain both the antecedent and the consequent: $supp(X \Rightarrow Y) = supp(X \cup Y)$.</td>
</tr>
<tr>
<td>Confidence</td>
<td>The conditional probability that a transaction containing $X$ also contains $Y$: $conf(X \Rightarrow Y) = \frac{supp(X \cup Y)}{supp(X)}$.</td>
</tr>
<tr>
<td>Interest</td>
<td>The correlation between $X$ and $Y$: $\frac{supp(X \cup Y)}{supp(X) \times supp(Y)}$. This measure is symmetric in $X$ and $Y$.</td>
</tr>
<tr>
<td>Conviction</td>
<td>$\frac{1 - supp(Y)}{1 - conf(X \Rightarrow Y)}$; $\infty$ for rules that always hold, approaching 1 for completely unrelated items [5].</td>
</tr>
</tbody>
</table>
Table 2. Metrics for individual association rules.
These coverage metrics allow us to gain insight into the testing strategy. More precisely, high values of PCC
and TCC indicate that many production class and test class pairs are changed together. On the other hand, high values of mPCC and mTCC indicate that the co-change is structural.
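Under one plausible reading of the definitions in Table 3 (co-change pairs counted as distinct (production, test) rule pairs, with the matched subset determined by naming conventions), the coverage metrics can be sketched as:

```python
def coverage_metrics(p2t_pairs, matched_pairs, n_prod, n_test):
    """Coverage metrics of Table 3 (illustrative reconstruction).

    p2t_pairs:     distinct (production, test) co-change pairs
    matched_pairs: the subset related by naming conventions
    """
    pcc  = len(p2t_pairs) / n_prod   # avg. co-changed test classes per production class
    tcc  = len(p2t_pairs) / n_test   # avg. co-changed production classes per test class
    mpcc = len({p for p, _ in matched_pairs}) / n_prod   # fraction of production classes
    mtcc = len({t for _, t in matched_pairs}) / n_test   # fraction of test classes
    return pcc, mpcc, tcc, mtcc

pairs = [("A", "ATest"), ("A", "BTest"), ("B", "BTest")]
matched = [("A", "ATest"), ("B", "BTest")]
pcc, mpcc, tcc, mtcc = coverage_metrics(pairs, matched, n_prod=4, n_test=2)
assert pcc == 0.75 and mpcc == 0.5
```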
3 Experiments
The main goal of our experiments is to evaluate the applicability of the proposed co-evolution metrics to answer the research questions stated in Section 1. For the evaluation, we performed two case studies, one with the open source system Checkstyle and another one with an industrial software analysis tool from the Software Improvement Group (SIG). For each system we compute the association rule classes and the set of co-evolution metrics. We evaluate and validate our metrics by comparing them with the results obtained in our previous experiments, in which we used the Change History View technique and feedback from developers to reason about the co-evolution of production and test classes [19]. In short, the Change History View depicts the evolution of the production code and the test code over time (see, for example, Figure 1). The X-axis represents time and the Y-axis shows the Java classes. Furthermore, we make a distinction between the creation/change of production code (red/blue dot) and the creation/change of test code (green/yellow dot). A unit test that can be associated to a production code class through naming conventions is placed on the same horizontal line. The usefulness of the Change History View has been demonstrated and validated in previous research [16, 19]. Together with the feedback from the developers (in the case of the SIG case study), it provides the basis for the evaluation and validation of our co-evolution metrics.
In the following, we first present the results from the Change History View analysis, which are then compared with and discussed alongside the co-evolution metrics. A summary of the results is given at the end of this section.
3.1 Case study 1: Checkstyle
Checkstyle is an open source coding standard checker for Java source code. Between June 2001 and March 2007, 2259 commits resulted in a total of 1160 Java classes, of which 797 refer to product code, and 363 are identified as a test class.
Change History View Figure 1 depicts the Change History View computed from Checkstyle’s change log data. The view shows that initially little testing was performed. After that, the system started to grow and tests were added along with new production code. Around revisions 690 and 780, two phases of pure test effort can be distinguished, and after revision 850 tests for most classes existing at that point in time were added. After these additions, we observe a significant period of pure coding with hardly any maintenance to the tests being performed. The view highlights a few recurring test phases around revisions 1380 and 2100. For the larger part of the history, tests appear to receive only minor attention from developers, as only few additions and changes to production code are accompanied or closely followed by the addition of or a change to a related test class. An exception to this behavior can be witnessed between commits 1350 and 1600, where for a small period of time new production code classes are accompanied by new unit tests. More striking are regular commits comprising a large number of files, as indicated by blue vertical bars. Most of these commits were due to code cleanups or copyright notice changes.
Figure 1. Change History View of Checkstyle.
Co-evolution rule mining The results of the classification of the association rules obtained from the 2259 commit transactions of Checkstyle are depicted in Table 4.
<table>
<tbody>
<tr>
<td>ALL(N)</td>
<td>58566</td>
<td>P2T</td>
<td>0.33%</td>
</tr>
<tr>
<td>PROD</td>
<td>98.86%</td>
<td>T2P</td>
<td>0.33%</td>
</tr>
<tr>
<td>TEST</td>
<td>0.48%</td>
<td>mP2T</td>
<td>0.09%</td>
</tr>
<tr>
<td>PT</td>
<td>0.67%</td>
<td>mT2P</td>
<td>0.09%</td>
</tr>
</tbody>
</table>
Table 4. Rule ratios for Checkstyle.
The ratio of PROD rules shows that 98.86% of the 58566 rules express an association between two production classes. We can explain this through the fact that the developers initially hardly used unit tests, even though they adopted a more test-driven development strategy over time, e.g., between commits 1350 and 1600. The first period of development thus practically only involved production code, but the several phases of pure testing effort that were observed in the Change History View (the vertical green and yellow lines in Figure 1) should have created a fair amount of TEST rules. Closer inspection, however, reveals that the testing phases that we have identified involve commits with only a few tests per commit, while many other commits contain a large number of production classes. As the change history of Checkstyle contains several recurring very large commits, many rules are generated from those commits.
Because of the large commits we expect many PROD rules to have a low interest and strength. The boxplots in Figure 2 show that over all association rule classes the support of rules is low, with the PROD rules having several outliers (shown as crosses). This indicates that most of the possible production and test class combinations occurred in only a few commit transactions.
The ratios of TEST and PT (sub-)classes are low (see Table 4), even though the Checkstyle developers appear to have adopted a decent testing practice over time; we identified a phased testing approach in the first half of the change history (green and yellow vertical bars in the Change History View), but we also saw a more test-driven approach in the latter part (red dots being covered by green dots, e.g., between commits 1350 and 1600).
Looking at the interest values, the correlation among matching production and test classes (mP2T, mT2P and mPT) is stronger than for more unrelated classes. The correlation among TEST rules is even stronger. This observation also holds for the confidence and conviction distributions, e.g., the confidence of mT2P rules shows that 75% of those rules express a conditional probability of over 50%. Note that this number alone is not enough to conclude synchronous co-evolution between production and test classes, as we do not yet know how many tests are actively maintained.
The boxplots show significantly lower values of mP2T rules for confidence and conviction. This is because (1) mP2T and mT2P rules are not symmetric for confidence and conviction, and (2) the often changing nature of production code makes the presence of a production class in a commit so trivial that no interesting statement can be made based on its presence. The values for interest of mP2T and mT2P rules are identical, because of the symmetry of the interest metric.
In contrast to confidence and conviction, the interest values for matching production and test classes (mT2P) are not evidently higher than for mP2T. This is because highly correlated (m)T2P rules are averaged out against the lowly correlated (m)P2T rules. From these results we can make the following two observations:
**Observation 1** High (median) values for TEST and relatively low (support) values for (m)PT rules originate from the co-change of test classes and indicate that testing is performed as a separate activity.
**Observation 2** Interest averages the measurements for matching rules in different directions. This causes the differences to even out, and makes interest a less specific metric.
Summarizing, we can see that most of the co-changed Checkstyle classes belong to the production code. These co-changes are mostly unintentional and caused by code cleanup activities, e.g., running Checkstyle on the Checkstyle source code. Looking at the statistics, we mainly see a large class of PROD rules, originating from some very large commits. These commits perturb our analysis somewhat. Still, going by the PT and TEST rule classes, we see some evidence both for a phased and a test-driven approach to testing. The Change History View confirms this, as there are periods where commits consist mainly of unit tests, while there are also periods in time (e.g., between commits 1350 and 1600) where we see test-driven development taking place. To see these phases in more detail, we aim to apply a sliding-window-based variant of our analysis that investigates the change history of a software system in more detail.
Co-evolution coverage in Checkstyle
We computed the co-evolution coverage measures introduced before to quantify the co-evolution of production and test code of Checkstyle. The resulting coverage measures are listed in Table 5.
<table>
<thead>
<tr>
<th>Coverage metric</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Production class coverage (PCC)</td>
<td>0.42</td>
</tr>
<tr>
<td>Matching production class coverage (mPCC)</td>
<td>0.11</td>
</tr>
<tr>
<td>Test class coverage (TCC)</td>
<td>0.38</td>
</tr>
<tr>
<td>Matching test class coverage (mTCC)</td>
<td>0.09</td>
</tr>
</tbody>
</table>
Table 5. Co-evolution coverage metrics for Checkstyle.
For Checkstyle, we see a low value for both PCC and TCC, indicating that for each production class that is changed, on average only 0.42 test classes are changed (PCC). The other way around, we see that for each test class that is changed, 0.38 production classes are changed (TCC). This indicates that co-change does not happen very frequently. If we zoom in a little more and look at how structural the co-changes are, we see that for 0.11 of the production code classes, the test counterpart that matches based on naming conventions is (potentially) also changed. Vice versa, for 0.09 of the test classes, the matching production class is also (potentially) changed.
These figures should be considered low and as such do not provide any indication that co-evolution of production and test code takes place in the case of Checkstyle.
Discussion
For Checkstyle we saw that actual software development and testing are mainly two separate activities, which is mainly evidenced through the rule ratios that we saw in Table 4. However, a possible complication that we came across when interpreting the results was the fact that there are some large commits of (mainly) production code, which dominate the rule ratios to a large extent, thereby perturbing the interpretation. These very large commits originate from automated code beautification operations (using Checkstyle). As such, a possible avenue for further research is to eliminate these large commits and see how this influences the results.
During our interpretation, we also observed large differences between mT2P and mP2T rules when studying the confidence and conviction metrics. In particular, we saw that the statistical evidence for mT2P rules was stronger than for mP2T rules. Closer inspection revealed this to be due to commits containing a larger number of production code classes than test code classes, thereby influencing the probabilities behind confidence and conviction.
Considering the average number of production and test classes that are changed together, we can say that in general not many production and test classes are co-evolved as evidenced by the very low PCC and TCC values. This is further underlined by the low mPCC and mTCC values.
3.2 Case study 2: Software Improvement Group
The industrial case study that we performed pertains to a software project from the Software Improvement Group (SIG). The SIG is a tool-based consultancy firm that is specialized in the area of quality improvement, complexity reduction and software renovation. The SIG performs static source code analysis to analyze software portfolios and to derive hard facts from software to assess the quality and complexity of a system.
For our study we investigate the development history of one of the SIG tools between April 2004 and January 2008. Over time 20 developers worked on this software project, which after about 2200 commits resulted in around 4000 classes.
Change History View
The Change History View for the industrial case is shown in Figure 3 (also see [16] for more details). From the view we see that the software project shows a steady growth curve and we also observe that code and test writing efforts are overlapping for pretty much the entire change history. Red and blue dots, indicating respectively the addition and change of production classes, are frequently followed by green or yellow dots, indicating the addition and change of unit tests respectively. We investigated the code changes and log messages behind larger commits. We found out that most of these changes correspond to refactorings involving also the test classes, and code cleanups, which did not always involve the test classes.
Co-evolution rule mining
The classification of the association rules obtained from the 2200 commit transactions of the SIG tool resulted in the ratios listed in Table 6.
<table>
<tbody>
<tr>
<td>ALL(N)</td>
<td></td>
<td>P2T</td>
<td>19.37%</td>
</tr>
<tr>
<td>PROD</td>
<td>35.15%</td>
<td>T2P</td>
<td>19.37%</td>
</tr>
<tr>
<td>TEST</td>
<td>26.11%</td>
<td>mP2T</td>
<td>0.78%</td>
</tr>
<tr>
<td>PT</td>
<td>38.75%</td>
<td>mT2P</td>
<td>0.78%</td>
</tr>
</tbody>
</table>
**Table 6. Rule ratios for the SIG tool.**
Compared to Checkstyle, the rule classification of the SIG software system results in more evenly partitioned rule classes. In particular, the ratios of PROD (35.15%) and PT (38.75%) are very close to each other. Furthermore, there is a high ratio of TEST rules (26.11%). The high ratio of PT is in line with the observations from the Change History View (Figure 3), where we observe a strong synchronous co-evolution of production and test code for the SIG software system. Not only are the rule class ratios evenly partitioned over pure coding and test-driven development, but the support distribution also presents a uniform picture (see Figure 3): PROD, TEST and PT rules show similar measurements, and so do the matching classes mPT, mP2T, and mT2P. The distributions are more uniform (resembling the normal distribution) and show less skewness than in the Checkstyle case.
Of interest to note is the surprisingly large set of TEST rules, which we attribute to commits that contain multiple pairs of production and test code. Such a big set of TEST rules can occur when some combinations of test classes occur often in the history. This can be the result of development cycles including a significant amount of testing.
Continuing on the fact that the number of TEST rules is high, we also see that the support for association rules of this rule class is low. The high confidence and conviction values for these rules must therefore result not from frequent, but from structural co-occurrences. That is, specific combinations of test classes frequently occur together in commits, but these test classes do not occur frequently in other combinations. This indicates that developers focus on writing tests for specific parts of the system. Talking to the developers of the SIG we learned that the software system is actually a collection of analysis tools that grows and changes over time. Developers are assigned to different customers, so their work on the tools is cross-cutting throughout the entire system; this causes more combinations of classes to occur and brings down the correlation between classes, and thus the support for PROD rules. Following the same reasoning, we expect tests to focus on specific parts of the code, as the correlation among tests is high, i.e., high confidence and interest. We shared our findings with the SIG developers, who confirmed our insights. The results led us to the following observation:
**Observation 3** High confidence and interest of only production classes (or only test classes) indicate that programmers focus on specific parts of the system (or the test suite).
**Co-evolution coverage in SIG** Table 7 lists the values obtained for the co-evolution rule coverage metrics.
<table>
<thead>
<tr>
<th>Coverage metric</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Production class coverage (PCC)</td>
<td>7.96</td>
</tr>
<tr>
<td>Matching production class coverage (mPCC)</td>
<td>0.32</td>
</tr>
<tr>
<td>Test class coverage (TCC)</td>
<td>11.79</td>
</tr>
<tr>
<td>Matching test class coverage (mTCC)</td>
<td>0.48</td>
</tr>
</tbody>
</table>
**Table 7. Co-evolution coverage metrics for SIG.**
We see that for every production class that is being changed, on average 7.96 test classes are also being changed. The other way around, we see that on average 11.79 production classes are being changed for each test class that is changed. For a more structural view, we look at the mPCC and mTCC values, which indicate how many association rules link matching production and test code. We find that for 32.22% of the production code classes, the associated test class was changed together with it at least once. Vice versa, 47.70% of the test classes were changed together with their production-code counterpart at least once.
These coverage metrics indicate that the SIG software development process does indeed follow a more test-driven development strategy, because we have indications that many of the test/production class pairs co-evolve.
Discussion In our industrial case study we observed that the SIG developers are following a development and testing strategy that resembles test-driven development. The first indication is given by the fact that the rule class ratios are fairly evenly distributed over PROD, TEST and PT. Another important indicator for test-driven development is the set of rule coverage ratios for the SIG software system. Here we saw that for each production class that has been changed, a significant number of test classes has also been changed (and vice versa). This phenomenon is also structural, as matched production and test class pairs have been changed together as well.
3.3 Answers to research questions
Based on the results obtained from the two case studies we can provide the answers to the research questions stated in the introduction of the paper.
RQ1 Can association rule mining be used to find evidence of co-evolution of production and test code? The results of our two case studies clearly showed that association rule mining is an adequate technique to investigate co-evolution in software systems. For the SIG case study we found evidence of co-evolution by looking at the PT rule class, which contained 38.75% of all rules, indicating many co-changes of production and test classes. Furthermore, high support and confidence values for the PT class (and its subclasses) provide further evidence for this co-evolution. For the Checkstyle case study, we did not get a clear indication of intentional co-evolution. This can be attributed to two factors: (1) the co-evolution only takes place during short periods of time, while our technique is mainly aimed at providing an overview of longer periods of time, and (2) a number of large commits of superficial changes to the production code led to an explosion of the number of PROD rules, biasing the results.
RQ2 Can we find measures to assess the extent to which product and test code co-evolve? Through our case studies we found that the extent of co-evolution can be measured by the PCC and TCC, respectively the mPCC and mTCC metrics, in combination with the confidence of association rules. In the case of SIG, high values for these metrics clearly indicated co-evolution of production and test classes. This result is validated by the Change History View (see Figure 3) and by the SIG developers. In the case of Checkstyle, the values for these metrics are significantly lower, indicating little co-evolution. This finding is underlined by the Change History View depicted in Figure 1.
RQ3 Can different patterns of co-evolution be observed in distinct settings, for example, open source versus industrial software systems? Our two case studies, of which one was an open source and one was an industrial software system, have shown two different development practices, which affirms a 'yes' to this question. Our metrics indicate test-driven development in the SIG software system, while this is not the case for Checkstyle. Note that our findings have been validated for these two systems, but should not be generalized to other open source and industrial software systems.
4 Threats to validity
We have identified a number of threats to validity, which we have classified into threats towards the (i) internal validity, (ii) external validity and (iii) construct validity.
Internal validity The case studies are subjective in the sense that they were performed by the developers of the tools. As a countermeasure we involve external sources of information in our evaluation. More specifically, we used (i) log messages from the developers to confirm or reject our observations — in particular for the observations from the Change History View [16, 19] — and (ii) we used the insights that we have previously obtained when researching the same software projects. These insights were confirmed by the developers of the software projects [16].
Our tool-chain might contain faults which explain the results of the case studies. As a countermeasure, we thoroughly tested our tool-chain.
External validity While we have chosen two case studies that are very different from each other — in terms of problem domain, in terms of closed/open source development, etc.—, they might not be representative. For example, during our case studies we have observed a test-driven-like development process, but at this point we are not sure whether our approach is also capable of detecting other development processes. We are currently planning other case studies in order to widen the scope further.
We use a simple heuristic that matches the class name of the unit of production code to the class name of the unit test, e.g., we matched String.java to StringTest.java. Our approach is purely based upon naming conventions and might not be generalizable, yet both our case studies adhered to it. This convention is also promoted in literature and tutorials [8, 6]. In order to analyze
case studies that do not follow such a naming convention,
a call-graph-based approach that associates test cases with
production classes can be used.
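The naming-convention heuristic above can be sketched as follows; the "Test" suffix is the convention our two case studies adhered to, and other projects may use different conventions:

```python
import os

def matches_unit_test(production_file, test_file):
    """Naming-convention heuristic: String.java is matched to StringTest.java.

    Compares base file names only, so the two files may live in
    different source trees (e.g., src/ vs. test/).
    """
    prod = os.path.splitext(os.path.basename(production_file))[0]
    test = os.path.splitext(os.path.basename(test_file))[0]
    return test == prod + "Test"

assert matches_unit_test("src/String.java", "test/StringTest.java")
assert not matches_unit_test("src/String.java", "test/ParserTest.java")
```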
Construct validity For the evaluation we use the version-
ing system’s log messages to confirm or reject our observa-
tions (also see [16, 19]). As no strict conventions are in
place for what should be specified in such messages, there
are large differences in the content and quality of log
messages across projects, tasks and developers. The external
evaluation, i.e., checking our conclusions with the original
developers, complements the internal evaluation as an addi-
tional source of validation.
We also identify two variation factors of the develop-
ment process with regard to the use of the version control
system. Firstly, the individual commit style — short cy-
cles, one commit per day, ... — of developers can influence
the results. A countermeasure in this area is using inter-
transactional association rule mining [15], which we see
as future research. Secondly, developers can use branch-
ing and as we are only studying the main branch, this might
interfere with our results. In the case of Checkstyle and
the SIG case, however, branching is not a common practice.
If a large part of a project’s development effort happens in
branches, it can be useful to specifically apply the approach
to these branches.
Finally, a remark on the limitations of studying the test-
ing process by analyzing the contents of a version control
system. The focus of our approach is on testing activities
that are performed by the developers themselves, i.e., unit
testing and integration testing, as these tests are typically
codified and stored alongside the production code. We ac-
knowledge that the testing process is much more than only
unit and integration testing, e.g., acceptance testing, yet, as
these acceptance tests are typically not stored in the version
control system, we have no means of involving these tests
in our approach.
5 Related work
The idea of analyzing the change history of software
systems was first put forward by Ball et al. in 1997 [2]. In this
section we will give an overview of some of the advances in
this area that are particularly close to our own research.
Fluri et al. investigate whether code comments are up-
dated when production code changes [7]. They use code
metrics and charts to study these changes. A major differ-
ence between our own approach and Fluri’s approach is that
they analyze the changes at the code level, while we remain
at the file level.
Both Hindle et al. [10] and German [9] look into multi-
ple dimensions of co-evolution of software artifacts. Hin-
dle et al. study whether release patterns can be detected in
software projects. That is, behavioral patterns in the revi-
sion frequency of four different artifacts: source code, test
code, build files and documentation. They observe repeat-
ing patterns around releases for distinct systems, but the
data shows large differences between the systems. Ger-
man meanwhile combines information from many different
sources, like mailing lists, version control logs, web sites,
software releases, documentation and source code, the so-
called software trails [9]. He correlates these trails to each
other in order to recover information such as: the growth of
the software system, the interaction between the contribu-
tors, the frequency and size of contributions, and important
milestones in the development.
We found two uses of association rule mining in litera-
ture. Zimmermann et al. [20] attempt to guide the work
of developers based on dependencies found in the change
history. For each change a developer makes, his support
tool guides the programmer along related changes in order
to suggest and predict likely changes, prevent errors due to
incomplete changes and identify couplings that are unde-
tectable by program analysis. Their approach derives asso-
ciation rules in real time while the programmer is writing
code. As such, their approach does not build a descrip-
tive model of the data, but rather a predictive model. Xing
and Stroulia use association rule mining to detect class co-
evolution [18]. They apply the mining at the class level,
and are able to detect several class co-evolution instances.
They also intend to give advice to developers on what ac-
tion to take for modification requests, based on experiences
learned from past evolution activities.
6 Conclusion and future work
In this paper we have used association rule mining to study the co-evolution between production code and test code. In this context, we make the following contributions:
• An approach using association rule mining to study the co-evolution of production and test code in a system. Co-evolution rules are computed from commit transactions obtained from version control data of production and test classes.
• A set of co-evolution metrics, including standard interest and strength association rule mining metrics, to assess the extent to which production and test classes co-evolve.
• An evaluation with two case studies, one performed with the open source software project Checkstyle, and another one performed with an industrial software system provided by the Software Improvement Group. In both case studies, the findings have been evaluated and validated with the findings of our previous research and with the original developers/maintainers of the software systems under study.
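The rule computation in the first contribution can be illustrated with a minimal sketch. The file names, commit transactions, and the example rule below are made up; the support and confidence definitions are the standard association rule mining metrics.

```python
def support(commits, itemset):
    """Fraction of commit transactions that contain every item in `itemset`."""
    return sum(1 for files in commits if itemset <= files) / len(commits)

def confidence(commits, antecedent, consequent):
    """Confidence of the co-change rule antecedent -> consequent."""
    return support(commits, antecedent | consequent) / support(commits, antecedent)

# Hypothetical commit transactions: the set of files changed in each commit.
commits = [
    {"Foo.java", "FooTest.java"},
    {"Foo.java", "FooTest.java"},
    {"Foo.java"},
    {"Bar.java", "BarTest.java"},
]

# How often does a change to Foo.java come with a change to its test class?
# 2 of the 3 commits touching Foo.java also touch FooTest.java.
print(confidence(commits, {"Foo.java"}, {"FooTest.java"}))
```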
The two case studies that we performed have shown greatly differing testing approaches. In the case of Checkstyle, we saw a very mixed picture at first, since we observed that most of the commits are dominated by changes to production code. This is (1) due to the development style, where testing is mainly done in phases outside of regular development (this is true during the early development of Checkstyle), but also (2) due to a small number of large commits of production code that perturb the rule classification (these large commits are due to code beautification). Our industrial case study, on the other hand, has shown a test-driven style of development, evidenced by a large number of commits that contained both additions/changes to production and test code.
The analysis techniques that we have explored in this work prove to be useful for (retrospective) assessment of the unit test suite. A weak point of our approach, however, is the fact that changes to the testing practices over small periods of time will not yield noticeable differences in the results, as our technique summarizes the entire history.
Future work. We have identified a number of ideas to build upon this research.
- The use of an inter-transactional association rule mining algorithm, which would allow us to widen our analysis from a single commit to a window of commits that were made in a short amount of time [15].
- The automatic identification and removal of very large commits that are often the result of an automated code-beautification operation. Removing these commits will sharpen the results from our analysis.
- Traversing the change history with a sliding window, so that time-intervals can be studied more in depth, details become more clear and trends can be identified.
Acknowledgments. We would like to thank the Software Improvement Group for their support during this research and Bas Corneliussen and Bart Van Rompaey for proofreading this paper. Funding for this research came from the NWO Jacquard Reconstructor project and from the Centre for Dependable ICT Systems (CeDICT).
References
Solver-based Approaches for Robust Multi-index Selection Problems with Stochastic Workloads and Reconfiguration Costs
Marcel Weisgut, Leonardo Hübscher, Oliver Nordemann and Rainer Schlosser
Hasso Plattner Institute, University of Potsdam, Potsdam, Germany
Keywords: Resource Allocation Problems, Stochastic Workloads, Index Selection, Robustness, Linear Programming.
Abstract: Fast processing of database queries is a primary goal of database systems. Indexes are a crucial means for the physical design to reduce the execution times of database queries significantly. Therefore, it is of great interest to determine an efficient selection of indexes for a database management system (DBMS). However, as indexes cause additional memory consumption and the storage capacity of databases is limited, index selection problems are highly challenging. In this paper, we consider a basic index selection problem and address additional features, such as (i) multiple potential workloads, (ii) different risk-averse objectives, (iii) multi-index configurations, and (iv) reconfiguration costs. For the different problem extensions, we propose specific model formulations, which can be solved efficiently using solver-based solution techniques. The applicability of our concepts is demonstrated using reproducible synthetic datasets.
1 INTRODUCTION
In this paper, we consider resource allocation problems in database systems using means of quantitative methods and operations research. Specifically, to be able to run database workloads efficiently, we optimize whether and where to store certain auxiliary data structures such as indexes.
1.1 Background
Indexes in a relational database system are auxiliary data structures used to reduce the execution time required for generating the result of a database query. The shorter the execution time of a workload’s query set, the more queries can be executed per time unit. Consequently, reducing query execution times implicitly increases the throughput of the database. Indexes are data structures that have to be stored in addition to the stored data of a database itself, which leads to additional memory consumption and increases the overall memory footprint of the database. Memory capacity is limited and, therefore, a valuable resource. For this reason, it is important to take the memory consumption into account for decision making about which indexes to store in the system’s memory.
For a single database query, multiple indexes may exist, each of which can improve the query execution time differently. Table 1 shows an exemplary scenario in which different combinations of indexes lead to different execution times of a single hypothetical example query. The first combination without any index leads to the longest execution time of the query with 500 milliseconds. The best execution time of 300 milliseconds can be achieved by using both index 1 and index 2. However, the best performing combination regarding the execution time also involves the largest memory footprint. The second-best solution from an execution time perspective results in an index memory consumption of only 40% of the optimal solution and is only about 17% slower. This simple example illustrates the need to consider the index memory consumption for selecting which indexes should be used.
In real-world database scenarios, a DBMS processes more than only a single query. Instead, a set of queries is executed on a database, with a certain frequency in a specific time frame for each query.
<table>
<thead>
<tr>
<th>Usage of index 1</th>
<th>Usage of index 2</th>
<th>Total memory footprint</th>
<th>Query execution time</th>
</tr>
</thead>
<tbody>
<tr>
<td>false</td>
<td>false</td>
<td>0 MB</td>
<td>500 ms</td>
</tr>
<tr>
<td>true</td>
<td>false</td>
<td>100 MB</td>
<td>350 ms</td>
</tr>
<tr>
<td>false</td>
<td>true</td>
<td>150 MB</td>
<td>400 ms</td>
</tr>
<tr>
<td>true</td>
<td>true</td>
<td>250 MB</td>
<td>300 ms</td>
</tr>
</tbody>
</table>
Table 1: Sample index combinations with their memory consumption and the resulting execution times of a hypothetical query.
The set of queries with their frequencies is referred to as the workload. Executing a workload using a selection of indexes has a certain performance. This performance is characterized by the total execution time of the workload and the selected indexes' memory consumption. The workload execution time should be as low as possible, and the index memory consumption must not exceed a specific memory budget.
An additional challenge of selecting the set of indexes that shall be present and used by queries is index interaction. "Informally, an index \( a \) interacts with an index \( b \) if the benefit of \( a \) is affected by the presence of \( b \) and vice-versa." (Schnaitter et al., 2009) For example, assume a particular index \( i \) for a subset \( S \) of the overall workload may provide the best performance improvement for each query in that subset. There is also no other index that has a better accumulated performance improvement. Suppose \( i \) now has such a high memory consumption that the available index memory budget is completely spent. In that case, no other index can be created. Therefore, only queries of the subset \( S \) are improved by index \( i \). Another index selection might be worse for the workload subset \( S \) but better for the overall workload. Consequently, a (greedily chosen) single index whose accumulated performance improvement is the highest is not necessarily in the set of indexes that provides the best performance improvement for the total workload.
1.2 Contribution
In this work, we present solver-based approaches to address specific challenges of index selection that occur in practice. Besides one basic problem, solution concepts for four extended problem versions are proposed. Our contributions are the following:
- We study solver-based approaches for single- and multi-index selection problems.
- We use a flexible chunk-based heuristic approach to attack larger problems.
- We consider extensions with multiple stochastic workload scenarios and reconfiguration costs.
- We derive risk-aware index selections using worst case and variance-based objectives.
- We use reproducible examples to test our approaches, which can be easily combined.
The remainder of this work is structured as follows. Section 2 summarizes related work. In Section 3, the various index selection problems are formulated, and their solutions are presented. Section 4 then briefly describes how the models were implemented. An evaluation of the developed models is given in Section 5. In Section 6, we discuss future work. Finally, Section 7 concludes this work.
2 RELATED WORK
Index recommendation and automated selection have been in the focus of database research for many years and are still important today, particularly with the rise of self-optimizing databases (Pavlo et al., 2017; Kossmann and Schlosser, 2020). Next, we give an overview of index selection algorithms.
An overview of the historic development as well as an evaluation of index selection algorithms is summarized by Kossmann et al. (Kossmann et al., 2020). Current state-of-the-art index selection algorithms are, e.g., AutoAdmin (Chaudhuri and Narasayya, 1997), DB2Advis (Valentin et al., 2000), CoPhy (Dash et al., 2011), DTA (Chaudhuri and Narasayya, 2020), and Extend (Schlosser et al., 2019). All those selection approaches focus on deterministic workloads. Risk-aversion in case of multiple potential workloads is not supported. As typically iterated or recursive methods are used, it is not straightforward how they have to be amended to address the extensions considered in this paper, such as multiple workloads, risk-aversion, or transition costs.
Early approaches tried to derive optimal index configurations by evaluating attribute access statistics (Finkelstein et al., 1988). Newer index selection approaches are mostly coupled with the query optimizer of the database system (Kossmann et al., 2020). By doing so, the cost models of the index selection algorithm and the optimizer are the same. As a result, the benefit of considered indexes can be estimated consistently (Chaudhuri and Narasayya, 1997). As optimizer invocations are costly, especially for complex queries, along with improved index selection algorithms, techniques to reduce and speed up optimizer calls have been developed (Chaudhuri and Narasayya, 1997; Papadomanolakis et al., 2007; ?).
An increasing number of possible optimizer calls for index selection algorithms opens the possibility to investigate an increasing number of index candidates. Compared to greedy algorithms (Chaudhuri and Narasayya, 1997; Valentin et al., 2000), approaches using mathematical optimization are able to efficiently evaluate index combinations. In this context, we perceive a shift away from greedy algorithms (Chaudhuri and Narasayya, 1997; Valentin et al., 2000) towards approaches using mathematical optimization models and methods of operations research, especially integer linear programming (ILP) (Casey, 1972; Dash et al., 2011). A major challenge
of these solver-based approaches is to deal with the increasing complexity of integer programs. An obvious solution is reducing the number of initially considered index candidates, which may, however, reduce the solution quality.
Alternatively, machine learning-based approaches for index selection are an emerging research direction. For example, deep reinforcement learning (RL) has already been applied, e.g., (Sharma et al., 2018) or (Kossmann et al., 2022). Such approaches, however, require extensive training, are still limited with regard to large workloads or multi-attribute indexes, and do not support risk-averse optimization criteria.
3 SOLUTION APPROACH
In this section, a basic index selection problem and its extensions are formulated, and the solutions for each problem are presented. Section 3.1 describes a basic index selection problem, which is considered the basic problem in this work. In addition to the problem’s description, we formulate an integer linear programming model, which can solve this problem. Sections 3.2, 3.3, 3.4, and 3.5 each describe an extension of the basic problem and explain which adjustments can be made to the solution of the basic problem to solve the specialized problems. Finally, Section 3.6 describes the problem in which all advanced problems were combined.
3.1 Basic Problem
In this subsection, we first describe a basic version of the index selection problem, which resembles typical properties. The basic index selection problem is about finding a subset of a given set of index (multi-attribute) candidates used by a hypothetical database to minimize the total execution time of a given workload. The given workload consists of a set of queries and a frequency for each query. A query can use no index or exactly one index for support. Different indexes induce different improvements for a single query. As a result, the execution time of a query highly depends on the used index. A query has the longest execution time if no index is used. For each query, it has to be decided whether and which index is to be used. Only if at least one query uses an index, the index can belong to the set of selected indexes. Each index involves a certain amount of memory consumption. The total memory consumption of the selected indexes must not exceed a predefined index memory budget.
Table 2: Basic parameters and decision variables.
<table>
<thead>
<tr>
<th>Designation</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>( I )</td>
<td>parameter</td>
<td>number of indexes</td>
</tr>
<tr>
<td>( Q )</td>
<td>parameter</td>
<td>number of queries</td>
</tr>
<tr>
<td>( M )</td>
<td>parameter</td>
<td>index memory budget</td>
</tr>
<tr>
<td>( t_{q,i} )</td>
<td>parameter</td>
<td>execution time of query ( q = 1, \ldots, Q ) using index ( i = 0, \ldots, I; i = 0 ) indicates that no index is used by query ( q )</td>
</tr>
<tr>
<td>( m_i )</td>
<td>parameter</td>
<td>memory consumption of index ( i = 1, \ldots, I )</td>
</tr>
<tr>
<td>( f_q )</td>
<td>parameter</td>
<td>frequency of query ( q = 1, \ldots, Q )</td>
</tr>
<tr>
<td>( u_{q,i} )</td>
<td>decision variable</td>
<td>binary variable whether index ( i = 0, \ldots, I ) is used for query ( q = 1, \ldots, Q; i = 0 ) indicates that no index is used by query ( q )</td>
</tr>
<tr>
<td>( v_i )</td>
<td>decision variable</td>
<td>binary variable whether index ( i = 1, \ldots, I ) is used for at least one query</td>
</tr>
</tbody>
</table>
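The model (1) - (5) itself does not appear in this extract. From Table 2 and the constraint descriptions that follow, it presumably reads (a reconstruction, consistent with the configuration variants (6) - (8) given in Section 3.3):

\[
\text{minimize} \quad \sum_{q=1}^{Q} \sum_{i=0}^{I} f_q \cdot t_{q,i} \cdot u_{q,i} \tag{1}
\]
\[
\text{subject to} \quad \sum_{i=1}^{I} m_i \cdot v_i \leq M \tag{2}
\]
\[
\sum_{i=0}^{I} u_{q,i} = 1, \quad q = 1, \ldots, Q \tag{3}
\]
\[
\sum_{q=1}^{Q} u_{q,i} \geq v_i, \quad i = 1, \ldots, I \tag{4}
\]
\[
\frac{1}{Q} \sum_{q=1}^{Q} u_{q,i} \leq v_i, \quad i = 1, \ldots, I \tag{5}
\]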
The objective (1) minimizes the execution time of the overall workload taking into account the index usage for queries, the index-dependent execution times,
and the frequency of queries. The constraint (2) ensures that the selected indexes do not exceed the given memory budget \( M \). Constraint (3) ensures that a maximum of one index is used for a single query. Here, a unique option has to be chosen including the no index option. Thus, if \( u_{q,i} \) with \( i = 0 \) is true, no index is used for query \( q \). The constraints (4) and (5) are required to connect \( u_{q,i} \) with \( v_i \). If no query uses a specific index \( i \), constraint (4) ensures that \( v_i \) is equal to 0 for that index. If at least one query uses index \( i \), constraint (5) ensures that \( v_i \) is equal to 1 for that index.
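As a sanity check of this constraint logic, the hypothetical query of Table 1 can be solved by brute force. The 150 MB budget below is an assumed value for illustration; note that the basic model allows at most one index per query, so the 300 ms combination of Table 1 is out of reach here.

```python
from itertools import product

# Hypothetical instance from Table 1: one query, two candidate indexes.
# t[q][i]: execution time of query q using index i (i = 0 means "no index").
t = [[500, 350, 400]]   # times in ms for i = 0, 1, 2
m = [100, 150]          # memory consumption of index 1 and 2 (MB)
f = [1]                 # query frequencies
M = 150                 # assumed index memory budget (MB)

best = None
# Enumerate one index choice u[q] in {0, ..., I} per query, cf. (3).
for u in product(range(len(m) + 1), repeat=len(t)):
    # v_i = 1 iff some query uses index i, cf. (4) - (5).
    v = [1 if any(uq == i + 1 for uq in u) else 0 for i in range(len(m))]
    if sum(mi * vi for mi, vi in zip(m, v)) > M:   # budget constraint (2)
        continue
    cost = sum(f[q] * t[q][u[q]] for q in range(len(t)))  # objective (1)
    if best is None or cost < best[0]:
        best = (cost, u)

print(best)  # (350, (1,)): with a 150 MB budget only index 1 pays off
```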
3.2 Chunking
The number of possible solutions of the index selection problem grows exponentially with the number of index candidates. Databases for modern enterprise applications consist of hundreds of tables and thousands of columns, which leads to long execution times for finding the optimal solution of the growing problem. In this extension, the set of possible indexes is split into chunks. The index selection problem (1) - (5) is then solved with the reduced set of indexes for each chunk, and the indexes of the optimal solution are returned. After solving the problem for each chunk, the best indexes of each chunk advance to the second round, in which the reduced number of remaining indexes is used for a final selection, again via (1) - (5).
The approach allows an effective problem decomposition and accounts for index interaction. Naturally, chunking remains a heuristic approach, as it does not guarantee an optimal solution, but the main advantage is to avoid large problems. Of course, chunks should not be chosen too small, as splitting the global problem into too many local problems can also add overhead (see the evaluations presented in Section 5.4). Another advantage is that, in the first round, all chunks could be solved in parallel, reducing the overall execution time even further.
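The two-round heuristic can be sketched as follows. The exhaustive `solve` routine below stands in for the ILP solver of the paper and is only feasible for small chunks; the toy instance and all names are made up. For a fixed index selection, the optimal per-query assignment of the basic model is simply the fastest allowed option, which is what `workload_cost` exploits.

```python
from itertools import combinations

def workload_cost(selection, t, f):
    """Total workload time when each query picks its fastest allowed option
    (index from `selection`, or no index: t[q][0])."""
    return sum(
        f[q] * min([times[0]] + [times[i + 1] for i in selection])
        for q, times in enumerate(t)
    )

def solve(candidates, t, f, m, M):
    """Exact solution of the reduced problem: enumerate feasible subsets."""
    best_cost, best_sel = workload_cost((), t, f), ()
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            if sum(m[i] for i in subset) <= M:      # memory budget
                cost = workload_cost(subset, t, f)
                if cost < best_cost:
                    best_cost, best_sel = cost, subset
    return best_sel

def chunked_select(t, f, m, M, chunk_size):
    """Round 1: solve each chunk independently; round 2: final pass over
    the winners of all chunks."""
    winners = []
    for start in range(0, len(m), chunk_size):
        winners += solve(range(start, min(start + chunk_size, len(m))), t, f, m, M)
    return solve(winners, t, f, m, M)

# Toy instance: 2 queries, 4 candidate indexes (t[q][0] = time without index).
t = [[100, 60, 90, 100, 100],
     [100, 100, 100, 50, 70]]
f = [1, 1]
m = [40, 40, 40, 40]
M = 80
print(chunked_select(t, f, m, M, chunk_size=2))  # (0, 2)
```

In round 1 the chunks are independent, so in a real implementation they could be solved in parallel, as noted above.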
3.3 Multi-index Configuration
Our basic problem introduced in Section 3.1 cannot handle the interaction of indexes described in the introduction: one query can be accelerated by more than one index, and the performance gain of an index can be affected by other indexes. We tackle this part of the index selection problem by adding one level of indirection called index configurations.
An index configuration maps to a set of indexes. Assuming the index selection problem has ten indexes, then the first configuration (configuration 0) means that no index is used for this query. The next ten possible configurations point to the respective indexes, e.g., configuration 1 points to index 1, configuration 2 points to index 2, configuration 10 points to index 10. The subsequent configurations map to sets containing combinations of two indexes. Database queries could be accelerated by more than two indexes, but we simplified the configurations in our implementation so that they can consist of a maximum of two different indexes. We use a binary parameter \( d_{c,i} \) indicating whether configuration \( c \) contains the index \( i \). \( c = 0, \ldots, C \) and \( i = 0, \ldots, I \) with \( C \) being the number of index configurations and \( I \) being the number of indexes. Furthermore, we assume that ten percent of all possible index combinations will interact in configurations. Our approach to index selection works with configurations in the same way as with indexes, cf. (1) - (5). The constraints (3) - (5) of the basic problem, cf. Section 3.1, are adapted in the following way for multi-index configurations:
\[
\sum_{c=0}^{C} u_{q,c} = 1, \quad q = 1, \ldots, Q \tag{6}
\]
\[
\sum_{q=1}^{Q} u_{q,c} \cdot d_{c,i} \geq v_i, \quad c = 1, \ldots, C \quad i = 1, \ldots, I \tag{7}
\]
\[
\frac{1}{Q} \sum_{q=1}^{Q} u_{q,c} \cdot d_{c,i} \leq v_i, \quad c = 1, \ldots, C \quad i = 1, \ldots, I \tag{8}
\]
Again, the binary variable \( u_{q,c} \) is used to control whether configuration \( c \) is used for query \( q \), and \( C \) is the number of index configurations. Similar to the basic approach, \( c = 0 \) represents a configuration that contains no index. Constraint (6) ensures that a single query uses exactly one configuration option instead of one index. In constraints (7) - (8), the parameter \( d_{c,i} \) is included to activate the indexes of used configurations.
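The enumeration of configurations and the membership parameter \( d_{c,i} \) can be sketched as follows (a hypothetical helper, limited to pairs as in our implementation):

```python
from itertools import combinations

def build_configurations(num_indexes):
    """Configuration 0 = no index; then all singletons; then all unordered
    pairs of indexes. Returns the configurations and the membership matrix
    d[c][i] (1 iff configuration c contains index i, for i = 1, ..., I)."""
    configs = [frozenset()]
    configs += [frozenset({i}) for i in range(1, num_indexes + 1)]
    configs += [frozenset(p) for p in combinations(range(1, num_indexes + 1), 2)]
    d = [[1 if i in c else 0 for i in range(num_indexes + 1)] for c in configs]
    return configs, d

configs, d = build_configurations(10)
print(len(configs))  # 1 + 10 + C(10, 2) = 56
```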
3.4 Stochastic Workloads
Until now, we considered a single given workload only. However, in the context of enterprise applications, we could imagine that each day of the week has a different workload. For example, the workloads on the weekend could contain fewer requests compared to a workload during the week. In this section, we propose an approach that can take multiple workloads into account. The solution seeks to provide a robust index selection, where robust means that the performance is good no matter which workload may occur.
First, the expected total workload cost \( T \) across all workloads is calculated as
\[
T = \sum_{w=1}^{W} g_w \cdot \frac{k_w}{\sum_{w'=1}^{W} k_{w'}} \tag{9}
\]
where $W$ is the number of different workloads. To describe workload probabilities, we use the intensities $k_w$, $w = 1, \ldots, W$. Further, the execution time $g_w$ of a workload $w$ is determined by (10), with $f_{w,q}$ being the frequency of query $q$ in workload $w$, $w = 1, \ldots, W$, i.e.,
$$g_w = \sum_{q=1}^{Q} \sum_{c=0}^{C} u_{q,c} \cdot t_{q,c} \cdot f_{w,q}$$
(10)
The information whether a configuration $c$ is being used for a query $q$ of a workload $w$ is shared between the workloads, leaving it to the solver to minimize the total costs across all workloads.
Figure 1 shows exemplary total workload costs when minimizing the global execution time. It can be seen that the actual costs of the individual workloads differ a lot, leading to poor performance for some workloads in favor of others. We use two different approaches to make the index selection more robust.
The first one includes the worst-case performance by penalizing the total costs with the maximum workload costs as additional costs. The maximum workload costs $L$ (modelled as a continuous variable) are determined by the constraint:
$$L \geq g_w \ \forall w = 1, \ldots, W$$
(11)
The following (mixed) ILP, cp. (1) - (5), now includes this maximum workload cost $L$ in the objective using the penalty factor $a \geq 0$, cf. (9) - (10):
$$\text{minimize} \quad T + a \cdot L \quad \text{subject to} \quad (v, u) \in \{0,1\}^{I + Q \cdot (I+1)}, \; L \in \mathbb{R}$$
(12)
Figure 2 illustrates a typical solution leading to better worst-case costs, cp. Figure 1. However, the costs of other workloads increased, also leaving a bigger gap between the cheapest workload and the rest.
To obtain robust performances, the second approach uses the variance $V$, cf. (9) - (10),
$$V = \sum_{w=1}^{W} (g_w - T)^2 \cdot \frac{k_w}{\sum_{w'=1}^{W} k_{w'}}$$
(13)
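The mean-variance problem (14), referenced in the following paragraph, is missing from this extract. By analogy with (12), it presumably penalizes $V$ with a factor $b \geq 0$:

$$\text{minimize} \quad T + b \cdot V \quad \text{subject to} \quad (v, u) \in \{0,1\}^{I + Q \cdot (I+1)}$$
(14)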
Remark that the problem, cf. (14), becomes a binary quadratic programming (BQP) problem by using the variance $V$ in the penalty term. Using the mean-variance criterion (14) typically leads to results as illustrated in Figure 3: all workload costs are now within a similar range. In comparison to the previous figures, most workloads have been brought into a plannable range; the total costs, however, may not be reduced, which makes the result indeed more robust but less effective in the end. A third option to resolve this issue would be to use the semi-variance instead of $V$. Similar to the variance, the semi-variance can be used to penalize only those workloads whose costs are higher than the mean cost of all workloads, i.e., workloads with lower costs would not increase the applied penalty.
Finally, the proposed risk-averse approaches enable us to use potential workloads (e.g., observed in the past) to optimize index selections for stochastic future scenarios under risk considerations.
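For given workload costs $g_w$ and intensities $k_w$, the robustness metrics (9), (11) and (13) can be computed directly; the numbers below are made up for illustration.

```python
def robustness_metrics(g, k):
    """Expected cost T (9), worst-case cost L (11), and variance V (13)."""
    total_k = sum(k)
    T = sum(gw * kw / total_k for gw, kw in zip(g, k))
    L = max(g)
    V = sum((gw - T) ** 2 * kw / total_k for gw, kw in zip(g, k))
    return T, L, V

# Hypothetical costs g_w of five workloads with equal intensities k_w.
g = [100.0, 120.0, 80.0, 200.0, 100.0]
k = [1.0, 1.0, 1.0, 1.0, 1.0]
T, L, V = robustness_metrics(g, k)
print(T, L, V)  # 120.0 200.0 1760.0
```

The worst-case penalty of (12) targets $L$, while the mean-variance criterion targets $V$; both are simple functions of the per-workload costs once the index selection is fixed.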
Table 3: Transition cost calculation example.
<table>
<thead>
<tr>
<th>#</th>
<th>mkᵢ</th>
<th>rmᵢ</th>
<th>vᵢ</th>
<th>vᵢ⁺</th>
<th>RM + MK</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>50</td>
<td>10</td>
<td>1</td>
<td>1</td>
<td>0 (keep)</td>
</tr>
<tr>
<td>2</td>
<td>20</td>
<td>5</td>
<td>0</td>
<td>1</td>
<td>20 (create)</td>
</tr>
<tr>
<td>3</td>
<td>100</td>
<td>30</td>
<td>1</td>
<td>0</td>
<td>30 (remove)</td>
</tr>
<tr>
<td>4</td>
<td>10</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0 (skip)</td>
</tr>
</tbody>
</table>
Total transition costs: 50
3.5 Transition Costs
In the previous subsections, we showed how to deal with different workloads, e.g., on consecutive days. In this problem extension, we consider the costs of a transition from one index configuration to another. We assume that the database removes indexes that are no longer used and loads indexes that are to be used into memory. Typically, the database needs to perform some I/O operations, which are time-consuming and generate additional costs. We model such creation and removal costs in our final extension in order to reduce transition costs.
To adapt the index configurations, the algorithm identifies the differences between the previous configuration (now characterized by the parameters \( v_i \)) and a new target configuration governed by the variables \( v_i^+ \). For each removal of an index \( i \), the algorithm looks up the removal costs \( rm_i \) and adds them to the total removal costs \( RM \). Analogously, the algorithm calculates the total creation costs \( MK \) using the creation costs \( mk_i \) of index \( i \). The sum of the removal and creation costs is then added to any of the previous objectives, which penalizes expensive transitions. The costs can be modelled linearly:
\[
RM = \sum_{i=1,\ldots,I} v_i \cdot (1 - v_i^+) \cdot rm_i \tag{15}
\]
\[
MK = \sum_{i=1,\ldots,I} (1 - v_i) \cdot v_i^+ \cdot mk_i \tag{16}
\]
An index is removed if it was used before (\(v_i = 1\)) but is absent from the new configuration (\(v_i^+ = 0\)), and created in the opposite case.
Table 3 shows an exemplary calculation of the transition costs between two index selections. The previous selection \((v)\) uses indexes 1 and 3; the new selection \((v^+)\) uses indexes 1 and 2. The resulting transition costs are 50.
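Using the semantics of Table 3 (an index present before but absent in the new selection is removed; one absent before but present now is created), the transition costs can be recomputed in a few lines. This sketch is ours; the row tuples simply restate Table 3.

```python
# Rows follow Table 3: (creation cost mk, removal cost rm,
#                       previous state v, new state v_plus).
rows = [(50, 10, 1, 1),   # keep   -> no cost
        (20,  5, 0, 1),   # create -> mk = 20
        (100, 30, 1, 0),  # remove -> rm = 30
        (10,  1, 0, 0)]   # skip   -> no cost

# Total removal costs: index used before (v = 1) but not anymore (v_plus = 0).
RM = sum(v * (1 - vp) * rm for mk, rm, v, vp in rows)
# Total creation costs: index not used before (v = 0) but used now (v_plus = 1).
MK = sum((1 - v) * vp * mk for mk, rm, v, vp in rows)

print(RM + MK)  # 50, as in Table 3
```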
3.6 Combined Problem
All extensions described in the previous subsections were developed on top of the basic approach. To show that the extensions compose, we also implemented an all-in-one solution that integrates all of them. In most cases, this is straightforward, as the key concepts are independent of each other and no coupling is involved. However, combining multiple components increases the model's complexity. Hence, we recommend using only those features that are really needed in a specific application.
4 IMPLEMENTATION
To evaluate the described approaches, we implemented our different models using AMPL\(^1\), a modeling language tailored for optimization problems (Fourer et al., 2003). The syntax of AMPL is quite close to algebraic notation and should be easy to read and understand, even for readers who have never seen AMPL before. AMPL translates the given model into a format that solvers can understand.
The solver is a separate program that needs to be specified by the developer. Our approach is based on linear/quadratic programming with integer variables. Both solvers, CPLEX\(^2\) and Gurobi\(^3\), are suited to solve the index selection task. A first test showed that Gurobi is faster than CPLEX in most cases, which is why we used Gurobi.
The .run-file contains information about the selected solver and loads the specified model and data files. After a given problem has been solved, the solution is displayed. The .mod-file contains the description of the mathematical model, i.e., the parameters, constraints, and objectives used. The input data required for solving a certain problem is specified in the .dat-file. All code files are available on GitHub\(^4\). Our implementation in AMPL allows the reader to evaluate the different approaches and to reproduce our results, see Section 5.
5 EVALUATION
In this section, we evaluate our approach. The considered setup and the input data are described in Section 5.1 and Section 5.2, respectively. In Section 5.3, we reflect on the scalability of the basic approach. Then, in Section 5.4, we investigate when index chunking is beneficial for the performance compared to the basic approach and reflect on the cost
---
1https://AMPL.com/
2https://www.ibm.com/de-de/analytics/cplex-optimizer
3https://www.gurobi.com/de/
4https://github.com/mweisgut/DDDM-index-selection
trade-off that the heuristic entails. Afterward, in Section 5.5, we determine the computational overhead of the multi-index extension. Lastly, in Section 5.6, we take a more in-depth look into the stochastic workload extension, evaluating the impact of the different robustness measures and the trend of costs depending on the number of potential workloads.
5.1 Evaluation Setup
All performance measurements were performed on the same machine, featuring an Intel i5 8th generation CPU (4 cores) and 8 GB of memory. All measurements were repeated three times. For each time measurement, we used the AMPL built-in function _total_solvetime, which returns the user and system CPU seconds used by all solve commands.
The final value was determined as the mean of the three measurement results. All unrelated applications were closed to reduce side effects of the operating system.
5.2 Datasets
The datasets used for the evaluation are generated randomly, using multiple fixed random seeds. Each dataset is defined by the number of indexes, the number of queries, and the available memory budget. The algorithm provided in the index-selection.data-file then generates the execution time of each query, depending on the utilized index. First, the “original” execution time of a query without using an index is chosen randomly within the interval [10; 100]. Based on the drawn costs, the speedup for each index is calculated by choosing a random value between the “original” costs and a 90% speedup. The memory consumption of an index is an integer between 1 and 20, and the query frequencies lie between 1 and 1 000.
The extensions that are applied on top of the basic approach introduce further parameters that need to be generated. For the stochastic workload extension, we introduced a workload intensity, which is drawn randomly for each workload. The same applies to the transition cost extension, where the creation and removal costs are random. The multi-index configurations extension requires a more complex generation process, since each index configuration should be a unique set of indexes. Configuration zero represents the option that no index is used. The configurations 1 to $I$ point to their respective single index. All other generated configurations consist of up to two indexes, where the combinations are drawn randomly. A second data structure ensures that no index combination is used multiple times. The speedup $s$ for a combination consisting of two indexes $i$ and $j$ is then calculated by the following formulas:
\[
\text{speedup}^{\min}_{i,j} = \max(s_i, s_j) \tag{17}
\]
\[
\text{speedup}^{\max}_{i,j} = s_i + s_j \tag{18}
\]
The minimum and the maximum speedup are then passed to a function that returns a uniformly distributed random number within the interval \([\text{speedup}^{\min}_{i,j}; \text{speedup}^{\max}_{i,j}]\), cf. (17) - (18).
Outsourcing the generation of input data into the index-selection.data-file allows for an easy replacement with actual data, e.g., benchmarking data of a real system. However, this also enables the reader to validate basic example cases on their own.
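The generation procedure described above can be re-created as a small sketch. This is our own approximation of the stated intervals, not the actual routine from the index-selection.data-file; the function names and the clamping of the combined speedup to at most the base cost are our assumptions.

```python
import random

random.seed(42)  # fixed seed, as in the evaluation setup

def generate_query(num_indexes):
    """Draw the random per-query data from the described intervals."""
    base = random.uniform(10, 100)               # cost without any index
    # per-index cost between the base cost and a 90% speedup
    costs = [random.uniform(0.1 * base, base) for _ in range(num_indexes)]
    memory = [random.randint(1, 20) for _ in range(num_indexes)]
    frequency = random.randint(1, 1000)
    return base, costs, memory, frequency

def pair_cost(cost_i, cost_j, base):
    """Cost of a two-index configuration, drawn between (17) and (18)."""
    s_i, s_j = base - cost_i, base - cost_j      # single-index speedups
    lo, hi = max(s_i, s_j), s_i + s_j            # (17) and (18)
    # clamp so the combined cost cannot become negative (our assumption)
    return base - random.uniform(lo, min(hi, base))
```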
5.3 Basic Index Selection Solution
"The complexity of the view- and index-selection problem is significant and may result in high total cost of ownership for database systems." (Kormilitzin et al., 2008) In this section, we evaluate our basic solution, cf. Section 3.1. We show the scalability of our implemented solution, which we later compare to the chunking approach. We set the memory limit to 100 units and assume 100 queries with frequencies drawn uniformly between 1 and 1 000. To test the scalability on our machine, we generate data with 50, 100, 150, ..., 1 450, 1 500 index candidates. The measured results are shown in Figure 4; the total solve time on the y-axis uses a logarithmic scale.
Figure 4 shows the growing total solve time as the number of indexes rises. With an increasing index count, the execution times also vary more. Naturally, the solve time depends on the specific generated data input. In some cases with over 1,000 indexes, the generated input could not be solved with our setup in a meaningful time. Note that the number of possible combinations of the index selection problem grows exponentially.
In order to limit the number of index candidates, one might only consider smaller subsets of a workload’s queries that are responsible for the majority of the workload costs.
5.4 Index Chunking
To tackle the exponentially growing number of admissible index combinations, we divide the problem into chunks, find the best indexes of each chunk, and then find the best indexes among the winners of all chunks, cf. Section 3.2. Compared to the basic index selection solution, problems with many more indexes can be solved with chunking. Figure 5 shows the total solve time with chunking in orange and without chunking in blue. The other parameters were fixed (see Section 5.3). The orange dots of the chunking approach show a linear relationship between an index count of 500 and 2,500. In the beginning, the total solve time of the chunking curve has a higher gradient: the overhead introduced by splitting the indexes into chunks does not always have a positive impact on the total execution time of our linear program. The chunking solution shows less scatter than the basic solution. In each execution, some chunks are solved faster than the mean and others need more time. The long and short solve times of single chunks balance each other, and chunking leads to lower variation.
As described in Section 3.2, the heuristic chunking approach may cause the final solution not to be a global optimum of the initial problem. In this context, Table 4 shows the total cost growth of the found solution compared to the optimal solution. The lower the chunk count, the higher the mean and the maximum growth: with fewer chunks, the probability that an index of the optimal solution is not a winner of its chunk is higher. The more chunks, the more indexes make it to the final round.
Table 4: Total costs growth with different numbers of chunks compared to the optimal solution in percent (%).
<table>
<thead>
<tr>
<th># Chunks</th>
<th>Mean growth</th>
<th>Max growth</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>0.49 %</td>
<td>0.92 %</td>
</tr>
<tr>
<td>10</td>
<td>0.34 %</td>
<td>0.65 %</td>
</tr>
<tr>
<td>20</td>
<td>0.20 %</td>
<td>0.65 %</td>
</tr>
<tr>
<td>50</td>
<td>0.07 %</td>
<td>0.38 %</td>
</tr>
</tbody>
</table>
Figure 5: Execution times in seconds of basic solutions and with different numbers of chunks (lower is better).
Chunking reduces the total solve time, and fewer outliers with very long execution times occur. The degradation of the calculated solution is surprisingly low: we observe that the total workload cost growth is consistently below 1%.
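As a toy illustration of this procedure, the chunking heuristic can be sketched with a brute-force solver standing in for the ILP. Everything here (the function names, the per-query cost dictionaries, and reusing the full memory budget inside each chunk) is our own simplification, not the paper's AMPL implementation.

```python
from itertools import combinations

def workload_cost(selection, queries):
    """Each query uses its cheapest available index (or no index at all).
    queries: list of dicts mapping index id (or None) to execution cost."""
    return sum(min([costs[None]] + [costs[i] for i in selection if i in costs])
               for costs in queries)

def best_selection(indexes, queries, memory, budget):
    """Exhaustively pick the subset of `indexes` minimizing workload cost."""
    best, best_cost = frozenset(), workload_cost(frozenset(), queries)
    for r in range(1, len(indexes) + 1):
        for subset in combinations(indexes, r):
            if sum(memory[i] for i in subset) > budget:
                continue  # violates the memory budget
            c = workload_cost(frozenset(subset), queries)
            if c < best_cost:
                best, best_cost = frozenset(subset), c
    return best, best_cost

def chunked_selection(indexes, queries, memory, budget, chunks):
    """Solve each chunk separately, then a final round over the winners."""
    winners = []
    size = -(-len(indexes) // chunks)  # ceil division
    for start in range(0, len(indexes), size):
        w, _ = best_selection(indexes[start:start + size],
                              queries, memory, budget)
        winners.extend(w)
    return best_selection(winners, queries, memory, budget)
```

The chunked result can never beat the exhaustive one, which mirrors the cost growth reported in Table 4.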
5.5 Multi-index Configuration
In this section, we evaluate the potential solve time overhead that may be introduced by the multi-index configuration extension, see Section 3.3. To this end, we compare the solve times of the extension with those of the basic approach. For both implementations, we tested multiple settings; a setting is defined by the number of indexes, which is one of 10, 50, 100, 200, 500, or 1,000. Independent of the setting, we assume a memory budget of 100 units and 100 queries. Figure 6 shows the solve times of both approaches in comparison.
Figure 6: Execution times in seconds of the basic approach and the multi-index configuration extension in comparison.
Overall, as expected, the multi-index configuration extension has an execution time overhead compared to the basic solution, which assigns at most one index to each query. However, the additionally required solve time is acceptable. Further, the relative solve time overhead decreases with an increasing number of indexes. One explanation for this effect could be that increasing the number of index candidates also increases the number of dominated indexes excluded by the (pre)solver.
5.6 Stochastic Workloads
In enterprise applications, different use cases produce different workloads for database systems, and each use case has different requirements. The output of some workloads is needed within a defined time, so we can set a maximum execution time as an upper bound for a workload. In other use cases, the costs of the different workloads should be robust without major deviations; therefore, we minimize the variance between the costs of the different workloads.
Furthermore, in other use cases, we do not need robust workloads and the minimization of the total costs is the best decision for database systems. In this section, we compare different objective functions.
The next evaluated index selection problem has \( I = 20 \) indexes, \( Q = 20 \) queries, \( M = 30 \) available memory, and four different workloads. The \( W = 4 \) different workloads occur with different workload intensities, cf. \( k_w \), see Section 3.4:
\[
95 \times W_1 \quad 19 \times W_2 \quad 45 \times W_3 \quad 7 \times W_4
\]
We solve this problem with the following optimization criteria: minimize the expected total costs \( T \), cf. (9); minimize the pure worst-case cost \( L \) (\( a \rightarrow \infty \)), cf. (11); and minimize the pure variance \( V \) (\( b \rightarrow \infty \)), cf. (13). Note that we use these special-case criteria of the proposed mean-risk approaches to emphasize their impact. For the fourth criterion, we combine, as an example, the three criteria via the following weighting factors, cp. (9), (11), (13):
\[
\begin{align*}
\text{minimize} & \quad 100 \cdot T + 100 \cdot L + 1 \cdot V \\
\text{subject to} & \quad u_{q,i} \in \{0, 1\}^{Q \times (I + 1)}; \; L \in \mathbb{R}
\end{align*}
\]
Figure 7 shows the different costs per workload for the four objectives mentioned before. Each bar shows the costs of one workload.
When minimizing the worst-case costs (top left in Figure 7), the costs per workload vary between 129 017 and 154 602. When minimizing the total costs (bottom left), the costs per workload range from 102 907 to 162 762; the deviation between the workloads is higher than for the workload costs optimized with an upper bound. The minimization of the variance objective (top right) harmonizes the four single workload costs: each bar seems to have the same height, and the cost per workload is between 236 571 and 236 611. Thus, the costs for each workload are significantly higher than with any other approach. The combination objective (bottom right) shows a much smaller deviation than the minimization of the worst case and of the total costs. With this objective, the costs per workload are between 166 480 and 172 912.
An index selection calculated with the variance minimization strategy leads to a fair workload cost distribution. However, this extreme approach leads to overall high workload costs, as the mean is not part of the objective: workloads with costs below the mean workload costs worsen the value of the objective function. Naturally, the quadratic objective of the BQP adds some complexity. The total solve time of the pure variance optimization was comparably large (248 s), whereas the solve time for the combined objective was only 0.78 s. Clearly, the total expected costs model and the worst-case costs model had the fastest solution times.
Table 5: Results of the four different objectives regarding the following performance metrics: worst workload costs, variance of the workload costs, and (expected) total costs.
<table>
<thead>
<tr>
<th>Objective</th>
<th>Worst-case costs</th>
<th>Cost variance</th>
<th>Total exp. costs</th>
</tr>
</thead>
<tbody>
<tr>
<td>worst case</td>
<td>∼155 k</td>
<td>∼8.5 G</td>
<td>22.4 M</td>
</tr>
<tr>
<td>variance</td>
<td>237 k</td>
<td>6 188</td>
<td>39.3 M</td>
</tr>
<tr>
<td>total exp. costs</td>
<td>163 k</td>
<td>35 G</td>
<td>19.5 M</td>
</tr>
<tr>
<td>combination</td>
<td>173 k</td>
<td>∼0.3 G</td>
<td>28.0 M</td>
</tr>
</tbody>
</table>
Table 5 shows the metrics of each optimization approach. The optimal total costs are 19 539 400. The worst-case optimization leads to a growth of about 14.5% compared to the total cost optimization. The variance optimization has by far the highest total costs, but the variance is the smallest. Compared to the optimized variance, the worst-case optimization has a variance that is six orders of magnitude worse, and the total cost optimization has a variance that is seven orders of magnitude worse. If we optimize the worst case, we get W4 as the worst workload with costs of 154 602. With the total costs optimization, the worst case is only 5% higher. The combination is also only 11% higher than the minimal worst case, but the variance approach’s worst case is 53% higher.
The variance solution is a fair solution for all workloads. However, for database systems, it can be more important to execute the workloads as fast as possible. Ultimately, it is up to the decision maker to decide on an appropriate objective that meets the desired outcomes.
Some workloads should be executed within a specific time frame because a user is waiting for the results; in this case, an optimization of the worst case is helpful. Another option is to add a constraint for such a single workload to tweak the total cost optimization. However, the application needs to specify this requirement and inform the database system in some way. If it is not important that some workloads are executed within a maximum cost range, the total cost optimization strategy is the best one, because it reduces the total cost of ownership of database systems.
The worst-case optimization and the total costs optimization have similar performance indicators. Both have their specific advantages and are good optimization strategies for the index selection problem.
6 FUTURE WORK
In Section 3.4, we presented alternative objectives that minimize an upper bound to optimize the execution time of the worst workload, or use the mean-variance criterion to achieve robust execution times with small deviations. Note that optimizing the mean-variance criterion also penalizes execution times that are better than the average execution time, although short execution times are desirable from a database perspective. Alternatively, with a mean-semivariance criterion, one could penalize only the execution times that are higher than the average. Utility functions could be used as further risk-averse objectives; the associated non-linear objectives could be handled using piece-wise linear approximations.
In this work, the created models were evaluated using randomly generated synthetic data. In further experiments, the models could also be evaluated with data from real database scenarios to obtain more information on the quality and practical applicability of the proposed models. For this purpose, our implementation could be executed for real database benchmark workloads (e.g., data of the TPC-H or TPC-DS benchmark). A database that supports what-if optimizer calls should be used to anticipate performance improvements of the potential use of individual indexes and to obtain the required model input data (i.e., cost values and memory consumption).
Further evaluations might investigate the scalability of the chunking approach as well as the impact of (i) the assignment of similar indexes to the same chunk and (ii) a chunk's storage capacity, which makes it possible to increase or decrease the number of indexes to be excluded and, in turn, affects both the overall solution quality and the runtime. The results should make it possible to recommend storage capacities and chunk sizes for given workloads.
Finally, our different proposed concepts and approaches should not only be compared to classical (risk-neutral) index selection approaches (for deterministic workloads) but particularly to approaches that are also capable of addressing risk-averse objectives in the presence of multiple potential future workloads as well as transition costs. As such evaluations require the simulation and evaluation of more complex stochastic dynamic workload realizations, we leave these experiments to future work.
7 CONCLUSION
In this work, we considered different variants of index selection problems and proposed solver-based solution concepts. In the basic model, we take one workload consisting of a set of queries and their frequencies into account and decide which subset of indexes to select under a given budget constraint.
In the extended chunking approach, we divided the overall index selection problem into multiple smaller sub-problems, which are solved individually. The selected indexes of these sub-problems are then pooled, and the best selection among these candidates is determined in a final step. We showed that, compared to the optimal solution of the basic problem, this heuristic performs near-optimally and significantly reduces the overall solution time.
For the multi-index configuration extension, the granularity of the possible options was changed from the index level to the index configuration level, where each configuration represents a combination of indexes (e.g., a maximum of two). We showed that our formulation is viable for standard solvers. The results show that the execution time overhead is substantial in small scenarios but decreases with an increasing number of indexes.
The extension to stochastic workloads takes multiple workload scenarios into account. Such different scenarios may be derived from historical data within specific time spans. In this framework, different objectives were used to minimize: (1) the total workload costs, (2) the worst-case workload costs, (3) a mean-variance criterion, and (4) a weighted combination of the first three objectives. Our results show that the targeted effect to avoid bad and uneven performances is achieved.
In the fourth extension with transition costs, we addressed the additional challenge of creating and removing indexes in the presence of an existing configuration while balancing performance and minimal required reconfiguration costs. In our approach, we used an extended penalty-based objective to endogenize creation and removal costs. We find that involving transition costs makes it possible to identify minimally invasive reconfigurations of index selections, which helps to manage them over time, e.g., under changing workloads.
Finally, our concepts, i.e., chunking, multi-index configurations, stochastic workloads, and transition costs, are designed such that they can be combined.
REFERENCES
Table 6: List of parameters and variables.
<table>
<thead>
<tr>
<th>PARAMETERS</th>
<th>DESCRIPTION</th>
</tr>
</thead>
<tbody>
<tr>
<td>$C$</td>
<td>number of index configurations</td>
</tr>
<tr>
<td>$I$</td>
<td>number of indexes</td>
</tr>
<tr>
<td>$M$</td>
<td>index memory budget</td>
</tr>
<tr>
<td>$Q$</td>
<td>number of queries</td>
</tr>
<tr>
<td>$W$</td>
<td>number of workloads</td>
</tr>
<tr>
<td>$a$</td>
<td>maximum workload costs penalty factor</td>
</tr>
<tr>
<td>$b$</td>
<td>variance penalty factor</td>
</tr>
<tr>
<td>$d_{c,i}$</td>
<td>binary parameter whether configuration $c = 0, \ldots, C$ contains the indexes $i = 1, \ldots, I$</td>
</tr>
<tr>
<td>$f_q$</td>
<td>frequency of query $q = 1, \ldots, Q$</td>
</tr>
<tr>
<td>$f_{w,q}$</td>
<td>frequency of query $q = 1, \ldots, Q$ in workload $w = 1, \ldots, W$</td>
</tr>
<tr>
<td>$k_w$</td>
<td>intensity of workload $w = 1, \ldots, W$</td>
</tr>
<tr>
<td>$m_i$</td>
<td>memory consumption of index $i = 1, \ldots, I$</td>
</tr>
<tr>
<td>$s_i$</td>
<td>speedup of index $i = 1, \ldots, I$ in contrast to no index being used</td>
</tr>
<tr>
<td>$t_{q,i}$</td>
<td>execution time of query $q = 1, \ldots, Q$ using index $i = 0, \ldots, I$; $i = 0$ indicates no index is used</td>
</tr>
<tr>
<td>$mk_i$</td>
<td>creation costs of the index $i = 1, \ldots, I$</td>
</tr>
<tr>
<td>$rm_i$</td>
<td>removal costs of the index $i = 1, \ldots, I$</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>VARIABLES</th>
<th>DESCRIPTION</th>
</tr>
</thead>
<tbody>
<tr>
<td>$T$</td>
<td>total expected execution time of all workloads</td>
</tr>
<tr>
<td>$L$</td>
<td>maximum workload costs (worst case)</td>
</tr>
<tr>
<td>$V$</td>
<td>variance of execution times</td>
</tr>
<tr>
<td>$MK$</td>
<td>total creation costs</td>
</tr>
<tr>
<td>$RM$</td>
<td>total removal costs</td>
</tr>
<tr>
<td>$g_w$</td>
<td>execution time of a workload $w = 1, \ldots, W$</td>
</tr>
<tr>
<td>$u_{q,c}$</td>
<td>binary variable whether configuration $c = 0, \ldots, C$ is used for query $q = 1, \ldots, Q$; $c = 0$ represents an empty configuration with no indexes</td>
</tr>
<tr>
<td>$u_{q,i}$</td>
<td>binary variable whether index $i = 0, \ldots, I$ is used for query $q = 1, \ldots, Q$; $i = 0$ indicates that no index is used by query $q$</td>
</tr>
<tr>
<td>$v_i$</td>
<td>binary variable whether index $i = 1, \ldots, I$ is used for at least one query</td>
</tr>
<tr>
<td>$v_i^+$</td>
<td>binary variable whether index $i = 1, \ldots, I$ is used in the new target configuration of the transition extension</td>
</tr>
</tbody>
</table>
RDM Reference Manual
Grant E. Weddell
Department of Computer Science
University of Waterloo
Waterloo, Canada, N2L 3G1
Research Report CS-89-41
September 1989
ABSTRACT
The Resident Database Manager (RDM) is a software tool set, currently under development in the multimedia laboratory of the University of Waterloo, for applications that manipulate memory-resident databases. At present, the tool set consists of two compilers. The first produces access and representation code in a language called PDM from source that specifies the organization of data, and the manner in which it is accessed and changed. The input source for this compiler is expressed in a language called LDM. PDM and LDM are acronyms for Physical Data Model and Logical Data Model respectively. Output from this first compiler can be input to a second compiler, together with applications written in an extended C language called C/DB. C/DB has additional language constructs that permit the direct use of data access specifications written in LDM. The result of this second compiler is pure C code that can then be compiled with a standard C compiler.
The first section begins with an overview of LDM, and then illustrates its use in implementing a reachability algorithm for directed graphs. Section I concludes with a summary of limitations existing on the present tool set. The remaining two sections define the LDM language and the extensions to C in C/DB respectively.
1. INTRODUCTION
1.1. Overview
Almost any software system will have components that access and update information residing in main-memory. For example, a language compiler has procedures to maintain the so-called symbol table, which is essentially a memory-resident database of parsed source code, intermediate code, and so on. This manual is about a software tool set that helps with the development of such components in a way similar to how Yacc, for example, helps with the development of other components responsible for parsing.
This research was supported in part by the Natural Sciences and Engineering Research Council of Canada, Bell-Northern Research Ltd., and the University of Waterloo.
The tool set presently consists of two compilers, and is called RDM, an acronym for Resident Database Manager. Input to the first compiler is a list of specifications in an object-oriented database language called LDM, another acronym for Logical Data Model. Most specifications are an instance of one of four sublanguages that characterize four categories of information about a database. Their main features are as follows:
- A data definition sublanguage (DDL) is used to describe the logical organization of data. The DDL manifests a data model that generalizes the relational model in two ways. First, the notions of relation and domain are combined into a notion of class by introducing surrogate keys for tuples, and by allowing attributes to be tuple-valued. Second, classes can be organized in a generalization taxonomy, whereby more specialized classes will automatically inherit attributes of more general classes. The taxonomy is established by declaring any number of immediate superclasses for each class (LDM supports so-called multiple inheritance).
- A data manipulation sublanguage (DML) is used to describe how the data is used. Data access requests are expressed in the query language component of the DML, while data update requests are expressed in its transaction language component. The query language is a SQL-like language that has been generalized for access to classes. The transaction language allows the user to specify simple combinations of update operations on the database. One operation allows the user to change the identity of an object (effectively changing its type).
- A data statistics language (DSL) is used to specify statistical information about a database. Using the DSL, a user can supply estimates of the number of objects in a class, how often a query or transaction is invoked, the relative cost of space and time, and so on. The statistical information is used by the component of the compiler responsible for performance prediction.
- A storage definition language (SDL) can be used to override some of the decisions made by the compiler on internal encoding of data. In particular, a user can specify a selection of the indices to be maintained in order to support searching within the database, and a selection of storage managers for managing the space used by objects.
Output from the first compiler is access and representation code in a language called PDM (for Physical Data Model). PDM code can then be input to the second compiler along with applications written in an extended C language called C/DB. C/DB has additional language constructs that permit the direct use of data access specifications originally written in LDM. The result of this second compiler is pure C code that can then be compiled with a standard C compiler.
Any number of C/DB source files can access and update a common database by including the same PDM source file, and then by linking their object code files. Also, any number of different databases can be accessed by a single C/DB source file. Thus a rudimentary capability exists for so-called separate compilation; that is, an ability to simultaneously develop separate parts of a software system. A summary of overall dataflow for the tool set is given in Figure 1.
1.2. Example
Assume a company is using RDM to develop a software system called GraphLab, which will consist of a number of programs for manipulating graphs. The specification for one of the programs, to be called FindRV, is as follows:
**Functional Overview** — FindRV inputs a directed graph $G$ together with a distinguished start vertex $v$. Output is a list of all vertices in $G$ reachable from $v$.
**Input/Output Format** — Each vertex is labelled with a unique identifier consisting of a non-blank sequence of up to twenty characters. The first line of input is the label $l(v)$ of the start vertex $v$. Each remaining line of input consists of a pair of identifiers "Id$_1$ Id$_2$" representing a new arc $u \rightarrow v$ in $G$ where $l(u)$=Id$_1$ and $l(v)$=Id$_2$. No two lines have the same pair of identifiers. Output is a list of identifiers on separate lines.
**Performance Requirements** — No limitations should exist on the size of $G$, beyond those imposed by the size of main memory. Running time should be at most $O(|A| \log |A|)$ where $|A|$ is the number of arcs in $G$.
Also assume that architectural design for FindRV has resulted in the selection of a marking algorithm for finding the reachable vertices. The following is a description of the algorithm by Barstow [1]:
**Algorithmic Details** — Mark the start vertex $v$ as a boundary vertex and mark the rest of the vertices in $G$ as unexplored. If there are any vertices marked as boundary vertices, select one, mark it as explored, and mark each of its unexplored successors as a boundary vertex. Repeat until there are no more boundary vertices. The set of vertices marked as explored is the desired set of reachable vertices.
The LDM specifications and C/DB source code that implement \textit{FindRV} are given in Figures 2 and 3 respectively. In the former case, keywords are given in a boldface type. Also, all lines have been numbered to help with their referral in the following discussion.
Almost all the LDM specifications fall in one of the four categories of information outlined above. The only exception is the \texttt{schema} statement in line 1, which serves to name the specifications that follow. (There is further significance to the statement that will become clear during our discussion of the C/DB source.) The DDL source, lines 3 to 10, indicates that vertices and arcs in $G$ are represented as instances of the classes \texttt{Vertex} and \texttt{Arc} respectively. Each vertex will have two property values encoding its label and mark (the latter indicating its \textit{explored} status), and each arc will also have two property values encoding its source and destination vertices. The constraints (lines 4 and 8) imply that each vertex has a unique label, and that no two arcs have the same source and destination vertices.
\begin{figure}[h]
\begin{verbatim}
1. schema FindRV
2. % DDL Specification
3. class Vertex properties Label, Mark
4. constraints Id determined by Label
5. property Label on String maxlen 20
6. class Mark
7. class Arc properties FromVertex, ToVertex
8. constraints Id determined by FromVertex, ToVertex
9. property FromVertex on Vertex
10. property ToVertex on Vertex
11. % DML Specification
12. query VerticesWithMark given M from Mark
13. select V from Vertex where V.Mark = M
14. query VertexWithMark given M from Mark
15. select one V from Vertex where V.Mark = M
16. query VertexWithLabel given L from Label
17. select one V from Vertex where V.Label = L
18. query ConnectedVertices
19. given VFrom, M from Vertex, Mark
20. select VTo from Vertex
21. where VTo.Mark = M and
22. Arc {VFrom as FromVertex, VTo as ToVertex}
23. transaction NewMark
24. declare M from Mark
25. Insert M
26. return M
27. transaction ChgMark given V, M from Vertex, Mark
28. V.Mark := M
29. transaction NewVertex given L, M from Label, Mark
30. declare V from Vertex
31. Insert V (V.Label := L; V.Mark := M)
32. return V
33. transaction NewArc
34. given VFrom, VTo from Vertex, Vertex
35. declare A from Arc
36. Insert A (A.FromVertex := VFrom; A.ToVertex := VTo)
37. % DSL Specification
38. size Vertex 100
39. size Mark 3
40. size Arc 200
41. Index VertexList on Vertex
42. of type distributed list on Mark
43. Index VertexTree on Vertex
44. of type binary tree ordered by Label asc
45. Index ArcList on Arc
46. of type distributed list on FromVertex
47. store VertexStore of type dynamic storing Vertex
48. store ArcStore of type dynamic storing Arc
49. store MarkStore of type static 3 storing Mark
\end{verbatim}
\caption{LDM Source for \textit{FindRV}}
\end{figure}
The DML source lists specifications for four queries and four transactions, which altogether constitute the database access and update requirements of \textit{FindRV}. Note that each query and transaction is named. An outline of their definitions follows.
\textbf{VerticesWithMark}
(lines 12 and 13) The first query accepts an instance of the Mark class as input, and returns all vertices with this instance as the value of their Mark property.
\textbf{VertexWithMark}
(lines 14 and 15) This query also accepts an instance of the Mark class as input, but in this case returns an arbitrarily chosen vertex satisfying the same constraint on the value of its Mark property (if one exists).
\textbf{VertexWithLabel}
(lines 16 and 17) The query returns a vertex with a given label value if such a vertex exists. Note that no more than one such vertex ever exists, since Label is a key property of the class Vertex, as specified in the data definition.
\textbf{ConnectedVertices}
(lines 18 to 22) The last query accepts two things as input: a source vertex \textsc{VFrom}, and an instance \textsc{M} of the Mark class. The query returns all vertices \textsc{VTo} that satisfy two constraints. First, each must have \textsc{M} as the value of its Mark property. And second, there must exist an arc with \textsc{VFrom} as the value of its \textsc{FromVertex} property and \textsc{VTo} as the value of its \textsc{ToVertex} property.
\textbf{NewMark}
(lines 23 to 26) The first of the transactions creates and returns a new instance of the Mark class.
\textbf{ChgMark}
(lines 27 and 28) Transaction ChgMark changes the value of the Mark property for an input vertex to an input mark.
\textbf{NewVertex}
(lines 29 to 32) Transaction NewVertex creates a new vertex object, initializes the values of its Mark and Label properties, and then returns the new vertex.
\textbf{NewArc}
(lines 33 to 36) Transaction NewArc creates a new arc object and initializes its property values to the input vertices.
The DSL source (lines 38 to 40) consists of estimates of the expected number of instances of each class for a typical invocation of \textit{FindRV}. This information is of particular use when the compiler invokes the query optimizer on the query ConnectedVertices. At some point in optimization, a choice must be made between an evaluation strategy that involves scanning all vertices with a given mark value, and an evaluation strategy that involves scanning all vertices connected by an arc to another vertex. The statistics indirectly establish that the latter strategy is to be preferred, since only 200/100 = 2 vertices are estimated to qualify, in comparison to 100/3 ≈ 33 vertices with the former strategy.
The SDL source (lines 41 to 49) specifies a selection of three indices and three storage managers. The first index on the Vertex class can be used for finding all vertices, or for finding all vertices with a given mark as the value of their Mark property. The second index, also on the Vertex class, can be used like the first for finding all vertices, or for finding a vertex with a
given string as the value of its Label property. The third index on the class Arc can be used for finding all arcs, or for finding all arcs with a given vertex as the value of their FromVertex property. Note that any number of indices can be declared for a class. In this case, two are specified for the Vertex class, one for the Arc class and none for the Mark class.
The first two storage managers manage storage for vertex and arc objects respectively. Since they are declared to be dynamic, no limits will exist on the number of such objects, beyond those implied by the amount of main memory. The third storage manager is declared to be static with enough room for at most three mark objects — exactly the number needed by the algorithm for computing reachable vertices. One advantage of static storage for a class is that it permits a more compact encoding of values for properties defined on the class. In this case, encoding values for the Mark property of vertex objects will require at most two bits.
The FindRV program is implemented as a single main function in Figure 3. The schema statement in line 3 essentially serves as a place holder for the global type and data declarations chosen by the LDM compiler as the means of encoding a database. Any number of different databases can be manipulated by including a schema statement for each. Also, any number of C/DB source files can access the same database by prefixing the keyword extern to the keyword schema in all but one of the files. (An extern schema is eventually replaced by global type declarations only.)
Lines 7 and 8 in the function body declare a number of variables for referring to instances of the Vertex and Mark classes. Line 9 declares three variables for referring to the built-in String class. This follows since the property name following the prop keyword (i.e. Label) is string-valued.
Lines 11 to 13 create three instances of the Mark class, and bind values for their surrogate keys to the three mark variables. The remainder of the body consists of three parts: lines 17 to 29 for inputting the graph, lines 33 to 37 for computing reachable vertices according to the above algorithm, and line 41 for producing a list of the labels of all reachable vertices. We conclude our discussion of the example with a few comments that should suffice to clarify the C/DB source.
- An invocation of an LDM transaction that returns a value occurs in lines 11, 12, 13, 18, 23 and 26. An invocation of an LDM transaction that does not return a value occurs in lines 28, 35 and 36. The values returned are the surrogate keys for newly created objects.
- An invocation of an LDM query that returns at most a single result occurs in lines 22, 25 and 33 with new forms of C if and while statements. A new form of C for statement is used to invoke queries returning a set of values in lines 36 and 41.
- A new @ operator is used in line 41 as the means of property value access for property variables. In this case, the expression V@Label denotes the value of the Label property for the object having V bound to its surrogate key value.
1.3. Limitations
There are several limitations with the data model, and with the capabilities of the present version of the C/DB compiler. With respect to the data model, LDM presently assumes complete knowledge of the database. For example, all values for object properties are assumed to be known — no null-unknown values are permitted.
```
1.  #include <stdio.h>
2.  #include <string.h>
3.  schema FindRV
4.
5.  main()
6.  {
7.      prop Vertex VStart, VFrom, VTo, V;
8.      prop Mark Unexplored, Boundary, Explored;
9.      prop Label VStartLabel, VFromLabel, VToLabel;
10.
11.     Unexplored = NewMark();
12.     Boundary = NewMark();
13.     Explored = NewMark();
14.
15.     /* input the graph */
16.
17.     scanf("%s", VStartLabel);
18.     VStart = NewVertex(VStartLabel, Boundary);
19.
20.     while (scanf("%s %s", VFromLabel, VToLabel) != EOF)
21.     {
22.         if V in VertexWithLabel(VFromLabel) VFrom = V;
23.         else VFrom = NewVertex(VFromLabel, Unexplored);
24.
25.         if V in VertexWithLabel(VToLabel) VTo = V;
26.         else VTo = NewVertex(VToLabel, Unexplored);
27.
28.         NewArc(VFrom, VTo);
29.     }
30.
31.     /* find all reachable vertices */
32.
33.     while V in VertexWithMark(Boundary)
34.     {
35.         ChgMark(V, Explored);
36.         for VTo in ConnectedVertices(V, Unexplored) ChgMark(VTo, Boundary);
37.     }
38.
39.     /* print the reachable vertices */
40.
41.     for V in VerticesWithMark(Explored) printf("%s\n", V@Label);
42. }
```
**Figure 3. C/DB Source for FindRV**
Both compilers assume DML specifications include only trusted transactions. This means that neither compiler will generate code for consistency checking of either inherent or explicit constraints. (However, it is still very worthwhile for a user to declare constraints in the DDL. The constraints are used extensively by the query optimizer.) The tool set does not at present have any built-in support for managing concurrency. It is incumbent on the user to manage concurrent database access by multiple processes.
The current C/DB compiler supports only the list and distributed list index types, and only the dynamic store type. As a consequence, any query with an order by clause is currently not supported. In the full LDM transaction language, the identity of an object can be changed with an assignment of the form
<VarName>.Id := <Term>
This form of assignment is also not currently supported by the existing C/DB compiler.
2. THE LDM LANGUAGE
This part of the manual is an informal definition of LDM. We begin with the data definition language (DDL), and then define the query and transaction languages which comprise the data manipulation language (DML). The final two parts of this section define the data statistics language (DSL) available for expressing statistical estimates and cost model parameters for a database, and the storage definition language (SDL) for specifying a selection of indices and storage managers for data objects.
Throughout this section, examples will refer to a hypothetical software system that manages information about students, teachers and courses at some university. An enterprise view of the relevant data is illustrated by the entity-relationship diagram in Figure 4. Two features of the diagram are worth noting. First, the diagram has existence constraints for Course and GradStudent objects with respect to TaughtBy and Supervisor relationships respectively. This implies in the former case, for example, that each course object is always TaughtBy related to one and only one Teacher object. And second, the diagram suggests in several places that entity types can be declared as subtypes of other entity types (by using Isa triangles). For example, the Isa link between Student and GradStudent implies that some Student objects may also be GradStudent objects. The Isa link between Person, Student and Teacher implies three things: that some Person objects may also be Student objects, that some Person objects may also be Teacher objects, and that no object is just a Person. This latter condition is a consequence of there existing more than one incoming arc to the Isa link. If this is not desired, then two separate Isa links can be used.
Language syntax will be specified using BNF, with the following additional conventions: square brackets "[..]" are used to indicate optional arguments, braces "{..,<marker>}" to indicate options that may be repeated one or more times, separated by <marker> (either a blank, comma or semicolon), and keywords are indicated in a boldface type.
2.1. LDM Program Format
All LDM programs have the following form
```
<LDMProgram> ::= schema <SchemaName> {<DDLSpec>," "} {<NonDDLSpec>," "}

<DDLSpec> ::= <ClassDefn> | <PropertyDefn>

<NonDDLSpec> ::= <DMLSpec> | <DSLSpec> | <SDLSpec>

<DMLSpec> ::= <QueryDefn> | <TransactionDefn>
```
where <SchemaName> is an identifier naming the schema. Note that all DDL specifications must precede any other specifications. The LDM source for the University database begins with
Figure 4. ER Diagram of a University Schema
**schema** University
...
2.2. **Data Definition**
Data is described by specifying a number of *classes* and *properties*.
```
<ClassDefn> ::=
    class <ClassName> [isa {<ClassName>,","}]
    [properties {<PropertyName>,","}]
    [constraints {<Constraint>,","}]

<PropertyDefn> ::=
    property <PropertyName> on <ClassName> |
    property <PropertyName> on String maxlen <Integer> |
    property <PropertyName> on Integer range <Integer> to <Integer> |
    property <PropertyName> on Real |
    property <PropertyName> on DoubleReal
```
Note that properties are declared separately in LDM, and that all definitions may occur in any order. For a given schema, no two classes can have the same name, and no two properties can
have the same name. As a convenience, a property definition of the form
```
property C on C
```
is assumed for each class C, whenever such a property is not already declared by the user.
There are two kinds of constraints that may be specified when defining a class: path functional dependencies (PFDs), and covers.
```
<Constraint> ::=
    <PathFunction> determined by {<PathFunction>,","} |
    cover by {<ClassName>,","}
```
The meaning of PFD and cover constraints will be explained by example below. For the university application, class and property definitions corresponding to the above E-R diagram are as follows.
```
class Person
properties Name, Age
constraints
Id determined by Name
cover by Student, Teacher
class Student isa Person
class Teacher isa Person
constraints cover by GradStudent, Professor
class Professor isa Teacher
class GradStudent isa Student, Teacher
properties Supervisor
class Course
properties Name, TaughtBy
constraints Id determined by Name
class EnrolledIn
properties Student, Course, Grade
constraints Id determined by Student, Course
property Name on String maxlen 20
property Age on Integer range 16 to 75
property Supervisor on Professor
property TaughtBy on Teacher
```
The PFD and cover constraints for the Person class assert that no two Person objects have the same value for their Name property, and that each Person object must also be a Student or Teacher object (or both). For a more thorough discussion of PFD constraints, see [3,4].
2.3. Data Manipulation — The Query Language
We start by giving the full grammar for the query language, and then explain its constructs by giving example queries on the University schema. In general, syntax for both queries and transactions has been chosen so as to resemble the SQL query language wherever possible.
A query has the general form
```
<QueryDefn> ::=
    query <QueryName> [given <VarDecl>]
    select [one] <VarDecl>
    [where <Predicate>]
    [order by {<OrderItem>,","}]
    [precomputed]

<OrderItem> ::= <Term> asc | <Term> desc

<VarDecl> ::= {<VarName>,","} from {<PropertyName>,","}
```
Note that the <VarName> and <PropertyName> lists must associate one-to-one. A declaration of the form

```
V1, V2 from P1, P2
```

defines two variables V1 and V2 that may have any values that are also legal for properties P1 and P2 respectively. Terms and predicates are defined as follows.
```
<Term> ::=
    <Integer> |
    <Real> |
    "<String>" |
    ["-"] <VarName> ["." <PathFunction>] |
    <Term> <ArithmeticOperator> <Term> |
    ( <Term> )

<PathFunction> ::= Id | {<PropertyName>,"."}

<ArithmeticOperator> ::= + | - | * | / | mod
```
```
<Predicate> ::=
    <Term> <ComparisonOperator> <Term> |
    <VarName> has <MaxOrMin> <PathFunction> [where <Predicate>] |
    <ClassName> "{" {<Term> as <PathFunction>,","} "}" |
    not <Predicate> |
    exist <VarDecl> [where <Predicate>] |
    for all <VarDecl> <Predicate> |
    <Predicate> <LogicalOperator> <Predicate> |
    ( <Predicate> )

<ComparisonOperator> ::= = | < | <= | > | >= | <>

<MaxOrMin> ::= max | min

<LogicalOperator> ::= implies | and | or
```
The standard precedence for operators in terms and predicates is assumed. For term operators this order of precedence is: binary addition "+" and subtraction "-" (weakest binding); multiplication "*", division "/" and modulus \text{mod}; unary minus "-"; and finally property value access "." (strongest binding). For predicate operators the order is: existential quantification \text{exist} and universal quantification \text{for all} (weakest binding); implication \text{implies}; disjunction \text{or}; conjunction \text{and}; negation \text{not}; and finally the arithmetic comparison operators and the two special forms for expressing maximum and minimum value criteria \text{has max} and \text{has min}, and for expressing \text{atomic predicate} conditions "<ClassName>{\cdots}" (strongest binding). Here are some example queries on the University schema.
**Ex 1.** (getting all objects in a class extension) Retrieve all people objects.
```
query People
select P from Person
```
**Ex 2.** (specifying conditions and query parameters) Retrieve all student objects older than 30 that are enrolled in a given course.
```
query OldStudentsInCourse given C from Course
select S from Student
where S.Age > 30 and EnrolledIn {S as Student, C as Course}
```
This query can also be specified as follows.
```
query OldStudentsInCourse given C from Course
select S from Student
where S.Age > 30 and
    exist E from EnrolledIn
    where E.Student = S and E.Course = C
```
**Ex 3.** (subqueries) Retrieve all integers that occur as the age value of some student.
```
query StudentAges
select A from Age
where exist S from Student where S.Age = A
```
**Ex 4.** (use of path functions and nondeterminism) Retrieve a graduate student object that is supervised by some professor with a given name.
```
query GradWithSupervisorName given N from Name
select one G from GradStudent where G.Supervisor.Name = N
```
**Ex 5.** (sorted retrieval) Retrieve all graduate student objects in major order by their supervisor's name, and minor order by their own name.
```
query Graduates
select G from GradStudent
order by G.Supervisor.Name asc, G.Name asc
```

**Ex 6.** (use of max and min) Retrieve all undergraduate objects who received the highest grade in some course.

```
query SmartUndergrads
select S from Student
where not GradStudent {S as Id} and (
    exist C, E from Course, EnrolledIn
    where E.Student = S and
        E has max Grade where E.Course = C)
```

The query can also be specified in either of the following two ways.

```
query SmartUndergrads
select S from Student
where not (exist G from GradStudent where G = S) and (
    exist C, E from Course, EnrolledIn
    where E.Student = S and
        E.Course = C and (
        for all E1 from EnrolledIn (
            E1.Course = C implies E1.Grade <= E.Grade)))
```

```
query SmartUndergrads
select S from Student
where not (exist G from GradStudent where G = S) and (
    exist C, E from Course, EnrolledIn
    where E.Student = S and
        E.Course = C and not (
        exist E1 from EnrolledIn
        where E1.Course = C and E1.Grade > E.Grade))
```

**Ex 7.** (complex queries) Retrieve an undergraduate object who received a grade in a course higher than any graduate student also enrolled in the course.

```
query PossibleGrad
select one S from Student
where not GradStudent {S as Id} and (
    exist E1 from EnrolledIn
    where E1.Student = S and (
        for all E2 from EnrolledIn (
            (E2.Course = E1.Course and GradStudent {E2.Student as Id})
            implies E2.Grade < E1.Grade)))
```

**Ex 8.** (forcing projections) Retrieve and temporarily store all student objects enrolled in a course taught by a teacher with a given name.

```
query StudentsTaughtByTeacher given N from Name
select S from Student
where EnrolledIn {S as Student, N as Course.TaughtBy.Name}
precomputed
```
When specified, a precomputed clause will force the results of a query to be precomputed and temporarily stored before any action on each result is permitted. This may be necessary if the action intended for a result can invoke a transaction that interferes with a query evaluation strategy (referred to as the Halloween problem). An example with the above is a transaction that deletes each student object in the returned result.
2.4. Data Manipulation — The Transaction Language
Again, we start by giving the full grammar for the transaction language, and then illustrate its use with example transactions for the University schema.
```
<TransactionDefn> ::=
    transaction <TransactionName> [given <VarDecl>]
    [declare <VarDecl>]
    {<Statement>,";"}
    [return <Term>]

<Statement> ::=
    insert {<VarName>,","} ["(" {<InitStatement>,";"} ")"] |
    delete {<VarName>,","} |
    <Term> := <Term>

<InitStatement> ::= <VarName> "." <PropertyName> := <Term>
```
Here are some example transactions for the University schema.
**Ex 9.** (updating property values) Change the teacher assigned to a given course object to another given teacher.

```
transaction AssignTeacher given T, C from Teacher, Course
C.TaughtBy := T
```

**Ex 10.** (creating new objects) Enroll a given student object in a given course object.

```
transaction EnrollStudent given S, C from Student, Course
declare E from EnrolledIn
insert E (E.Student := S; E.Course := C)
```

**Ex 11.** (creating and returning objects) Create and return a new course object with a given name and teacher.

```
transaction NewCourse given T, N from Teacher, Name
declare C from Course
insert C (C.TaughtBy := T; C.Name := N)
return C
```

**Ex 12.** (deleting an object from the database) Delete a given student object from the database.

```
transaction RemStudent given S from Student
delete S
```

**Ex 13.** (changing the type of an object) Enter a given student object in graduate school, assigning a professor object as an initial supervisor.

```
transaction BecomeGrad given S, P from Student, Professor
declare G from GradStudent
insert G (G.Name := S.Name; G.Age := S.Age; G.Supervisor := P);
G.Id := S
return G
```
Note in this last example that the assignment statement "G.Id := S" will cause the identity of the newly created G object to be changed to the identity of the S object. As a consequence, the S object is deleted from the database, and any previously existing references to S will now be to G.
2.5. Data Statistics
At present, a single form of statistic can be specified for classes.
<DSLSpec> ::= size <ClassName> <Integer>
A size statistic for a class corresponds to an estimate of the expected number of objects in the class that are not also in any subclasses. For example, assume size estimates for the University database have been specified as follows.
size Student 500
size GradStudent 100
size Course 200
size EnrolledIn 4000
size Professor 50
The statistics imply that one can expect a total of 600 student objects: 100 that are GradStudent objects, and 500 that are not. Note that a nonzero size estimate for a class having one or more cover constraints is therefore nonsensical: every object in such a class also belongs to one of the covering subclasses.
2.6. Storage Definition — Store Management
In LDM, each class must be associated with a store manager from which space is allocated when objects are created for the class, and to which space is released when objects that were created for that class are deleted. The user is currently responsible for declaring store managers using the following language.
<SDLSpec> ::= store <StoreName> of type <StoreType>
storing {<ClassName>,","}
<StoreType> ::= dynamic | static <Integer>
There are two types of store managers that may be declared: dynamic store, and static store. A
class associated with a dynamic store manager will have no limit on the number of objects that may be created, beyond limits imposed by the available memory. This is not true of static store managers; the total number of objects that may be created for all associated classes is limited by the static store manager's size. However, the advantage in this case is that encoding property values for properties defined on any of the associated classes will usually require much less space. To ensure such an encoding is possible, a constraint on a static store specification is that the set of associated classes must satisfy the condition that all subclasses of any element of the set are also in the set.
Storage management for the University database might be specified as follows.
store PersonStore of type dynamic storing Student, GradStudent
store ProfStore of type static 60 storing Professor
store EnrollStore of type dynamic storing EnrolledIn
store CourseStore of type dynamic storing Course
Note that storage for all Student and GradStudent objects is managed in a common pool. Also note that the specification of ProfStore implies that no more than 60 professor objects will exist at one time. This permits a more compact encoding of Supervisor property values: only 6 bits of store are required for each, in comparison to the number of bits necessary to encode pointer values.
Free space managed by store managers satisfies two conditions. First, all blocks of memory associated with a particular manager are the same size. This implies internal fragmentation (or memory loss) if more than one class is associated with the manager, since smaller objects are still allocated enough space for the largest possible object. And second, space once allocated to a given store manager becomes unavailable for use by any other store manager. This can cause external fragmentation, for example, in a case where a large number of objects for one class are created, then deleted, and then a large number of objects for another class associated with a different store manager are created. The specification of store managers therefore requires balancing possible internal and external memory fragmentation, and the need for data compaction.
2.7. Storage Definition — Indices
Access to class extensions is achieved by declaring a number of indices, which at present is also the responsibility of the user. Each index is associated with a unique class and is declared to be of a particular type. For example, in the University database, a linked list of person objects can be declared with the form
index PersonList on Person of type list
The index, called PersonList, establishes the existence of a doubly linked list of all person objects at run-time. Note that the list will include all objects created in any subclasses of Person, such as Student objects, Teacher objects, and so on.
Any number of indices (including none) may be declared for a given class. The language for specifying an index is as follows.
<SDL:Spec> ::=
index <IndexName> on <ClassName>
of type <IndexType>
<IndexType> ::=
list |
array <Integer> ordered by {<SearchCond>,","} |
binary tree ordered by {<SearchCond>,","} |
distributed list on <PathFunction> |
distributed binary tree on <PathFunction> ordered by {<SearchCond>,","}
<SearchCond> ::= <PathFunction> asc | <PathFunction> desc | <ClassName>
As the language suggests, there are currently five types of index that may be declared. The list and binary tree types result in two additional pointer values for each object, which encode a doubly linked list in the first case, and a tree in the second. An array index is a static index corresponding to a FORTRAN-like fixed-size array of object identifiers. In this case, a binary search is used to find entries that satisfy given search conditions. The distributed list and distributed tree indices require the user to specify an additional path function, which must also satisfy the constraint that its range class is user-defined. A distributed index is partitioned among the objects in the range class of this path function, and each partition behaves like its undistributed counterpart.
There are three kinds of ordered search conditions that may be specified for an array, a binary tree or a distributed binary tree index type. The first has the form "<PathFunction> asc", and represents an ordering in which index entries occur in ascending order of their value for <PathFunction>. An ascending order for integer values, real values and string values has the obvious interpretation. An ascending order for all other kinds of objects is defined internally (and therefore legal), but is not meaningful to a user. The second kind of ordered search condition has the form "<PathFunction> desc", and represents an ordering in which index entries occur in descending order of their value for <PathFunction>. The third kind of ordered search condition has the form "<ClassName>", and is referred to as a subclass sort. A subclass sort on class C is two-valued: zero if an object in the index is not also in class C, and one otherwise.
Examples of indices that might be declared for the University database are as follows.
index PersonTree on Person of type binary tree
ordered by Student, GradStudent, Supervisor.Name asc
index TeacherTree on Teacher of type binary tree
ordered by Name asc
index EDistList1 on EnrolledIn of type distributed list on Course
index EDistList2 on EnrolledIn of type distributed list on Student
index CDistList on Course of type distributed list on TaughtBy
Five indices are declared, of which two are binary trees and three are distributed lists. The first index, called PersonTree, illustrates the use of subclass sort conditions. For example, the first
subclass sort on Student is zero-valued for Person objects that are not also Student objects, and one-valued otherwise. The query optimizer will choose index PersonTree as the best possible means of evaluating any of the following four queries.
\begin{verbatim}
query Q1 select P from Person
query Q2 select S from Student
query Q3 select G from GradStudent
query Q4 select G from GradStudent where G.Supervisor.Name = "Fred"
\end{verbatim}
For a more complete discussion of memory-resident indices, see [5].
3. THE C/DB LANGUAGE
In order to access a database, the C language has been extended to include a number of additional constructs with the following purposes:
- declaring access to a schema,
- declaring object-valued variables,
- accessing the value of an object property, and
- invoking queries.
This extended language is called C/DB. Note that no extensions to syntax were needed to support invoking LDM transactions. These eventually become separate C functions, and are invoked in the same way as any other C functions.
Our discussion of C/DB will center on defining the extensions to the C grammar given in [2]. The syntax notation we use adheres to the notation adopted in the reference (and therefore differs from the conventions used in the previous section). In particular, syntactic categories (non-terminals) are indicated by italic type, and keywords in bold type. An optional keyword is indicated by subscripting with opt. The necessary extensions are straightforward, and the reader is encouraged to reexamine the C/DB source in Figure 3 (in the first section) for examples of their use.
Access to an LDM schema is accomplished with the use of an additional form of data-definition in an external-definition for a program.
\begin{verbatim}
data-definition:
...
extern_opt schema identifier ;
\end{verbatim}
Subsequent access to properties, classes, queries and transactions defined in the LDM schema with the name \textit{identifier} is then enabled. The \textbf{extern} modifier may be used if more than one program accesses the same LDM schema.
Object-valued variables are declared with a new form of type-specifier.
\begin{verbatim}
type-specifier:
...
prop identifier
\end{verbatim}
The \textbf{identifier} must correspond to the name of an LDM property. Translation of this form of \textit{type-specifier} by the C/DB compiler depends on the declaration of the LDM property itself. A property declared on a user defined class
property <PropertyName> on <ClassName>
translates to a variety of special forms (e.g. pointer types), which cannot be assumed by the user. A property declared on the built-in String, Integer, Real or DoubleReal classes translates in the obvious manner to other forms of type-specifiers, such as short, int, long, float, double, and (array of) char.
For variables referring to objects in a user defined class, an additional form of primary is available for accessing property values.
primary:
...
primary @ identifier
Note that primary must be an expression with a type "prop P" for which identifier is a legal property of the class on which P is defined.
New forms of if, while and for statements have been added to support the invocation of queries.
statement:
...
if query-call statement
if query-call statement else statement
while query-call statement
for query-call statement
query-call:
identifier_1
identifier_1 ( expression-list )
identifier-list in identifier_1
identifier-list in identifier_1 ( expression-list )
identifier-list:
identifier_2
identifier_2, identifier-list
In a query-call, identifier_1 is the name of the query, expression-list a sequence of argument expressions that are bound in sequence to the given variables of the named query (if any), and identifier-list a sequence of identifier_2 which are bound in sequence to the select variables of the named query (if any).
The named query in if and while statements must have the form
query ... select one ...
in which a single solution to the query is non-deterministically selected. If such a solution exists, then the first form of if binds each identifier_2 in identifier-list to the select values of the solution, and evaluates the argument statement. The second form of if operates similarly, except that the second argument statement is evaluated if no solution to the query is found. If such a solution exists in the case of a while, then each identifier_2 in identifier-list is bound to the select values of the solution, the argument statement is evaluated, and then this process is repeated.
The new form of for may take any query as an argument. In this case, the argument statement is evaluated for each query solution. The order in which solutions are considered will satisfy the "order by" clause of a query (if specified).
Finally, all forms of query-call must satisfy some typing conditions on expression-list and identifier-list. To illustrate, consider an LDM query of the form
query Q
given $V_{1,1}, \ldots, V_{1,m}$ from $P_{1,1}, \ldots, P_{1,m}$
select $V_{2,1}, \ldots, V_{2,n}$ from $P_{2,1}, \ldots, P_{2,n}$
where
...
together with a C/DB for statement of the form
for $V_{3,1}, \ldots, V_{3,n}$ in Q ($Exp_1, \ldots, Exp_m$) ...
Also assume the types of variable $V_{3,i}$ and expression $Exp_j$ are "prop $P_{3,i}$" and "prop $P_{4,j}$", respectively. The typing conditions are as follows.
(a) The class on which property $P_{4,j}$ is defined must be a subclass of the class on which property $P_{1,j}$ is defined, $1 \leq j \leq m$.
(b) The class on which property $P_{3,i}$ is defined must be a subclass of the class on which property $P_{2,i}$ is defined, $1 \leq i \leq n$.
For built-in classes, such as Integer, the subclassing constraints correspond to assignment compatibility.
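For intuition, the subclassing constraints behave like ordinary assignment compatibility in an object-oriented language. A minimal Java analogue (the class names here are hypothetical and not part of the LDM schema):

```java
// Illustrative Java analogue of the C/DB typing conditions: an argument
// whose static type is a subclass may be bound to a parameter of a
// superclass type, just as the class of each argument property must be
// a subclass of the class of the corresponding given variable.
public class TypingAnalogy {
    public static class Person { }
    public static class Student extends Person { }

    // A "query" whose given variable ranges over Person; any expression
    // whose class is a subclass of Person may legally be passed.
    public static String describe(Person p) {
        return (p instanceof Student) ? "student" : "person";
    }
}
```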
4. References
5. G. E. Weddell, Selection of indices to memory-resident entities for semantic data models, to appear in IEEE Transactions on Knowledge and Data Engineering, (June 1989).
2.3 Quicksort
- quicksort
- selection
- duplicate keys
- system sorts
Two classic sorting algorithms
Critical components in the world’s computational infrastructure.
- Full scientific understanding of their properties has enabled us to develop them into practical system sorts.
- Quicksort honored as one of top 10 algorithms of 20th century in science and engineering.
Mergesort.
- Java sort for objects.
- Perl, C++ stable sort, Python stable sort, Firefox JavaScript, ...
Quicksort.
- Java sort for primitive types.
- C qsort, Unix, Visual C++, Python, Matlab, Chrome JavaScript, ...
public static void quicksort(char[] items, int left, int right)
{
int i, j;
char x, y;
i = left; j = right;
x = items[(left + right) / 2];
do
{
while ((items[i] < x) && (i < right)) i++;
while ((x < items[j]) && (j > left)) j--;
if (i <= j)
{
y = items[j];
items[j] = items[i];
items[i] = y;
i++;
j--;
}
} while (i <= j);
if (left < j) quicksort(items, left, j);
if (i < right) quicksort(items, i, right);
}
2.3 Quicksort
- quicksort
- selection
- duplicate keys
- system sorts
# Quicksort
## Basic plan.
- **Shuffle** the array.
- **Partition** so that, for some $j$
- entry $a[j]$ is in place
- no larger entry to the left of $j$
- no smaller entry to the right of $j$
- **Sort** each piece recursively.
<table>
<thead>
<tr>
<th>input</th>
<th>Q U I C K S O R T E X A M P L E</th>
</tr>
</thead>
<tbody>
<tr>
<td>shuffle</td>
<td>K R A T E L E P U I M Q C X O S</td>
</tr>
<tr>
<td>partition</td>
<td>E C A I E K L P U T M Q R X O S</td>
</tr>
<tr>
<td>sort left</td>
<td>A C E E I K L P U T M Q R X O S</td>
</tr>
<tr>
<td>sort right</td>
<td>A C E E I K L M O P Q R S T U X</td>
</tr>
<tr>
<td>result</td>
<td>A C E E I K L M O P Q R S T U X</td>
</tr>
</tbody>
</table>
**SIR CHARLES ANTHONY RICHARD HOARE**
1980 Turing Award
Quicksort partitioning demo
Repeat until i and j pointers cross.
- Scan i from left to right so long as \(a[i] < a[lo]\).
- Scan j from right to left so long as \(a[j] > a[lo]\).
- Exchange \(a[i]\) with \(a[j]\).
When pointers cross.
- Exchange a[lo] with a[j].
partitioned!
Quicksort: Java code for partitioning
```java
private static int partition(Comparable[] a, int lo, int hi) {
int i = lo, j = hi+1;
while (true) {
while (less(a[++i], a[lo]))
if (i == hi) break;
while (less(a[lo], a[--j]))
if (j == lo) break;
if (i >= j) break;
exch(a, i, j);
}
exch(a, lo, j);
return j;
}
```
**Quicksort partitioning overview**
(Diagram: array state before, during, and after partitioning. Before: the partitioning item v sits at a[lo] and the rest is unknown. During: the i scan has passed items less than v and the j scan has passed items greater than v. After: v is in place at a[j], with no larger item to its left and no smaller item to its right.)
Quicksort: Java implementation
```java
public class Quick {
private static int partition(Comparable[] a, int lo, int hi) {
/* see previous slide */
}
public static void sort(Comparable[] a) {
StdRandom.shuffle(a);
sort(a, 0, a.length - 1);
}
private static void sort(Comparable[] a, int lo, int hi) {
if (hi <= lo) return;
int j = partition(a, lo, hi);
sort(a, lo, j-1);
sort(a, j+1, hi);
}
}
```
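The implementation above depends on the course library's less, exch and StdRandom.shuffle helpers. A self-contained sketch on a plain int array, with those helpers inlined (names here are illustrative, not the course library itself), behaves the same way:

```java
import java.util.Random;

// Self-contained quicksort sketch on an int array: shuffle, then
// Hoare-style partitioning around a[lo], recurring on both halves.
public class QuickDemo {
    private static final Random RAND = new Random(42); // fixed seed for repeatability

    public static void sort(int[] a) {
        shuffle(a);                 // probabilistic guarantee against the worst case
        sort(a, 0, a.length - 1);
    }

    private static void sort(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int j = partition(a, lo, hi);
        sort(a, lo, j - 1);
        sort(a, j + 1, hi);
    }

    // Partition around a[lo], as in the slide code.
    private static int partition(int[] a, int lo, int hi) {
        int i = lo, j = hi + 1;
        while (true) {
            while (a[++i] < a[lo]) if (i == hi) break;  // scan i right
            while (a[lo] < a[--j]) if (j == lo) break;  // scan j left
            if (i >= j) break;                          // pointers crossed
            exch(a, i, j);
        }
        exch(a, lo, j);  // put partitioning item in place
        return j;
    }

    private static void shuffle(int[] a) {
        for (int i = a.length - 1; i > 0; i--)
            exch(a, i, RAND.nextInt(i + 1));            // Fisher-Yates
    }

    private static void exch(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```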
# Quicksort trace
<table>
<thead>
<tr>
<th>lo</th>
<th>j</th>
<th>hi</th>
</tr>
</thead>
<tbody>
<tr>
<td>6</td>
<td>6</td>
<td>15</td>
</tr>
<tr>
<td>7</td>
<td>9</td>
<td>15</td>
</tr>
<tr>
<td>7</td>
<td>7</td>
<td>8</td>
</tr>
<tr>
<td>8</td>
<td>8</td>
<td></td>
</tr>
<tr>
<td>10</td>
<td>13</td>
<td>15</td>
</tr>
<tr>
<td>10</td>
<td>12</td>
<td>12</td>
</tr>
<tr>
<td>10</td>
<td>11</td>
<td>11</td>
</tr>
<tr>
<td>10</td>
<td>10</td>
<td></td>
</tr>
<tr>
<td>14</td>
<td>14</td>
<td>15</td>
</tr>
<tr>
<td>15</td>
<td>15</td>
<td></td>
</tr>
</tbody>
</table>
Quicksort trace (array contents after each partition): the trace starts from the initial values and the random shuffle; no partition is needed for subarrays of size 1; the final line is the sorted result.
Quicksort animation
50 random items
http://www.sorting-algorithms.com/quick-sort
- algorithm position
- in order
- current subarray
- not in order
Quicksort: implementation details
**Partitioning in-place.** Using an extra array makes partitioning easier (and stable), but is not worth the cost.
**Terminating the loop.** Testing whether the pointers cross is a bit trickier than it might seem.
**Staying in bounds.** The \((j == lo)\) test is redundant (why?), but the \((i == hi)\) test is not.
**Preserving randomness.** Shuffling is needed for performance guarantee.
**Equal keys.** When duplicates are present, it is (counter-intuitively) better to stop on keys equal to the partitioning item's key.
Quicksort: empirical analysis
Running time estimates:
- Home PC executes $10^8$ compares/second.
- Supercomputer executes $10^{12}$ compares/second.
<table>
<thead>
<tr>
<th rowspan="2">computer</th>
<th colspan="3">insertion sort (N²)</th>
<th colspan="3">mergesort (N lg N)</th>
<th colspan="3">quicksort (N lg N)</th>
</tr>
<tr>
<th>thousand</th>
<th>million</th>
<th>billion</th>
<th>thousand</th>
<th>million</th>
<th>billion</th>
<th>thousand</th>
<th>million</th>
<th>billion</th>
</tr>
</thead>
<tbody>
<tr>
<td>home</td>
<td>instant</td>
<td>2.8 hours</td>
<td>317 years</td>
<td>instant</td>
<td>1 second</td>
<td>18 min</td>
<td>instant</td>
<td>0.6 sec</td>
<td>12 min</td>
</tr>
<tr>
<td>super</td>
<td>instant</td>
<td>1 second</td>
<td>1 week</td>
<td>instant</td>
<td>instant</td>
<td>instant</td>
<td>instant</td>
<td>instant</td>
<td>instant</td>
</tr>
</tbody>
</table>
Lesson 1. Good algorithms are better than supercomputers.
Lesson 2. Great algorithms are better than good ones.
Quicksort: best-case analysis
**Best case.** Number of compares is $\sim N \log N$.
<table>
<thead>
<tr><th>lo</th><th>j</th><th>hi</th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th><th>10</th><th>11</th><th>12</th><th>13</th><th>14</th></tr>
</thead>
<tbody>
<tr><td colspan="3">initial values</td><td>H</td><td>A</td><td>C</td><td>B</td><td>F</td><td>E</td><td>G</td><td>D</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td colspan="3">random shuffle</td><td>H</td><td>A</td><td>C</td><td>B</td><td>F</td><td>E</td><td>G</td><td>D</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>0</td><td>7</td><td>14</td><td>D</td><td>A</td><td>C</td><td>B</td><td>F</td><td>E</td><td>G</td><td>H</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>0</td><td>3</td><td>6</td><td>B</td><td>A</td><td>C</td><td>D</td><td>F</td><td>E</td><td>G</td><td>H</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>0</td><td>1</td><td>2</td><td>A</td><td>B</td><td>C</td><td>D</td><td>F</td><td>E</td><td>G</td><td>H</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>0</td><td>0</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>F</td><td>E</td><td>G</td><td>H</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>2</td><td>2</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>F</td><td>E</td><td>G</td><td>H</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>4</td><td>5</td><td>6</td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>4</td><td>4</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>6</td><td>6</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>L</td><td>I</td><td>K</td><td>J</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>8</td><td>11</td><td>14</td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>J</td><td>I</td><td>K</td><td>L</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>8</td><td>9</td><td>10</td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>8</td><td>8</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>10</td><td>10</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td><td>N</td><td>M</td><td>O</td></tr>
<tr><td>12</td><td>13</td><td>14</td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td><td>M</td><td>N</td><td>O</td></tr>
<tr><td>12</td><td>12</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td><td>M</td><td>N</td><td>O</td></tr>
<tr><td>14</td><td>14</td><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td><td>M</td><td>N</td><td>O</td></tr>
</tbody>
</table>
A B C D E F G H I J K L M N O
Worst case. Number of compares is $\sim \frac{1}{2} N^2$.
Proposition. The average number of compares $C_N$ to quicksort an array of $N$ distinct keys is $\sim 2N \ln N$ (and the number of exchanges is $\sim \frac{1}{3} N \ln N$).
Pf. $C_N$ satisfies the recurrence $C_0 = C_1 = 0$ and for $N \geq 2$:
\[
C_N = (N + 1) + \left( \frac{C_0 + C_{N-1}}{N} \right) + \left( \frac{C_1 + C_{N-2}}{N} \right) + \ldots + \left( \frac{C_{N-1} + C_0}{N} \right)
\]
- Multiply both sides by $N$ and collect terms:
\[
NC_N = N(N + 1) + 2(C_0 + C_1 + \ldots + C_{N-1})
\]
- Subtract this from the same equation for $N - 1$:
\[
NC_N - (N - 1)C_{N-1} = 2N + 2C_{N-1}
\]
- Rearrange terms and divide by $N(N + 1)$:
\[
\frac{C_N}{N + 1} = \frac{C_{N-1}}{N} + \frac{2}{N + 1}
\]
Quicksort: average-case analysis
- Repeatedly apply above equation:
\[
\frac{C_N}{N+1} = \frac{C_{N-1}}{N} + \frac{2}{N+1} \\
= \frac{C_{N-2}}{N-1} + \frac{2}{N} + \frac{2}{N+1} \\
= \frac{C_{N-3}}{N-2} + \frac{2}{N-1} + \frac{2}{N} + \frac{2}{N+1} \\
= \frac{2}{3} + \frac{2}{4} + \frac{2}{5} + \ldots + \frac{2}{N+1}
\]
- Approximate sum by an integral:
\[
C_N = 2(N+1) \left( \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \ldots + \frac{1}{N+1} \right) \\
\sim 2(N+1) \int_3^{N+1} \frac{1}{x} \, dx
\]
- Finally, the desired result:
\[
C_N \sim 2(N+1) \ln N \approx 1.39N \log N
\]
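As a sanity check on the $\sim 1.39\,N \lg N$ estimate, one can instrument the compares in a randomized quicksort and check the count against $2N \ln N$. A small self-contained sketch (illustrative names; a fixed seed keeps the run repeatable):

```java
import java.util.Random;

// Count the compares made by a randomized quicksort on a shuffled array
// of N distinct keys, to compare against the ~ 2 N ln N prediction.
public class CompareCount {
    static long compares = 0;

    static boolean less(int x, int y) { compares++; return x < y; }

    static void sort(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int j = partition(a, lo, hi);
        sort(a, lo, j - 1);
        sort(a, j + 1, hi);
    }

    static int partition(int[] a, int lo, int hi) {
        int i = lo, j = hi + 1;
        while (true) {
            while (less(a[++i], a[lo])) if (i == hi) break;
            while (less(a[lo], a[--j])) if (j == lo) break;
            if (i >= j) break;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
        int t = a[lo]; a[lo] = a[j]; a[j] = t;
        return j;
    }

    // Sort a shuffled array of n distinct keys and return the compare count.
    public static long countCompares(int n, long seed) {
        Random r = new Random(seed);
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        for (int i = n - 1; i > 0; i--) {    // Fisher-Yates shuffle
            int k = r.nextInt(i + 1);
            int t = a[i]; a[i] = a[k]; a[k] = t;
        }
        compares = 0;
        sort(a, 0, n - 1);
        return compares;
    }
}
```

For N = 10,000 the prediction is $2N \ln N \approx 184{,}000$ compares; a single run should land close to that.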
Quicksort: summary of performance characteristics
Worst case. Number of compares is quadratic.
- \( N + (N - 1) + (N - 2) + \ldots + 1 \sim \frac{1}{2} N^2 \).
- More likely that your computer is struck by a lightning bolt.
Average case. Number of compares is \( \sim 1.39N \lg N \).
- 39% more compares than mergesort.
- But faster than mergesort in practice because of less data movement.
Random shuffle.
- Probabilistic guarantee against worst case.
- Basis for math model that can be validated with experiments.
Caveat emptor. Many textbook implementations go \textit{quadratic} if the array
- Is sorted or reverse sorted.
- Has many duplicates (even if randomized!)
Quicksort properties
**Proposition.** Quicksort is an **in-place** sorting algorithm.
**Pf.**
- Partitioning: constant extra space.
- Depth of recursion: logarithmic extra space (with high probability).
---
**Proposition.** Quicksort is **not stable**.
**Pf.**
<table>
<thead>
<tr>
<th>i</th>
<th>j</th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td>B$_1$</td>
<td>C$_1$</td>
<td>C$_2$</td>
<td>A$_1$</td>
</tr>
<tr>
<td>1</td>
<td>3</td>
<td>B$_1$</td>
<td>C$_1$</td>
<td>C$_2$</td>
<td>A$_1$</td>
</tr>
<tr>
<td>1</td>
<td>3</td>
<td>B$_1$</td>
<td>A$_1$</td>
<td>C$_2$</td>
<td>C$_1$</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>A$_1$</td>
<td>B$_1$</td>
<td>C$_2$</td>
<td>C$_1$</td>
</tr>
</tbody>
</table>
...can guarantee logarithmic depth by recurring on smaller subarray before larger subarray
Quicksort: practical improvements
Insertion sort small subarrays.
- Even quicksort has too much overhead for tiny subarrays.
- Cutoff to insertion sort for \( \approx 10 \) items.
- Note: could delay insertion sort until one pass at end.
```java
private static void sort(Comparable[] a, int lo, int hi) {
    if (hi <= lo + CUTOFF - 1) {
        Insertion.sort(a, lo, hi);
        return;
    }
    int j = partition(a, lo, hi);
    sort(a, lo, j-1);
    sort(a, j+1, hi);
}
```
Quicksort: practical improvements
Median of sample.
- Best choice of pivot item = median.
- Estimate true median by taking median of sample.
- Median-of-3 (random) items.
~ 12/7 \( N \ln N \) compares (slightly fewer)
~ 12/35 \( N \ln N \) exchanges (slightly more)
```java
private static void sort(Comparable[] a, int lo, int hi) {
if (hi <= lo) return;
int m = medianOf3(a, lo, lo + (hi - lo)/2, hi);
swap(a, lo, m);
int j = partition(a, lo, hi);
sort(a, lo, j-1);
sort(a, j+1, hi);
}
```
Quicksort with median-of-3 partitioning and cutoff for small subarrays: visualization
2.3 QUICKSORT
- quicksort
- selection
- duplicate keys
- system sorts
Selection
**Goal.** Given an array of $N$ items, find the $k^{th}$ smallest item.
**Ex.** Min ($k = 0$), max ($k = N - 1$), median ($k = N/2$).
**Applications.**
- Order statistics.
- Find the "top $k$."
**Use theory as a guide.**
- Easy $N \log N$ upper bound. How?
- Easy $N$ upper bound for $k = 1, 2, 3$. How?
- Easy $N$ lower bound. Why?
**Which is true?**
- $N \log N$ lower bound? is selection as hard as sorting?
- $N$ upper bound? is there a linear-time algorithm for each $k$?
Quick-select
Partition array so that:
- Entry $a[j]$ is in place.
- No larger entry to the left of $j$.
- No smaller entry to the right of $j$.
Repeat in one subarray, depending on $j$; finished when $j$ equals $k$.
```java
public static Comparable select(Comparable[] a, int k) {
StdRandom.shuffle(a);
int lo = 0, hi = a.length - 1;
while (hi > lo) {
int j = partition(a, lo, hi);
if (j < k) lo = j + 1;
else if (j > k) hi = j - 1;
else return a[k];
}
return a[k];
}
```
(Diagram: after partitioning at $j$, the array is $\leq v$ to the left of $j$ and $\geq v$ to the right; if $k$ is to the left of $j$, set hi to $j-1$, and if $k$ is to the right, set lo to $j+1$.)
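The select method above uses the course's partition and Comparable helpers. A self-contained int-array sketch of the same loop (illustrative names, fixed shuffle seed):

```java
import java.util.Random;

// Self-contained quick-select sketch: repeatedly partition, then recur
// (iteratively) into the one subarray that contains index k.
public class SelectDemo {
    public static int select(int[] a, int k) {
        shuffle(a, new Random(7));       // guards against the quadratic worst case
        int lo = 0, hi = a.length - 1;
        while (hi > lo) {
            int j = partition(a, lo, hi);
            if      (j < k) lo = j + 1;  // kth smallest is to the right of j
            else if (j > k) hi = j - 1;  // kth smallest is to the left of j
            else            return a[k];
        }
        return a[k];
    }

    // Same partitioning scheme as quicksort: partition around a[lo].
    private static int partition(int[] a, int lo, int hi) {
        int i = lo, j = hi + 1;
        while (true) {
            while (a[++i] < a[lo]) if (i == hi) break;
            while (a[lo] < a[--j]) if (j == lo) break;
            if (i >= j) break;
            exch(a, i, j);
        }
        exch(a, lo, j);
        return j;
    }

    private static void shuffle(int[] a, Random r) {
        for (int i = a.length - 1; i > 0; i--) exch(a, i, r.nextInt(i + 1));
    }

    private static void exch(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```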
Quick-select: mathematical analysis
Proposition. Quick-select takes linear time on average.
Pf sketch.
• Intuitively, each partitioning step splits array approximately in half:
\[ N + N/2 + N/4 + \ldots + 1 \sim 2N \text{ compares.} \]
• Formal analysis similar to quicksort analysis yields:
\[
C_N = 2N + 2k \ln (N/k) + 2(N-k) \ln (N/(N-k))
\]
\[(2 + 2 \ln 2)N \text{ to find the median} \]
Remark. Quick-select uses \( \sim \frac{1}{2} N^2 \) compares in the worst case, but (as with quicksort) the random shuffle provides a probabilistic guarantee.
Theoretical context for selection
Abstract
The number of comparisons required to select the i-th smallest of n numbers is shown to be at most a linear function of n by analysis of a new selection algorithm -- PICK. Specifically, no more than $5.4305n$ comparisons are ever required. This bound is improved for extreme values of i.
**Remark.** But, constants are too high $\Rightarrow$ not used in practice.
**Use theory as a guide.**
- Still worthwhile to seek **practical** linear-time (worst-case) algorithm.
- Until one is discovered, use quick-select if you don’t need a full sort.
2.3 Quicksort
- quicksort
- selection
- duplicate keys
- system sorts
Duplicate keys
Often, purpose of sort is to bring items with equal keys together.
- Sort population by age.
- Remove duplicates from mailing list.
- Sort job applicants by college attended.
Typical characteristics of such applications.
- Huge array.
- Small number of key values.
Duplicate keys
Mergesort with duplicate keys. Between $\frac{1}{2} N \lg N$ and $N \lg N$ compares.
Quicksort with duplicate keys.
- Algorithm goes \textit{quadratic} unless partitioning stops on equal keys!
- 1990s C user found this defect in \texttt{qsort()}.
(Diagram: "stop on equal keys" — partitioning traces contrasting the scans and swaps when we don't stop on equal keys with those when we do.)
several textbook and system implementations also have this defect
Duplicate keys: the problem
**Mistake.** Put all items equal to the partitioning item on one side.
**Consequence.** $\sim \frac{1}{2} N^2$ compares when all keys equal.
```
B A A B A B B B C C C C
A A A A A A A A A A A A
```
**Recommended.** Stop scans on items equal to the partitioning item.
**Consequence.** $\sim N \lg N$ compares when all keys equal.
```
B A A B A B C C B C B B
A A A A A A A A A A A A
```
**Desirable.** Put all items equal to the partitioning item in place.
```
A A A B B B B B C C C C
A A A A A A A A A A A A A
```
3-way partitioning
**Goal.** Partition array into 3 parts so that:
- Entries between \( l_t \) and \( g_t \) equal to partition item \( v \).
- No larger entries to left of \( l_t \).
- No smaller entries to right of \( g_t \).

**Dutch national flag problem.** [Edsger Dijkstra]
- Conventional wisdom until mid 1990s: not worth doing.
- New approach discovered when fixing mistake in C library `qsort()`.
- Now incorporated into `qsort()` and Java system sort.
Dijkstra 3-way partitioning demo
- Let \( v \) be partitioning item \( a[lo] \).
- Scan \( i \) from left to right.
- \( (a[i] < v) \): exchange \( a[lt] \) with \( a[i] \); increment both \( lt \) and \( i \)
- \( (a[i] > v) \): exchange \( a[gt] \) with \( a[i] \); decrement \( gt \)
- \( (a[i] == v) \): increment \( i \)
\[ P \quad A \quad B \quad X \quad W \quad P \quad P \quad V \quad P \quad D \quad P \quad C \quad Y \quad Z \]
Invariant
\[
\underbrace{\;<v\;}_{a[lo..\,lt-1]}\;\Big|\;\underbrace{\;=v\;}_{a[lt..\,i-1]}\;\Big|\;\underbrace{\;\text{unknown}\;}_{a[i..\,gt]}\;\Big|\;\underbrace{\;>v\;}_{a[gt+1..\,hi]}
\]
Dijkstra's 3-way partitioning: trace
3-way partitioning trace (array contents after each loop iteration)
3-way quicksort: Java implementation
```java
private static void sort(Comparable[] a, int lo, int hi)
{
    if (hi <= lo) return;
    int lt = lo, gt = hi;
    Comparable v = a[lo];
    int i = lo;
    while (i <= gt)
    {
        int cmp = a[i].compareTo(v);
        if      (cmp < 0) exch(a, lt++, i++);
        else if (cmp > 0) exch(a, i, gt--);
        else              i++;
    }
    sort(a, lo, lt - 1);
    sort(a, gt + 1, hi);
}
```
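For completeness, the method above can be wrapped into a runnable class. The class name, the public entry point, the `exch` helper, and the demo input are additions of ours, not from the slide:

```java
import java.util.Arrays;

public class Quick3Way {
    // public entry point (added for convenience)
    public static void sort(Comparable[] a) {
        sort(a, 0, a.length - 1);
    }

    // Dijkstra 3-way partitioning quicksort, as on the slide
    private static void sort(Comparable[] a, int lo, int hi) {
        if (hi <= lo) return;
        int lt = lo, gt = hi;
        Comparable v = a[lo];
        int i = lo;
        while (i <= gt) {
            int cmp = a[i].compareTo(v);
            if      (cmp < 0) exch(a, lt++, i++);
            else if (cmp > 0) exch(a, i, gt--);
            else              i++;
        }
        sort(a, lo, lt - 1);
        sort(a, gt + 1, hi);
    }

    private static void exch(Comparable[] a, int i, int j) {
        Comparable t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        Character[] a = {'B','A','A','B','A','B','B','B','C','C','C','C'};
        sort(a);
        System.out.println(Arrays.toString(a)); // [A, A, A, B, B, B, B, B, C, C, C, C]
    }
}
```

Note that equal keys are never moved past each other, so the recursion skips the entire middle band `a[lt..gt]`.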
Diagram showing the 3-way partitioning process: before, during, and after.
3-way quicksort: visual trace (middle segments equal to the partitioning element)
Duplicate keys: lower bound
**Sorting lower bound.** If there are \( n \) distinct keys and the \( i^{th} \) one occurs \( x_i \) times, any compare-based sorting algorithm must use at least
\[
\lg \left( \frac{N!}{x_1! \, x_2! \cdots x_n!} \right) \sim - \sum_{i=1}^{n} x_i \lg \frac{x_i}{N}
\]
compares in the worst case.
**Proposition.** [Sedgewick-Bentley, 1997]
Quicksort with 3-way partitioning is **entropy-optimal**.
**Pf.** [beyond scope of course]
**Bottom line.** Randomized quicksort with 3-way partitioning reduces running time from linearithmic to linear in broad class of applications.
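The bound can be made concrete with a small computation (class and method names are ours): with all $N$ keys equal it is $0$, so linear behavior is possible, and with all keys distinct it reduces to the usual $\lg(N!) \sim N \lg N$.

```java
public class EntropyBound {
    // Computes -sum_i x_i * lg(x_i / N), the entropy lower bound on compares
    static double lowerBound(int[] counts) {
        int n = 0;
        for (int c : counts) n += c;          // N = total number of keys
        double bound = 0;
        for (int c : counts)
            bound -= c * (Math.log((double) c / n) / Math.log(2));
        return bound;
    }

    public static void main(String[] args) {
        System.out.println(lowerBound(new int[]{12}));      // all 12 keys equal: 0.0
        System.out.println(lowerBound(new int[]{4, 4, 4})); // 3 values x 4 each: 12 lg 3, about 19.02
        System.out.println(lowerBound(new int[]{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1})); // distinct: 12 lg 12, about 43.02
    }
}
```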
2.3 Quicksort
- quicksort
- selection
- duplicate keys
- system sorts
Sorting applications
Sorting algorithms are essential in a broad variety of applications:
- Sort a list of names.
- Organize an MP3 library.
- Display Google PageRank results.
- List RSS feed in reverse chronological order.
- Find the median.
- Identify statistical outliers.
- Binary search in a database.
- Find duplicates in a mailing list.
- Data compression.
- Computer graphics.
- Computational biology.
- Load balancing on a parallel computer.
...
Java system sorts
Arrays.sort().
- Has different method for each primitive type.
- Has a method for data types that implement Comparable.
- Has a method that uses a Comparator.
- Uses tuned quicksort for primitive types; tuned mergesort for objects.
Q. Why use different algorithms for primitive and reference types?
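The three flavors can be exercised as follows (`Comparator.comparing` requires Java 8+; everything else is standard JDK):

```java
import java.util.Arrays;
import java.util.Comparator;

public class SystemSortDemo {
    public static void main(String[] args) {
        // primitives: no stability concerns, the tuned quicksort applies
        int[] primitives = {3, 1, 2};
        Arrays.sort(primitives);
        System.out.println(Arrays.toString(primitives));   // [1, 2, 3]

        // objects via Comparable: stable mergesort, natural (code-point) order
        String[] names = {"b", "C", "a"};
        Arrays.sort(names);
        System.out.println(Arrays.toString(names));        // [C, a, b]

        // objects via a Comparator: same mergesort, caller-supplied order
        Arrays.sort(names, Comparator.comparing(String::toLowerCase));
        System.out.println(Arrays.toString(names));        // [a, b, C]
    }
}
```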
War story (C qsort function)
AT&T Bell Labs (1991). Allan Wilks and Rick Becker discovered that a qsort() call that should have taken seconds was taking minutes.
Why is qsort() so slow?
At the time, almost all qsort() implementations based on those in:
- Version 7 Unix (1979): quadratic time to sort organ-pipe arrays.
- BSD Unix (1983): quadratic time to sort random arrays of 0s and 1s.
Basic algorithm = quicksort.
- Cutoff to insertion sort for small subarrays.
- Partitioning scheme: Bentley-McIlroy 3-way partitioning.
- Partitioning item.
- small arrays: middle entry
- medium arrays: median of 3
- large arrays: Tukey's ninther [next slide]
Now widely used. C, C++, Java 6, ....
**Tukey's ninther**
**Tukey's ninther.** Median of the median of 3 samples, each of 3 entries.
- Approximates the median of 9.
- Uses at most 12 compares.
---
Q. Why use Tukey's ninther?
A. Better partitioning than random shuffle and less costly.
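One way to code the ninther is sketched below. This is not the Bentley-McIlroy `qsort()` code; the sample spacing is an assumption of ours. It uses at most 12 compares: 3 per median-of-3, times 4.

```java
public class Ninther {
    // median of 3 array positions using at most 3 compares
    static int medianOf3(int[] a, int i, int j, int k) {
        return a[i] < a[j]
             ? (a[j] < a[k] ? j : (a[i] < a[k] ? k : i))
             : (a[i] < a[k] ? i : (a[j] < a[k] ? k : j));
    }

    // Tukey's ninther: median of the medians of three 3-samples
    static int ninther(int[] a, int lo, int hi) {
        int n = hi - lo + 1;
        int eps = n / 8;                       // sample spacing (an assumption)
        int mid = lo + n / 2;
        int m1 = medianOf3(a, lo, lo + eps, lo + 2 * eps);
        int m2 = medianOf3(a, mid - eps, mid, mid + eps);
        int m3 = medianOf3(a, hi - 2 * eps, hi - eps, hi);
        return medianOf3(a, m1, m2, m3);
    }

    public static void main(String[] args) {
        int[] a = new int[100];
        for (int i = 0; i < a.length; i++) a[i] = i;  // sorted 0..99
        System.out.println(a[ninther(a, 0, a.length - 1)]); // 50
    }
}
```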
Achilles heel in Bentley-McIlroy implementation (Java system sort)
Q. Based on all this research, Java’s system sort is solid, right?
A. No: a killer input.
- Overflows function call stack in Java and crashes program.
- Would take quadratic time if it didn’t crash first.
```
% more 250000.txt
0
218750
222662
11
166672
247070
83339
...
```
```
% java IntegerSort 250000 < 250000.txt
Exception in thread "main"
java.lang.StackOverflowError
        at java.util.Arrays.sort1(Arrays.java:562)
        at java.util.Arrays.sort1(Arrays.java:606)
        at java.util.Arrays.sort1(Arrays.java:608)
        at java.util.Arrays.sort1(Arrays.java:608)
        at java.util.Arrays.sort1(Arrays.java:608)
...
```
250,000 integers between 0 and 250,000
Java's sorting library crashes, even if you give it as much stack space as Windows allows
System sort: Which algorithm to use?
Many sorting algorithms to choose from:
**Internal sorts.**
- Insertion sort, selection sort, bubblesort, shaker sort.
- Quicksort, mergesort, heapsort, samplesort, shellsort.
- Solitaire sort, red-black sort, splaysort, Yaroslavskiy sort, psort, ...
**External sorts.** Poly-phase mergesort, cascade-merge, oscillating sort.
**String/radix sorts.** Distribution, MSD, LSD, 3-way string quicksort.
**Parallel sorts.**
- Bitonic sort, Batcher even-odd sort.
- Smooth sort, cube sort, column sort.
- GPUsort.
System sort: Which algorithm to use?
Applications have diverse attributes.
- Stable?
- Parallel?
- Deterministic?
- Keys all distinct?
- Multiple key types?
- Linked list or arrays?
- Large or small items?
- Is your array randomly ordered?
- Need guaranteed performance?
An elementary sort may be the method of choice for some combinations of attributes; no single algorithm can cover all combinations.
Q. Is the system sort good enough?
A. Usually.
## Sorting summary
<table>
<thead>
<tr>
<th>algorithm</th>
<th>inplace?</th>
<th>stable?</th>
<th>worst</th>
<th>average</th>
<th>best</th>
<th>remarks</th>
</tr>
</thead>
<tbody>
<tr>
<td>selection</td>
<td>✔</td>
<td></td>
<td>N² / 2</td>
<td>N² / 2</td>
<td>N² / 2</td>
<td>N exchanges</td>
</tr>
<tr>
<td>insertion</td>
<td>✔</td>
<td>✔</td>
<td>N² / 2</td>
<td>N² / 4</td>
<td>N</td>
<td>use for small N or partially ordered</td>
</tr>
<tr>
<td>shell</td>
<td>✔</td>
<td></td>
<td>?</td>
<td>?</td>
<td>N</td>
<td>tight code, subquadratic</td>
</tr>
<tr>
<td>merge</td>
<td></td>
<td>✔</td>
<td>N lg N</td>
<td>N lg N</td>
<td>N lg N</td>
<td>N log N guarantee, stable</td>
</tr>
<tr>
<td>quick</td>
<td>✔</td>
<td></td>
<td>N² / 2</td>
<td>2 N ln N</td>
<td>N lg N</td>
<td>N log N probabilistic guarantee, fastest in practice</td>
</tr>
<tr>
<td>3-way quick</td>
<td>✔</td>
<td></td>
<td>N² / 2</td>
<td>2 N ln N</td>
<td>N</td>
<td>improves quicksort in presence of duplicate keys</td>
</tr>
<tr>
<td>???</td>
<td>✔</td>
<td>✔</td>
<td>N lg N</td>
<td>N lg N</td>
<td>N</td>
<td>holy sorting grail</td>
</tr>
</tbody>
</table>
2.3 Quicksort
- quicksort
- selection
- duplicate keys
- system sorts
Design Flow for the Rapid Development of Distributed Sensor Network Applications
Alexios Lekidis, Paraskevas Bourgos, Simplice Djoko-Djoko, Marius Bozga, Saddek Bensalem
Verimag Research Report n° TR-2014-13
October 8, 2014
Abstract
The exponential increase in the demand for the deployment of large-scale sensor networks makes the efficient development of functional applications a necessity. Nevertheless, the scarcity of resources and the resulting application complexity raise significant difficulties and require high design expertise. Consequently, the probability of discovering design errors once the application is implemented is considerably high. To address these constraints, early-stage validation, performance evaluation and rapid prototyping techniques must be available at design time. In this paper we present a novel approach for the co-design of mixed software/hardware applications for distributed sensor network systems. This approach uses BIP, a formal framework facilitating modeling, analysis and implementation of embedded real-time, heterogeneous, component-based systems. Our approach is illustrated through the modeling and deployment of a Wireless Multimedia Sensor Network (WMSN) application. We emphasize its merits, notably validation of functional and non-functional requirements through statistical model checking and automatic code generation for sensor network platforms.
Keywords: Wireless Sensor Networks, Model-based development, Multimedia Communication, Clock synchronization, Performance evaluation
Reviewers: Marius Bozga
How to cite this report:
@techreport {TR-2014-13,
title = {Design Flow for the Rapid Development of Distributed Sensor Network Applications},
author = {Alexios Lekidis, Paraskevas Bourgos, Simplice Djoko-Djoko, Marius Bozga, Saddek Bensalem},
institution = {Verimag Research Report},
number = {TR-2014-13},
year = {2014}
}
Contents
1 Introduction
2 Sensor network applications
3 Design Flow
3.1 Pragmatic Programming Model
3.2 System model in BIP
3.3 Code Generation
4 Case Study: Industrial WMSN Application
4.1 Code Generation on Distributed Sensor Network Platform
4.2 BIP System Model
4.3 Analysis and experimental results
5 Conclusions
Appendices
A Kalman filter algorithm
1 Introduction
The introduction of sensor networks in various application fields nowadays has been a significant technological advance. Such fields include health-care, transportation, agriculture, environmental monitoring, security systems, high-energy physics, industrial process control, factory and building automation and more. The applications of distributed sensor networks are broad due to the unique characteristics of the sensor devices, from which they are composed. Each sensor is a tiny, low-cost, low-power, energy harvesting, multifunctional device. Being usually deployed in a large-scale distributed environment, it needs to configure itself automatically, in order to collect, process and send information to a central processing unit, called base station or sink. The transmission is handled by the underlying network, which can be either wired or wireless. The use of wireless networks is often preferred over wired, due to the derived limitations from the cost of wiring.
The development of functional applications, ensuring the several benefits of sensor networks, is however extremely challenging. This is due to their scarce resources, imposing constraints such as limitations on the communication cost, the energy consumption, the memory usage and the achievable network bandwidth. These limitations are exacerbated as the nodes are usually deployed in inaccessible or distant areas (e.g. mountains, forests) and thus cannot be frequently serviced in case of a failure. In addition, specific applications have strict timing constraints for data handling, which may not be guaranteed due to the influence of the communication and data processing latencies. Equally important is to consider that design errors in the final application development stage are highly probable, even with detailed knowledge of the application area and the hardware platforms. Moreover, if an error is observed at that stage, the debugging is extremely hard and time-consuming.
To address these challenges we propose a model-based design approach, in order to express the behavior and functionality of such applications. A model-based framework improves the quality, the modularity and reusability of the developed software artifacts. It can further allow separation of concerns, in order to describe software and hardware architecture at a certain level of abstraction. Thus, any change within the application results only in the modification of the software architecture. Furthermore, validation and verification are enabled in every development stage. The overall contribution of this work is the construction of a full-fledged design flow, based on a single semantic framework (BIP [2]), facilitating the rapid development of correct and functional sensor network applications. This flow supports application and system modeling, validation of functional correctness and performance analysis on system models. It also permits
automatic code generation in distributed sensor network platforms, leading to a significant reduction in the development time and errors of a manual implementation.
The paper is organized as follows. Section 2 provides a brief introduction to the area and the current challenges of distributed sensor network applications. Section 3 presents the proposed design flow and details on its key steps. Section 4 illustrates its use in a concrete WMSN application and Section 5 provides conclusions and perspectives for future work.
2 Sensor network applications
A major design factor in the development of sensor network applications is the communication, in order to exchange sensed data. As each network node is a resource-constrained device, the developed applications should have low bandwidth demands and tolerance to the communication latencies. Recently, the significant size reduction of inexpensive hardware, such as microphones and cameras, made possible the addition of audio and video capabilities for multimedia applications on a sensor network environment [16]. The development of such applications is mainly based on the increasingly popular lightweight versions of Linux, often referred to as embedded Linux [11]. This is due to their open-source environment and the support of several off-the-shelf platforms. Multimedia sensor network applications have strict timing constraints for data delivery and are extremely demanding in terms of memory and storage. The latter make necessary the usage of compression algorithms. An example of such an application deployed over a wireless network for audio streaming and synchronization of the local sensors clocks is provided in Figure 1.

The main challenge arising in the successful development of correct and functional distributed sensor network applications is to provide productive and efficient design solutions ensuring the following three goals:
The addressing of functional and non-functional requirements. This goal focuses on the ability to identify these requirements and on methods to evaluate them at design time ([6]). On the one hand, non-functional requirements concern the optimal exploitation of the available hardware resources. This is accomplished by limiting the communication cost, memory usage and energy consumption, as well as by reducing the resource failure rate. A first example of such a requirement is the delay imposed by the processing time or the communication latency, which may lead to the reception of outdated sensor data; as an outcome, adverse actions may be triggered in the network. A second example is network connectivity, which determines the packet delivery ratio, that is, the percentage of successfully received packets out of the total packets transmitted in the network. On the other hand, functional requirements concern the correctness and performance of the application. More specifically, they aim at managing buffer utilization, improving the efficiency of the compression algorithms for the multimedia data and providing strict time guarantees for data handling. It should also be noted that in some situations non-functional requirements can affect the functional ones, as shown by the strong influence of the communication latency and the packet delivery ratio on the buffer utilization.
The synchronization of the local sensor clocks (clock synchronization). In many applications, the exchanged data need to be accurately timestamped in order to be further processed. Nonetheless, this poses a serious application development problem, as the construction of a common time reference in a distributed system is hard to achieve. The common time reference can also be used to measure the duration between two events occurring in different nodes, whose clocks can drift or become desynchronized over time. Several solutions to this problem have been proposed, in order to obtain a global time reference in the system. The commonly obtained synchronization accuracy is considered to be in the microsecond scale. A traditionally adopted solution is the Network Time Protocol (NTP) [23], which nevertheless requires increased computational power and storage memory, since it uses extra messages to calculate the Round Trip Delay (RTD). Additionally, the use of several trials to compute the average RTD results in less accuracy and further overhead, and thus is suitable only for applications with low precision demands. A better protocol, also relying on the RTD calculation, which achieves high synchronization accuracy in both wired and wireless sensor networks, is the Precision Time Protocol (PTP) [14]. However, the hardware enhancements (as in [15]) introduced to achieve microsecond accuracy may not be available in lightweight and resource-constrained environments. A new family of protocols for software-based clock synchronization is derived from the application of the Kalman filter algorithm [8]. Compared to the other synchronization protocols, this family does not require interaction with the hardware or the development of dedicated drivers to access it, since it operates at the application level. The underlying Kalman filter algorithm relies on tracking the advance of a reference clock and automatically adapting to it. The synchronization method used by this family is different from the above protocols, since it does not rely on the RTD calculation.
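To illustrate the idea of tracking a reference clock, the following is only a sketch: a one-dimensional Kalman filter estimating a drifting clock offset from noisy offset measurements. It is not the concrete algorithm of [8] or of Appendix A, and all names and noise parameters are invented.

```java
public class ClockOffsetFilter {
    private double x = 0.0;     // estimated offset between local and reference clock
    private double p = 1.0;     // variance of the estimate (large: we know nothing yet)
    private final double q;     // process noise: how much the offset drifts per step
    private final double r;     // measurement noise: timestamping jitter

    ClockOffsetFilter(double q, double r) { this.q = q; this.r = r; }

    // One predict/correct step per received, noisy offset measurement
    double update(double measuredOffset) {
        p += q;                            // predict: uncertainty grows with drift
        double k = p / (p + r);            // Kalman gain
        x += k * (measuredOffset - x);     // correct towards the measurement
        p *= (1.0 - k);
        return x;
    }

    public static void main(String[] args) {
        ClockOffsetFilter f = new ClockOffsetFilter(1e-6, 0.01);
        double est = 0.0;
        // true offset 0.5 s, alternating +/-0.1 s measurement jitter
        for (int i = 0; i < 100; i++)
            est = f.update(0.5 + (i % 2 == 0 ? 0.1 : -0.1));
        System.out.println(est);           // settles close to 0.5
    }
}
```

The filter needs no hardware access: it only consumes timestamps already available at the application level, which is the point made above about this protocol family.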
Tools for application development and code debugging. As multimedia sensor network applications require the dense deployment of small-scale sensors, the communication latencies and the conflicts occurring in the protocol stack are unpredictable. Therefore, the probability of having design errors in the final development stage is extremely high. This situation may arise even if the developer has complete knowledge of the application as well as of the underlying hardware architecture. Moreover, the debugging techniques at that stage are extremely hard and time-consuming, even for experts. Consequently, an error may possibly lead to a new system implementation. This happens due to the absence of separation of concerns, whereby the application would be developed independently from the hardware architecture. In this scope, a developer has to specify and build separate artifacts for the software and the hardware architecture, which can also be reused in later applications. The developer should then be able to define the optimal methodology for the deployment of the application on the given architecture, such that it functions properly. This procedure is called mapping [5].
Meeting all the aforementioned goals is extremely demanding. A starting point for this challenge is the availability of simulation and validation tools in the early development stage, such that the system is validated beforehand and the design goals are ensured. Previous work in this scope is mainly divided into three categories. The first category uses Mathworks tools for modeling, simulation and automatic code generation targeting specific sensor network operating systems [17] [18]. These tools are well known due to their vast variety of libraries; however, they are not able to address functional and non-functional system requirements. Second, the metamodelling frameworks addressing such requirements use UML tools for modeling and the Eclipse platform to generate code for sensor network applications [21]. Though certain frameworks ([1]) are also able to validate these requirements, they do not focus on clock synchronization and the generated code is usually not complete. Finally, formal modeling approaches for such applications provide validation support for functional and non-functional requirements [22] [25] [12] as well as clock synchronization [9], but do not implement tools for automatic code generation. Therefore, to the best of our knowledge, the existing work does not consider all the above design goals simultaneously. To this extent, in the following section we propose a novel method for the systematic development of distributed sensor network applications, enabling separation of concerns and targeting all their design goals.
3 Design Flow
In this section, we propose a novel approach for building sensor network applications. This approach is based on a design flow, which leads to a framework for 1) the construction of a faithful sensor network system model for analysis as well as performance evaluation and 2) the generation of deployable code for applications in the domain of sensor networks. The design flow is based on the BIP framework described below.
The BIP – Behavior / Interaction / Priority – framework [2] is aiming at design and analysis of complex, heterogeneous embedded applications. BIP is a highly expressive, component-based framework with rigorous semantic basis. It allows the construction of complex, hierarchically structured models from atomic components characterized by their behavior and interfaces. Such components are transition systems enriched with data. Transitions are used to move from a source to a destination location. Each time a transition is taken, component data (variables) may be assigned new values, computed by user-defined functions (in C/C++). Atomic components are composed by layered application of interactions and priorities. Interactions express synchronization constraints and define the transfer of data between the interacting components. Priorities are used to filter amongst possible interactions and to steer system evolution so as to meet performance requirements e.g., to express scheduling policies. A set of atomic components can be composed into a generic compound component by the successive application of connectors and priorities.
BIP is supported by a rich toolset\(^1\) which includes tools for checking correctness, for source-to-source transformations and for code generation.
**Example 1** Figure 2 shows a graphical representation of one atomic component in BIP, which models the behavior of the PLL process (presented in Section 3.1). The behavior of PLL is described as a transition system with control locations idle, recvMsg, process and sndRes. It is responsible for the reception of synchronization frames through the CLK_RECV port. It subsequently moves from the idle to the recvMsg state. After an interaction through the port LOCAL_CLK, it calculates a software clock through the internal port update and returns to the initial (idle) state. CLK_REQ port is used to receive requests for calculating the local clock. The value of the local clock is calculated at the internal transition prepare and is exported through port CLK_RES.

A statistical method was recently proposed to handle scalability issues present in numerical methods that are classically used to check stochastic systems. This novel technique is called Statistical Model Checking (SMC) [26] [10]. It requires, as in classical model checking, building an operational formal model of the system to verify and to provide a formal specification of the property to check, generally using temporal logic. The BIP framework is extended to allow stochastic modeling and statistical verification [4]. On the one hand, the method relies on BIP expressiveness to handle heterogeneous and complex component-based systems. On the other hand it uses statistical model checking techniques to perform quantitative verification targeting non-functional properties.
The BIP design flow, illustrated in Figure 3, uses PPM specifications, thoroughly described in Section 3.1, as a re-targetable input model to: (1) automatically generate a sensor network system model in BIP and (2) automatically generate the code for execution on the target distributed sensor network platform. The proposed flow is used to evaluate functional, non-functional and clock synchronization requirements of sensor network applications. To achieve that, on the one hand, we apply SMC on the system model in BIP and, on the other hand, we execute the generated code on the target sensor network platform. It is important to mention that the two paths, that is, the construction of the system model in BIP and the generation of executable code, are consistent with each other. This is accomplished because, first, both approaches integrally preserve the behavior of the input application software and, second, the Sensor Network Components in BIP faithfully model the target sensor network.
\(^1\)http://www-verimag.imag.fr/tools
The proposed design flow proceeds in four main steps:
1. The construction of an abstract system model. This model represents the behavior of the application software running on the hardware platform according to the mapping, but without including all hardware dependent (e.g. execution times, data processing delays) and network-specific information (e.g. packet delivery ratio, end-to-end delays).
2. The generation of executable code that is deployed on the physical hardware platform. This is performed by initially transforming the input hardware specifications into code templates. Once these templates are fully constructed by the user, they can be reused for any sensor network application. They are accordingly parametrized, using node configuration files, in order to automatically generate the executable code.
3. The construction of the system model in BIP by injecting all the missing hardware dependent information to the previously generated abstract system model.
4. The performance analysis on the calibrated system model in BIP with the use of Statistical Model Checking (SMC), which performs quantitative verification targeting functional and non-functional requirements. The results are used as feedback to the user to propose enhancements in the design.
### 3.1 Pragmatic Programming Model
The Pragmatic Programming Model (PPM) is a description language developed to provide a simple and convenient way for describing highly-parallel applications expressed as networks of communicating processes. The language has been inspired by DOL (Distributed Operation Layer) [24], which is a framework
devoted to the specification as well as the analysis of mixed software/hardware systems and provides a
Kahn Process Network (KPN) model of the application.
In PPM, application software is defined using a process network model. It consists of a set of deterministic, sequential processes communicating asynchronously through shared objects, such as FIFOs, shared memories and mutexed locations. The mapping associates application software components to devices of the hardware platform, that is, processes to processors and shared objects to remote communication media. Specifications of the latter, including communication interfaces and protocols, are also described in the mapping to provide all the necessary details for the code generation and the construction of the system model.
WMSN Application
In Figure 4 we present a WMSN application in PPM, corresponding to the application described in Section 4.
It consists of 1) one clock synchronization process synchro, sending out synchronization data through
the FIFOs (SO1, SO3), and 2) two audio capturing processes micro, sending out audio data, through the
FIFOs (SO2, SO4). The synchronization data are received by two processes PLL (implementing the clock
synchronization protocol) and the audio data by an audio reproduction process speaker.
Application Software in PPM
The application software in PPM consists of three basic entities: Processes, Shared Objects, and Connections. The network structure is described in XML. Each Process has input and output ports and sequential behavior. Processes communicate through shared objects. Each shared object has input and output ports, uniquely associated with ports of processes.
In Figure 5, we present a fragment of the XML specification of the WMSN application described above. It consists of processes, shared objects and connections; the depicted fragment shows the PLL process. For each process, we specify its name, the number of input and output ports, the names and types of the ports, and the location of the C source code describing the process behavior. For each shared object (i.e., FIFO) we specify the name, the type, the maximum data capacity, and the input and output ports. Finally, we define the connections between processes and shared objects by specifying the input and output ports contributing to each connection.
Process behavior is described using sequential C programs with a particular structure (see Figure 6 for a concrete example). For a cyclic process P, its state is defined as an arbitrary C data structure named P_state and its behavior as the program:
\[
PInit(); \quad \text{while} (true) PFire();
\]
where \(PInit()\) and \(PFire()\) are arbitrary functions operating on the process state. The initial call of the \(PInit()\) function is followed by an endless loop calling the \(PFire()\) function. Communication is realized through two particular primitives, write and read, for respectively sending data to and receiving data from shared objects. A read operation reads data from an input port, and a write operation writes data to an output port. Moreover, the \(PFire()\) function may invoke a detach primitive in order to terminate the execution of the process.
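The controller structure described above can be sketched in plain C. The snippet below is a hypothetical, self-contained illustration only: the real PPM primitives operate on ports and shared objects, whereas here the state is a bare integer and the detach request is modeled as a flag.

```c
#include <stdbool.h>

/* Hypothetical sketch of the controller wrapping a cyclic PPM process P.
 * The real read/write primitives exchange data with shared objects; here
 * PFire only updates a counter and eventually requests detach. */
typedef struct { int counter; bool detached; } P_state;

static void PInit(P_state *s) { s->counter = 0; s->detached = false; }

static void PFire(P_state *s) {
    s->counter++;            /* stand-in for read / compute / write */
    if (s->counter >= 3)     /* a process may terminate itself      */
        s->detached = true;  /* via the detach primitive            */
}

/* The generated controller for P: PInit(); while (true) PFire();
 * with the loop exiting once the process has detached. */
int run_P(void) {
    P_state s;
    PInit(&s);
    while (!s.detached)
        PFire(&s);
    return s.counter;
}
```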
Example 2 The description of a PLL process is shown in Figure 6. It defines the function pll_init() to initialize the process state and the function pll_fire() to describe the cyclic behavior of the process. The PLL process receives data from the process network using the FIFO_read() function, and the rest of the code implements the synchronization algorithm (the pll_clock_in() function).
```c
#include "pll_process.h"
void pll_init(pll_process *p) {
(p->local->pll).stream_size = 1;
(p->local->pll).block_size = (unsigned int) sizeof(clockOut_t);
(p->local->pll).data_in = malloc((p->local->pll).block_size);
p->local->data_size = (p->local->pll).block_size;
}
int pll_fire(pll_process *p) {
    /* receive the master's synchronization frame */
    FIFO_read(p->in, (p->local->pll).data_in, (p->local->pll).block_size);
    /* sample the local (slave) hardware clock in microseconds */
    gettimeofday(&(p->local->slave_time), NULL);
    uint64_t slave_clock = ((uint64_t) p->local->slave_time.tv_sec *
        (uint64_t) 1000000) + (uint64_t) p->local->slave_time.tv_usec;
    clockOut_t* master_frameClock = (clockOut_t*) (p->local->pll).data_in;
    uint64_t master_clock = master_frameClock->time;
    /* update the synchronized clock (Kalman filter, Appendix A) */
    pll_clock_in(slave_clock, master_clock, p->local->argument);
    return 0;
}
```
Figure 6: PLL Process Code Description
Application Mapping on the Platform
The deployment of the use case application on the target platform is specified with a mapping XML description file, as presented in Figure 7. The application processes (“app-node” in XML) are bound to hardware platform nodes (“hw-element” in XML). The binding (“deployment” in XML) includes additional information about the hardware platform (“hw-property”) that is necessary to configure communication between the network nodes. This information includes the network interface name, the IP address of the destination network node, the port specification and the type of communication used (unicast, multicast or broadcast). The communication protocol used globally and extra process properties (“app-property”) are specified in separate XML elements.
Example 3
The description of the mapping XML file of the WMSN application is shown in Figure 7. The first “deployment” element specifies that the PLL process is deployed on the “udoo” hardware node using “wlan0” as network interface, “10.0.0.14” as destination IP address, and 375 and 250 as source and destination port, respectively. The second “deployment” binds the synchro process to a second “udoo” hardware node. The use of the UDP communication protocol is defined next, followed by extra application properties such as the clock synchronization periods.
```xml
<deployment>
<app-node name="pll"/>
<hw-element name="node" hw-class="udoo" index="0"/>
<hw-property name="networkInterface" hw-class="node-inter" value="wlan0"/>
<hw-property name="srcPort" hw-class="node-srcPort" value="375"/>
<hw-property name="dstPort" hw-class="node-dstPort" value="250"/>
<hw-property name="dstIP" hw-class="node-dstIP" value="10.0.0.14"/>
</deployment>
<deployment>
<app-node name="synchro"/>
<hw-element name="node" hw-class="udoo" index="1"/>
<hw-property name="networkInterface" hw-class="node-networkInterface" value="wlan0"/>
<hw-property name="srcPort" hw-class="node-srcPort" value="250"/>
<hw-property name="multiIP" hw-class="node-multiIP" value="10.0.0.255"/>
<hw-property name="broadcast" hw-class="node-broadcast" value="0"/>
</deployment>
<communication protocol="udp"/>
<extra>
<app-property app-name="synchro" property-name="period" value="1"/>
</extra>
```
Figure 7: WMSN Application Mapping XML Description
### 3.2 System model in BIP
In our design flow, we construct the system model in BIP to faithfully represent the behavior of the application running on the underlying hardware and network. The construction proceeds in two steps, as presented in the design flow. The first step is the construction of the intermediate abstract system model in BIP and the second step is the construction of the complete system model in BIP.
The abstract system model in BIP is constructed in several steps. Firstly, the application software model in BIP is constructed by performing transformations on the application software. These transformations are proven correct-by-construction [5], preserving all the functional properties of the application software. Secondly, HW-specific components are constructed systematically from the characteristics of the sensor network platforms as well as the entities and communication mechanisms of the network protocols. As an example, the model of the wireless network includes specific details such as the collision detection and avoidance techniques of the MAC layer, the out-of-order delivery, and the packet losses due to possible collisions or reduction of the network bandwidth. Finally, the derived application software model is progressively enriched with the HW-specific components, given a specified mapping.
The generation of the application software model in BIP, presented in [5], receives as input an application software model described in PPM and produces the equivalent representation in a BIP model. The construction is fully automated and preserves the behavior of the software application. Thus, the generated BIP models inherit all the merits of PPM models which enable separate analysis of computation and communication, expose functional parallelism and separate the functionality of the application from the target hardware platform.
The derived abstract system model in BIP is parametrized and allows flexible integration of specific target hardware features, such as communication protocols, scheduling policies, etc. However, the abstract system model in BIP does not include all the hardware-dependent (e.g. execution times, data processing delays) and network-specific information (e.g. packet delivery ratios, end-to-end delays). This information is injected into the model in the form of probabilistic distributions, which are obtained by profiling techniques and execution of the generated code on the physical hardware platform. To compute these probabilistic distributions, we analyze the debugging traces from the execution of the generated code on the hardware platform and produce stochastically independent data [19, 13]. This technique is called calibration and results in the complete system model in BIP.
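The core of the calibration step can be sketched as follows. The snippet below is a hypothetical illustration (function and type names are ours, not the tool's): from a trace of measured end-to-end delays, it derives the mean and variance used to parametrize a probabilistic distribution in the model.

```c
#include <stddef.h>

/* Hypothetical calibration sketch: derive the mean and (population)
 * variance of end-to-end delays from a debugging trace, in order to
 * parametrize the probabilistic distributions injected into the
 * abstract system model. */
typedef struct { double mean, variance; } delay_dist;

delay_dist calibrate_delays(const double *delays_us, size_t n) {
    delay_dist d = {0.0, 0.0};
    for (size_t i = 0; i < n; i++) d.mean += delays_us[i];
    d.mean /= (double)n;
    for (size_t i = 0; i < n; i++) {
        double dev = delays_us[i] - d.mean;
        d.variance += dev * dev;
    }
    d.variance /= (double)n;
    return d;
}
```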
### 3.3 Code Generation
In this section, we describe the method and the associated tool for automatic generation of deployable code targeting distributed sensor networks. The method is based on an infrastructure for generating code from PPM specifications. The generated code is portable and can eventually be deployed and run on different hardware, including sensor networks. It consists of the functional code and the glue code.
The functional code is generated from the application software in PPM, consisting of processes and shared objects. In the case of sensor networks, processes are implemented as threads, and shared objects are implemented according to the underlying communication protocols. The implementation in C contains the thread-local data and the routine implementing the specific thread functionality. The latter is a sequential program in plain C used as a controller, wrapping the process C code described in PPM. The communication calls are implemented by substituting the read and write primitives with read and write API calls of the respective communication protocol.
The glue code implements the deployment of the application on the sensor network platforms, i.e., the allocation of threads to the sensors. It is essentially obtained from the mapping. Threads are created and allocated to network nodes according to the process mapping, which also specifies configuration parameters for the underlying communication protocols. In particular, for the User Datagram Protocol (UDP), each process is assigned a source port (srcPort), a destination port (dstPort) and a destination node IP (dstIP). The glue code is linked with the sensor network hardware library to produce the binary executables for execution on the sensor network nodes.
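The UDP configuration derived from the mapping can be sketched with standard POSIX sockets. This is a minimal, hypothetical sketch of what such glue code might look like (the function name and error handling are ours); it opens a socket bound to the process's srcPort and prepares the destination address from dstIP and dstPort.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical sketch of UDP glue code: open a socket bound to the
 * process's srcPort and fill in the destination (dstIP, dstPort) taken
 * from the mapping XML. Returns the socket fd, or -1 on error. */
int udp_endpoint(const char *dst_ip, unsigned short src_port,
                 unsigned short dst_port, struct sockaddr_in *dst) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in src;
    memset(&src, 0, sizeof src);
    src.sin_family = AF_INET;
    src.sin_addr.s_addr = htonl(INADDR_ANY);
    src.sin_port = htons(src_port);          /* "srcPort" from the mapping */
    if (bind(fd, (struct sockaddr *)&src, sizeof src) < 0) {
        close(fd);
        return -1;
    }

    memset(dst, 0, sizeof *dst);
    dst->sin_family = AF_INET;
    dst->sin_port = htons(dst_port);         /* "dstPort" from the mapping */
    inet_pton(AF_INET, dst_ip, &dst->sin_addr);  /* "dstIP" */
    return fd;
}
```

A write to a shared object mapped onto this endpoint then becomes a `sendto(fd, buf, len, 0, (struct sockaddr *)&dst, sizeof dst)` call.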
The generated code is written in C. Both functional and glue code are implemented using re-targetable template files and sensor-network-specific hardware files. The tool is implemented in C++ and consists of approximately 35 files and 11235 lines of code.
4 Case Study: Industrial WMSN Application
We illustrate our approach using a case study provided by an industrial partner (Cyberio 2). It targets audio capturing and reproduction over a WiFi wireless network with the addition of local clock synchronization. In this case we focus on sender-to-receiver synchronization, where the base station periodically broadcasts a frame containing its hardware clock value (the synchro process of Figure 8) to all the nodes through the wireless network. Each node applies a Phase-Locked Loop (PLL [20]) synchronization technique to construct a software clock. The PLL takes the broadcast clock as input and keeps the local clock synchronized to it. The construction is based on the Kalman filter algorithm (Appendix A). The
2www.cyberio-dsi.com/
expected synchronization accuracy, defined as the difference between the input and output clock, for the particular case study is specified as 1µs. The resulting clock is used by the micro process to timestamp the audio frames. Subsequently, the base station is able to reproduce the received audio frames in the correct chronological order.
**Sensor Network Platform Description**
We target as platform a Wireless Sensor Network (WSN) of spatially distributed autonomous sensors. These sensors, referred to as slave nodes, are responsible for monitoring sound and cooperatively pass their data through the network to a base station, referred to as the master node.
The wireless network (WLAN) provides the ability of bidirectional communication between all the network nodes for audio handling and clock synchronization. Thus, the choice of the master node is completely arbitrary. In addition, the WLAN is based on the IEEE 802.11 standards (WiFi).
Each network node is a hardware platform, which consists of the computational core, the WiFi and the sound card. The computational core is responsible for the node’s processing operations, the WiFi card supports the wireless communication of the network, and the sound card is dedicated to capture or reproduce sound.
In the specific case study, we use a WSN that consists of three network nodes, as represented graphically in the lower part of Figure 8. As network nodes we use 3 UDOO platforms and as Access Point (AP) we use a Snowball SDK platform. To capture and reproduce audio samples, we used the API provided by the Advanced Linux Sound Architecture (ALSA). This API supplies structures and functions to communicate with the node’s sound card through the ALSA library.
In the following subsection we present the mapping that is used for the deployment of a WMSN application to different hardware nodes.
### 4.1 Code Generation on Distributed Sensor Network Platform
As depicted by the deployment of Figure 8, the clock synchronization protocol runs in parallel with an audio application. The synchro and speaker processes are mapped to the Master UDOO node, whereas the PLL and micro processes are mapped to the Slave UDOO nodes. The shared objects are mapped to the WiFi cards, which manage the communication through the Snowball SDK AP. The sensor network nodes can communicate through various modes, such as unicast, broadcast and multicast. They also support additional communication protocols apart from UDP, such as the raw socket protocol.
We now present some experimental results obtained from the generated code for the case study. The results focus on the clock synchronization accuracy of a slave node. Specifically, in Figure 9 we plot the time difference between the Master clock and the software clock computed in the PLL of the Slave. The software clock follows the advance of the Master clock and maintains a relative offset from it (here around 100µs), with a resulting accuracy of 76µs. As illustrated in [20], in a PLL-based approach this offset depends on the synchronization frequency of the application. Although an increase of this frequency results in better synchronization, it simultaneously increases the number of transmitted packets in the network. This leads to higher energy consumption, thus shortening the network lifetime.
The execution of the generated code also provided debugging traces, which we analyzed, in order to compute probabilistic distributions for specific case study parameters. These parameters concerned the computation of each local hardware clock, the packet delivery ratio and the end-to-end delays. The debugging traces were used to calibrate the BIP abstract system model and produce the BIP system model (design flow step 3), thoroughly described in the following subsection.
### 4.2 BIP System Model
This section presents the system model constructed for the WMSN case study. It consists of the Master component and two instances of the Slave component, using the same interfaces and interactions with the
other system components. For clarity, Figure 10 illustrates a simpler system containing only one instance of the Slave component. The Master is responsible for the periodic transmission of synchronization packets containing its hardware clock value through the port \texttt{CLK\_SEND}. This value, as well as the Slave's hardware clock value, is obtained using probabilistic distributions for the Gaussian random variables of the discrete clock model (see Appendix A). Time is modeled as a discrete time-step advance associated with the interaction \texttt{TICK}, which is used as a strong synchronization among all the system components. The transmitted and received packets are stored in buffer components (the \texttt{Mbuffer} and \texttt{Sbuffer} instances of Figure 10), which follow a FIFO queuing policy. The processing and transmission of the data is handled by the WiFi component, which models the wireless network (the WiFi unit of Figure 8) and is responsible for the packet transmission to every Slave component in the model. This component uses probabilistic distributions for network-specific characteristics, such as the packet delivery rate and the end-to-end delays. Whenever a synchronization packet is received by the Slave component (CLK_RECV port), it computes the synchronized clock using the Kalman algorithm (see Appendix A). Each audio packet is transmitted through the AUDIO_SEND port and timestamped with the latest computed value of the synchronized clock.
Component behavior
The transmission of synchronization packets is initiated by the Master compound component in the model, formed by the Mclock, the synchro and the speaker atomic components. The Mclock (Figure 11a) models the behavior of the Master’s hardware clock. The synchro component is responsible for the periodical transmission of synchronization packets and the speaker component for the consumption and playout of the received audio packets. The Mclock component (Figure 11a) consists of the initial state idle and the transmit state. It periodically triggers the transmission of packets through an interaction with the synchro component. The time needed for the generation of packets ($P_{SYNC}$) is fixed and thus considered as a model parameter. An interaction through the port TICK will result in a time progress equal to one (tick) unit. When the time is equal to $P_{SYNC}$, the control moves from idle to the transmit state due to the corresponding guard. Following the interaction involving its SEND port, the current hardware clock value is forwarded to the synchro component. This value is computed using probabilistic distributions for the discrete clock model of the Master. The speaker component starts to reproduce the received audio samples periodically ($P_{P}$ period) through the port READ after an initial playout delay $p_1$.
The WiFi component (Figure 12) consists of two parts. The first concerns the reception of the frame transmitted by the Master component, and the second the computation of the response time as well as the transmission of a frame to the Sbuffer component. We consider packets that are lost or delivered out-of-order as failed transmissions. Consequently, in the model every frame received through the RECV port is either successfully transmitted (success state) or discarded if delayed or lost (degraded state). The number of consecutive successful or failed packet transmissions is chosen from the corresponding probabilistic distributions ($\lambda_{ok}$ and $\lambda_{fail}$, respectively). If a frame is received in the success state through the RECV port, it is stored in a FIFO queue and the count of remaining successful packet transmissions is decreased. The frame's transmission time is chosen from the end-to-end delay distribution ($\lambda_{delay}$). Afterwards, the control moves to the second part, where time advances through the TICK port. Whenever the transmission time of a frame in the queue is reached, the frame is forwarded to the Sbuffer component through the SEND port. In the meantime, if the chosen number of consecutive successful transmissions reaches zero, the component moves from the success to the degraded state, and a value is drawn from the distribution of failed transmissions. This value indicates the number of subsequent frames, received through the RECV port, that are discarded. The WiFi component returns to the success state only when this value reaches zero again.
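The success/degraded alternation of the WiFi component is essentially a two-state (Gilbert-style) loss model. The sketch below is a hypothetical illustration of that alternation only: in the actual model the run lengths are drawn from the distributions $\lambda_{ok}$ and $\lambda_{fail}$, whereas here they are fixed fields so the behavior is deterministic.

```c
#include <stdbool.h>

/* Hypothetical sketch of the WiFi component's success/degraded
 * alternation. run_ok and run_fail stand in for draws from the
 * lambda_ok and lambda_fail distributions of the model. */
typedef enum { SUCCESS, DEGRADED } wifi_state;

typedef struct {
    wifi_state state;
    int remaining;   /* frames left in the current run                */
    int run_ok;      /* stand-in for a draw from lambda_ok            */
    int run_fail;    /* stand-in for a draw from lambda_fail          */
} wifi_model;

/* Returns true if the frame received on RECV is queued for SEND,
 * false if it is discarded (degraded state). */
bool wifi_recv(wifi_model *w) {
    if (w->remaining == 0) {        /* current run exhausted: switch  */
        if (w->state == SUCCESS) {
            w->state = DEGRADED;
            w->remaining = w->run_fail;
        } else {
            w->state = SUCCESS;
            w->remaining = w->run_ok;
        }
    }
    w->remaining--;
    return w->state == SUCCESS;
}
```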
The Slave compound component consists of three atomic components: the Micro, the Sclock and the PLL. The Micro component is responsible for capturing and transmitting audio samples periodically. The Sclock component models the hardware clock of the Slave, and the PLL component (previously presented in Figure 2) computes the synchronized clock of the Kalman filter algorithm. To model the Sclock component (Figure 11b) we use the same method as for the Mclock component, constructing a probabilistic distribution for the discrete clock model of the Slave. Furthermore, the PLL component receives the transmitted synchronization packets from the Master and updates the synchronized clock. To accomplish this, it interacts with the Sclock component to receive its local clock (LOCAL_CLK port), in order to apply the PLL functions of the real application. It is also polled periodically by the Micro component (CLK_REQ port), in order to add a hardware clock value to each audio packet scheduled for transmission. The corresponding reply (CLK_RES port) contains the latest computed synchronized clock value augmented by the time elapsed between the last reception of a packet and the received request. Both are measured through an interaction with the Sclock component (LOCAL_CLK port). The Micro component generates each audio packet periodically (PM period).
Figure 11: Hardware clock components of the Master and the Slave
Figure 12: WiFi component
In the following subsection we report on the experimental results from the analysis of the BIP system model (step 4 of the design flow), obtained by the simulations and the use of SMC.
### 4.3 Analysis and experimental results
We conducted two sets of experiments, focusing on equally important requirements in the development of multimedia sensor networks. The first analyzed the utilization of the buffer components, considering only the audio capturing and reproduction in the system. This experiment thus focused on functional requirements, influenced by non-functional ones such as the packet delivery ratio and the end-to-end delays. In the second we focused on the synchronization of the device clocks: we observed the difference between the Master clock ($\theta_M$) and the synchronized clock computed in every Slave ($\theta_S$), without the impact of the audio capturing and reproduction. To evaluate these requirements we describe them as stochastic temporal properties in the Probabilistic Bounded Linear Temporal Logic (PBLTL) formalism [4] and analyze their probabilities using the SMC tool of the BIP framework.
**Experiment 1: Buffer utilization.** We evaluated the absence of overflow and underflow in each buffer component by considering the properties $\phi_1 = (S_{Sbuffer} < MAX)$ and $\phi_2 = (S_{Mbuffer} > 0)$, where $S_{Sbuffer}$ and $S_{Mbuffer}$ denote the sizes of the Slave and Master buffer components, respectively. The value of $MAX$ is fixed and equal to 400. As illustrated by Figure 13, $P(\phi_1) = 1$, meaning that overflow in the Sbuffer is avoided for the considered value of $MAX$. Furthermore, the probability of underflow avoidance in the Mbuffer depends on the initial playout delay ($p_1$). Specifically, in Figure 14 we observe that for delays greater than 1430 ms, $P(\phi_2) = 1$, meaning that the Master component should start the consumption of audio packets only after this time has elapsed.

**Experiment 2: Synchronization accuracy.** The property of maintaining a bounded synchronization accuracy is defined as: $\phi_3 = (|\theta_M - \theta_S| - A < \Delta)$, where $A$ indicates a fixed offset between the Master clock and each computed software clock, and $\Delta$ is a fixed non-negative number denoting the resulting bound. In a first step we used several probabilistic distributions from the execution results of the application to test whether the expected bound $\Delta = 1\mu$s is achieved. However, as depicted in Figure 15, the bound achieved in the simulations was always above the defined bound of $1\mu$s for $A = 100\mu$s. In a second step we repeated the experiment to estimate the best bound, that is, the smallest bound ensuring synchronization with probability $P(\phi_3) = 1$, for a range of $\Delta$ between 10$\mu$s and 80$\mu$s. The simulations showed that the synchronization bound is 76$\mu$s, which matches the execution results of the generated code in Section 4.1.
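Over a finite set of observed runs, the smallest bound $\Delta$ satisfying $\phi_3$ on every sample is simply the maximum observed deviation from the fixed offset $A$. The sketch below illustrates this; the function name and the sample values in the accompanying check are ours, not measurements from the case study.

```c
#include <math.h>
#include <stddef.h>

/* Hypothetical sketch of the bound search: given observed clock
 * differences theta_M - theta_S (in microseconds) and the fixed
 * offset A, return the largest value of |theta_M - theta_S| - A,
 * i.e., the smallest Delta satisfied by every sample. */
double smallest_bound(const double *diff_us, size_t n, double offset_a) {
    double worst = 0.0;
    for (size_t i = 0; i < n; i++) {
        double dev = fabs(diff_us[i]) - offset_a;
        if (dev > worst) worst = dev;
    }
    return worst;
}
```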
5 Conclusions
We have presented a novel approach, based on a design flow, facilitating the development of correct and operational applications for sensor network systems. It takes as input the application software and the hardware specification (communication protocol and sensor network platforms) as well as the mapping between them and constructs a system model in BIP. This model is stochastic, meaning that it can be tested, simulated and validated using the statistical model checking tool of the BIP toolset. Moreover, through the use of rapid prototyping, our approach supports the automatic code generation for the target distributed sensor network platform.
We illustrate our method on a multimedia sensor network application, following two paths: 1) the construction of a sensor network system model and 2) the automatic generation of correct C code for execution on the target platforms. The method is used to evaluate functional and non-functional requirements of such applications through statistical model checking. It also exploits the advantages of code generation for deployment on the target platform and for debugging purposes. The conducted experiments focus on the buffer utilization and on the synchronization accuracy of local clocks with respect to a common time reference in the system.
As future work, we are considering improvements to decrease the relative offset between the software clock computed in each device and a reference clock. To this end we are experimenting with various clock synchronization frequencies, whilst trying to keep the amount of packets in the network as low as possible; this may also result in an alteration of the clock synchronization protocol. Additionally, we focus on multimedia applications for environments supporting lower-resource platforms than Linux. In this scope, Basu et al. introduced in [3] formal models for TinyOS, a similarly popular environment for the development of such applications. Although supporting communication with lower resource consumption, such systems allow the transmission of only a small amount of data in each packet. Therefore, in the target multimedia applications data are often transmitted in several packets. Consequently, the network is more frequently occupied, resulting in a higher probability of collision occurrence and packet loss. In order to analyze the impact of the additional latencies on the available resources, we plan to develop a similar design flow for such systems.
References
Appendices
A Kalman filter algorithm
This clock synchronization algorithm (proposed in [8]) continuously corrects the local clock reducing its offset from the master clock. A clock is defined by a discrete model as follows:
$$\theta[n] = \sum_{k=1}^{n} \alpha[k] \tau[k] + \theta_0 + \omega[n]$$
(1)
where $\alpha$ is the clock skew, $\tau[k]$ the sampling period at the $k^{th}$ sample, $\theta_0$ the initial clock offset, and $\omega[n]$ the random measurement as well as other types of additive noise. In a sender-to-receiver synchronization, this noise consists of four factors [23]:
- the time for message construction and sender’s system overhead,
- the time to access the transmit channel,
- propagation delay,
- the time spent by the receiver to process the message.
Since $\tau[k]$ can be different, the above clock model covers uniform and non-uniform sampling. Equation (1) can be rewritten recursively as follows:
$$\theta[n] = \theta[n-1] + \alpha[n] \tau[n] + \omega[n] - \omega[n-1]$$
(2)
where $\omega[n] - \omega[n-1]$ is considered as a Gaussian random variable with mean 0 and variance $\sigma_\theta^2$, as described in [7]. We assume that the clock skew $\alpha[n]$ is time-varying, that is, it can change completely from one sample to another, with the optimal estimator being:
$$\hat{\alpha}[n] = \frac{\theta[n] - \theta[n-1]}{\tau[n]}$$
(3)
This variation can be modeled as a random process defined by the Equation (4):
$$\alpha[n] = \alpha[n-1] + \gamma[n]$$
(4)
where $\gamma$ is considered as a Gaussian random variable with mean 0 and variance $\sigma_\gamma^2$ indicating the noise model, as described in [8]. As the above equations are used to define the Kalman Filter algorithm, we accordingly illustrate its vector-matrix form, previously introduced in [8].
Let $\theta$ denote the master timestamp to which we add the noise delays (see Equation (1)), and $\tilde{\theta}$ the value of the synchronized clock.
$$\tilde{\theta}[n] = \sum_{k=1}^{n} \alpha[k] \tau[k] + \theta_0 \Rightarrow$$
$$\tilde{\theta}[n] = \tilde{\theta}[n-1] + \alpha[n] \tau[n]$$
(5)
Based on the Equation (4), the Kalman Filter state of the synchronized clock is defined by the Equation (6):
$$x[n] = Ax[n-1] + u[n]$$
(6)
where $x[n] = [\tilde{\theta}[n] \ \alpha[n]]^T$, $A = \begin{bmatrix} 1 & \tau \\ 0 & 1 \end{bmatrix}$, $u[n] = [0 \ \gamma[n]]^T$ and $\tau$ is the sampling period. The Kalman Filter observation equation is the noisy observation of the reference clock (Equation (7)).
$$\theta[n] = \tilde{\theta}[n] + v[n] = b^T x[n] + v[n]$$ (7)
where $b^T = [1 \ 0]$. Then, the Kalman Filter vector-matrix form is defined by the following equations:
$$\hat{x}[n] = A\hat{x}[n-1] + G[n] (\theta[n] - b^T A\hat{x}[n-1])$$ (8)
$$S[n] = AM[n-1] A^T + C_u$$ (9)
$$M[n] = (I - G[n] b^T) S[n]$$ (10)
$$G[n] = S[n] b (\sigma_v^2 + b^T S[n] b)^{-1}$$ (11)
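A minimal C implementation of Equations (8)-(11) can make the recursion concrete. This is a hedged sketch, not the PLL code of the case study: the state layout, the ordering (predict covariance, compute the gain, correct the state, update the covariance), and the noise parameters are written out explicitly, with $C_u = \mathrm{diag}(0, \sigma_\gamma^2)$ as implied by $u[n] = [0 \ \gamma[n]]^T$.

```c
/* Hypothetical minimal implementation of the Kalman filter of
 * Equations (8)-(11) for the state x = [theta_tilde, alpha]^T, with
 * A = [[1, tau], [0, 1]] and b = [1, 0]^T. The 2x2 algebra is
 * written out explicitly. */
typedef struct {
    double x[2];      /* state estimate: synchronized clock and skew */
    double M[2][2];   /* estimation error covariance                 */
    double tau;       /* sampling period                             */
    double sg2;       /* sigma_gamma^2: skew process noise variance  */
    double sv2;       /* sigma_v^2: observation noise variance       */
} pll_kf;

/* One step: consume the master timestamp theta[n], update x and M. */
void kf_step(pll_kf *f, double theta) {
    double t = f->tau;
    /* prediction: x_pred = A x, S = A M A^T + C_u          (Eq. 9)  */
    double xp0 = f->x[0] + t * f->x[1], xp1 = f->x[1];
    double S00 = f->M[0][0] + t*(f->M[1][0] + f->M[0][1]) + t*t*f->M[1][1];
    double S01 = f->M[0][1] + t * f->M[1][1];
    double S10 = f->M[1][0] + t * f->M[1][1];
    double S11 = f->M[1][1] + f->sg2;
    /* gain: G = S b (sigma_v^2 + b^T S b)^-1               (Eq. 11) */
    double denom = f->sv2 + S00;
    double G0 = S00 / denom, G1 = S10 / denom;
    /* correction with the innovation theta - b^T A x       (Eq. 8)  */
    double innov = theta - xp0;
    f->x[0] = xp0 + G0 * innov;
    f->x[1] = xp1 + G1 * innov;
    /* M = (I - G b^T) S                                    (Eq. 10) */
    f->M[0][0] = (1.0 - G0) * S00;  f->M[0][1] = (1.0 - G0) * S01;
    f->M[1][0] = S10 - G1 * S00;    f->M[1][1] = S11 - G1 * S01;
}
```

Fed with exact timestamps of a drift-free clock, the estimated clock and skew converge to the true values within a few steps.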
|
olmocr_science_pdfs
|
2024-12-02
|
2024-12-02
|
66a926fd7853e44b21a85c0ca19b34c330bb7cd4
|
1. Introduction
Starting from the 1990s, several Agile methodologies were created and became popular, especially in the IT industry. All of them share common characteristics, which are well described by the *Manifesto for Agile Software Development* [14]. These methodologies are designed to address the most common issues in information systems development (ISD), primarily through iterative development and close, intensive communication between the various project stakeholders. Because of this, Agile methodologies of project management are often considered to have built-in methods of risk mitigation, which reduce the probability of project failure, especially if the project is carried out in a fast-changing environment.
---
*Institute of Organization and Management, Wroclaw University of Technology, ul. Smoluchowskiego 25, 50-372 Wroclaw, e-mail addresses: wojciech.walczak@pwr.edu.pl, dorota.kuchta@pwr.edu.pl*
However, it seems that there is still no clarity as to whether Agile methodologies, with their implicit mechanisms of risk mitigation, sufficiently reduce project risk by themselves, or whether explicit methods of risk management are still required. This discussion is especially intensive within the communities of practice that contribute strongly to the development of Agile methodologies, but it has also been noticed by researchers [5]. Disagreement between various signatories of the Manifesto for Agile Software Development is also visible [14]. Beck, the author of the Extreme Programming methodology (XP), states that XP is not sufficient for risk management in the case of safety- or security-critical projects [1], while Highsmith, the author of the Agile Project Management methodology (APM), suggests that an explicit risk management process might be redundant, because the APM methodology was designed to handle high-risk projects [4]. On the one hand, the scientific literature proposes methods for risk management inspired by Agile software development methodologies [9]; at the same time, other research identifies areas of risk management which are not covered implicitly by Agile methodologies and proposes solutions as to how explicit risk management can be combined with an Agile methodology [7, 8, 11, 12]. To contribute to this discussion, the authors of this paper investigated whether there are any risks caused directly by the usage of an Agile methodology of project management, or risks which become more significant to a project when Agile methodologies are applied. If such risks can be identified, then this is evidence that an explicit process of risk management is still required, since these risks are evidently not mitigated by the Agile methodology itself.
The results of this research may be especially meaningful for any organization that is about to decide whether to introduce Agile methodologies – by either entirely replacing the methodology currently used in the organization with an Agile one or utilizing Agile methodologies in just a single project. Furthermore, the authors believe that this paper will help any organization that is using or plans to use Agile methodologies, by providing input that will help it construct a better methodology and achieve a faster and less problematic process of transitioning to Agile methodologies.
The aim of this research is to create a list of risks characteristic of Agile methodologies of project management in the context of ISD. Following the ISO 31000 standard [13], the authors define risk as the “effect of uncertainty on objectives”, where “an effect is a deviation from the expected – positive and/or negative” and a risk is “characterized by reference to potential events and consequences, or a combination of these”. All of the risks listed in this paper have their source in the Agile methodology itself or their probability/consequences are greater when these methodologies are in use. This research is not aimed at identifying the most important risks in information system development in general, thus the outcome is not a complete list of risks in Agile ISD. Since “it is widely recognised that Agile methods themselves were introduced to combat well-known risks associated with ISD project failures such as scope creep, cost overruns and schedule pressures” [2], the list of risks presented may be perceived as a list of secondary risks to be considered when applying Agile methods. This list is not generic, since the research was done based on a limited number of projects, which were carried out in the same company – Agile projects in another company may have risks that were not captured during this research and, at the same time, some risks, which were very visible and important in the projects examined, might be irrelevant for other organizations.
2. Research methodology and background
Six projects were examined during the research. All of the projects were conducted in the same, large company operating in the telecommunications industry. The goal of each project was to customize a complex telecommunications/software product to the needs of an external customer and deploy it at the customer’s site. In each of the projects, major development and testing efforts were required. The product to be customized and technology used were the same in all the projects. However, the requirements for the customization of the product and the environment into which the product was to be introduced were unique. Although the project teams were geographically distributed, the members of the development team working on a given project were always located in the same center. In the case of three projects, the development team was located in Poland, in another two projects the development team was located in South Africa and one project had a development team located in Germany. The customers were large mobile network operators from Europe, Africa and Latin America. In all of the projects considered, the customer was located in another country than the project development team.
The methodology used for software development and project management in the examined projects was based on the most popular Agile methodologies: Scrum [10], Extreme Programming [1], and Agile Project Management [4]. Although in the case of five of the six investigated projects the project team had little or no earlier experience with Agile methodologies, the projects are considered to be Agile, since the project lifecycle was fully in line with Agile methodologies, project roles were adapted to or replaced with Agile equivalents, important Agile practices were introduced and all the project team members had been thoroughly trained in Agile methods of development. The Scrum methodology was fully introduced with only minor adaptations, where the main difference to pure Scrum was the introduction of a product owner team, even if there was just one Scrum Development Team. Scrum, with 2- or 3-week Sprints, was the primary methodology used in all six projects. Additionally, the following XP and APM practices were introduced: continuous integration, sit together (the entire development team worked in the same room; only project managers and the people responsible for communication with customers or on-site activities usually worked at a different location, often in a different country), Whole Team, Informative Workspace, Stories (not in all projects), Incremental Design (only to a limited extent, since the projects were about customizing an existing product and not building a completely new one), Real Customer Involvement (only to a limited extent, not in all projects), Incremental Deployment (not in all projects) and Team Continuity. The practices from the Agile Project Management methodology that were introduced include: Project Data Sheet, Customer-Development Team Interface (the product owner team was responsible for providing information to the Development Team about the vision for a product, its required features and their priorities), Feature Cards, “Release, Milestone, and Iteration Plan”, Iteration 0 (not in all projects), Low-Cost Change, Daily Team Integration meetings, Participatory Decision Making, Daily Interaction with the Customer Team, “Product, Project, and Team Review and Adaptive Actions”. Please note that different terminology is used in different Agile methodologies, e.g. the “Product, Project, and Team Review” practice from APM is the same process as the Sprint Review and Sprint Retrospective ceremonies from Scrum. In this paper, primarily the Scrum terminology will be used and the names from other methodologies will be used only if there is no equivalent expression in the Scrum terminology.
Fig. 1. Risk breakdown structure in use (based on [6])
One-on-one, semi-structured interviews were conducted with key project stakeholders after the end of the project: project managers, product owners, scrum masters, line managers and experienced development team members (who had played the role of a project leader in earlier projects). The questions aimed to identify the strengths and weaknesses of the observed projects related to the Agile methodology, as well as threats and opportunities for future Agile projects. The organization in which the projects were carried out had in the past a mature methodology based on the waterfall model, and the interviewees mostly provided a comparison of the newly introduced Agile methodology with experiences from projects conducted earlier using the waterfall approach. All the interviews were recorded and a detailed analysis of the input obtained was done based on the recorded material. The interviews were supplemented with an analysis of the project documents (including Agile artifacts) and active observation.
In order to structure the information gathered using the method described above, a Risk Breakdown Structure (RBS) is used, which is created based on the classes and elements of the SEI risk taxonomy [6] (Fig. 1). The following sections of the paper are structured according to the RBS.
3. Development cycle risks
The first class of risks covers all the opportunities and threats that are related to constructing the product for which the project was established. Therefore, these risks are connected with activities that have a direct effect on the product.
3.1. Requirements risks
This section covers risks that are related to the requirements for the product, i.e. risks that may occur during analysis, definition and management of these requirements, as well as risks that have an effect on the degree to which the end product satisfies the needs of the customer and users.
Agile methodologies are designed to allow very cheap changes to the scope of a project, which is especially useful for projects with unpredictable or continuously evolving requirements. Easy and cheap changes in the requirements and scope of a project are considered to be an opportunity, since several preconditions need to be met in order to make them happen. From the perspective of the development process, it is sufficient to change the content of the Product Backlog. Although in the case of some projects (e.g. fixed price) there might still be a need to use some of the traditional techniques of scope management, even in these cases the use of Agile methodologies introduces a chance that handling any formal request for a change will be much cheaper and faster.
To maximize this opportunity, the Agile methodology should be introduced completely into a project and the project manager should actively interact with the product owner (possibly he could take this role) and manage the scope of the project through the Product Backlog. Also, detailed clarifications on the items in the Product Backlog and design activities should be made as late as possible in the project – in the case of an Agile project, this means that they have to be finished just before the iteration in which the item is to be delivered. The customer needs to be educated in Agile methods and willing to take an active role in the project during the Sprints.
3.2. Design risks
The risks related to design that were observed in the projects investigated were not caused by the Agile methodology or the transition to it, nor were there any reasons to believe that their severity or probability would be different if another methodology were used. This is caused primarily by keeping the same approach to design in Agile projects as earlier when using the waterfall methodology – the only difference was the point in time at which the design was prepared: instead of having a single design phase early in the project, the design of features was done in parallel to the implementation of other features. Another reason for this is that the projects investigated were about customizing a product to a customer’s needs, therefore the overall architecture and design of the product were known upfront and during the project the design tasks focused only on the features to be customized or added to the product.
3.3. Implementation risks
As in the case of design risks, no implementation risks characteristic of the Agile methodology were found.
3.4. Test and evaluation risks
It is especially important to tackle risks related to verification or validation before the start of Sprints, since a reactive response to the effects of such risks might be very expensive. In this category, the risk of neglecting continuous integration (CI) was identified.
To keep the development efforts at a stable level, a team has to pay great attention to continuous integration and ensure that the product under development passes all the tests based on requirements implemented in earlier Sprints. To achieve this, the team has to be self-disciplined and also see the value of investing in continuous integration (introduction of the continuous integration system and its maintenance in every Sprint require additional efforts from the project team). Neglecting continuous integration will result in increasing the effort required to make the required progress in each subsequent Sprint, thus the level of advancement of the project becomes more unclear (the end date of the project becomes more unpredictable) and also the possibility of delivering early releases of the working product to the customer is lost.
Possible responses to such a risk are: training the project team and project management so that they recognize the importance of continuous integration, using a continuous integration and test system that is cheap to introduce and maintain, and adding criteria about continuous integration to the Definition of Done while ensuring that they are strictly followed. If a Sprint 0 is introduced into the project, then establishing the CI environment should definitely be included in it.
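As an illustration, the Definition-of-Done rule suggested above can be sketched as a small gate function that a CI system evaluates on every integration. The function name and the test-result format are hypothetical assumptions for the sketch, not part of the methodology used in the projects examined:

```python
def increment_is_done(current_sprint_results, regression_results):
    """Hypothetical Definition-of-Done gate for a Sprint increment.

    current_sprint_results: dict mapping the names of tests for the
        current Sprint's requirements to a pass/fail boolean
    regression_results: dict mapping tests for requirements implemented
        in earlier Sprints to a pass/fail boolean

    The increment counts as 'Done' only if the new tests AND every
    regression test from earlier Sprints pass -- the property that
    neglecting continuous integration silently erodes.
    """
    return all(current_sprint_results.values()) and all(regression_results.values())
```

Evaluating such a gate automatically in every Sprint means that a failing regression test from an early Sprint blocks the ‘Done’ status of later work immediately, instead of surfacing as an unclear level of advancement at the end of the project.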
4. Development environment risks
As noticed in [6], the process/methodology of software development and the environment in which the project is carried out may also be sources of risks. These are the most important risks from the perspective of the goal of this paper.
4.1. Development process risks
The Agile methodology used in the projects investigated is considered to be a methodology for both project management and software development at the same time. Risks that are caused by the Agile methodology itself can be found in this section. The risks discussed here are either inherent to the Agile approach or have their source in the organizational transition towards the Agile methodology.
4.1.1. Inefficient Scrum meetings, ineffective Scrum roles
In an Agile project, many different meetings must be conducted regularly in order to make the methodology and project work. The entire project team takes part in them, thus a significant amount of project time and effort is spent just on these meetings. Also, using the Scrum methodology, new project roles are introduced – the roles of Scrum Master and Product Owner are not similar to any of the roles from traditional methodologies of project management. Although some project roles might disappear with the introduction of the Agile methodology, it was observed that often the cost of the Scrum roles introduced was higher than the cost of the project roles eliminated after the adoption of Agile. The roles of Scrum Master and Product Owner, and the time spent on meetings, are treated here as process overheads, since they do not directly generate value for the customer. Most of the interviewees recognized the importance of the Scrum meetings and roles. However, they pointed out that the overheads were too great. This issue was noticed in each of the projects investigated, but the amount of unproductive hours varied between the projects. Inefficient meetings or ineffective roles may result in project delays and cost overruns.
Two different approaches can be taken to mitigate this risk: the process may be tailored to reduce the amount of time spent on unproductive tasks or meetings and roles can be optimized. The first approach is justified in some cases but brings a secondary risk of dysfunctional/incomplete implementation of the Agile methodology. The second approach might be implemented by establishing fixed rules for the meetings (e.g. fixed agendas for every Scrum meeting and time-boxing), providing the team with strong moderation, or training and coaching about the Agile methodologies. The assignment of the Scrum roles is another possible source of risk – in particular, acquiring an experienced Scrum Master or investing in improving the skills of a novice Scrum Master is important, since an unskilled person in this role not only consumes more effort in his/her own role but also increases the amount of unproductive time spent by the rest of the team.
4.1.2. The whole team working collectively to reach the project goal
The Agile methodologies increase the likelihood of having the entire project team working collectively towards the project goal. Agile practices such as a collocated project team, highly visible Product and Sprint Backlogs, and face to face Scrum meetings with all the team members (especially Daily Scrum meetings) establish effective distribution of information on the progress of a project between team members, remove communication barriers and foster good teamwork. At any point of time, all the team members have the same goal, while in traditional methodologies of software development there are separate sub-teams often only interested in reaching their own milestones. The Agile methodologies also eliminate the risk of conflict between groups of people with different skills who in traditional projects would be in different sub-teams. In the Agile projects investigated, different observations have been made on how the opportunity of the whole team working collectively may materialize. Some team members build a wider set of skills to be able to work on the most important current tasks, instead of limiting themselves to only the portion of tasks in which they were specialized at the beginning of the project. The teams also established ad hoc task forces consisting of team members with different expertise to tackle difficult problems. In all of the projects, team members increased awareness and understanding of the work outside of their own specialization.
To exploit this opportunity, the following approach to personnel management within the organization should be adopted – evaluation of employee performance or setting periodical targets for employees must take into consideration and appreciate the impact of an individual on the overall goals of a project, and should not take the job description and official work responsibilities as the primary criteria. Also, a portion of the team’s capacity could be reserved for the team members to learn new skills. Another response would be ensuring that the Agile practices involving an entire team are properly implemented.
4.1.3. Team not able to self-organize and make group decisions
In traditional project management, the project team is not required to self-organize. Therefore, any risk related to self-organization is characteristic of Agile projects. The team might not be able to self-organize due to internal conflicts within the project team which prevent the team from reaching a consensus. Other reasons for such problems might be the team being overwhelmed by the amount of information to be processed and the number of decisions to be taken, or very narrow specializations of team members, which make it impossible or difficult for individuals to understand the big picture. It is worth noticing that the probability and impact of conflict between team members are greater when the project team adopts the Agile methodology, since in traditional project management there is always a single person responsible for decisions and the progress of the project does not get blocked if the project team is not able to reach a consensus in its discussions. This risk is more likely in organizations where the project team is established for a single project and disbanded after it, because people of different backgrounds, habits and beliefs are expected to make decisions related to a common mode of operation, estimate requirements, and plan a Sprint right from the beginning of the project.
If such a risk materializes, then the problem might be difficult to solve, since the development team might reject suggestions from outside the team using self-organization as an excuse. If the team is not able to set common rules of working on its own, or not able to enforce that team members conform to these rules, the reaction could be to replace some of the development team members or take the privilege of self-organization away from the team (i.e. establish a formal leader). However, since in the mid or long term the benefits from self-organization (e.g. increase in team performance) might outweigh the problems described earlier (which are most likely to occur at the beginning of a project), the best reactive response to such a risk might be helping the team to go through the storming stage of team dynamics.
Possible proactive responses to such a risk are: team building activities at the beginning of the project, providing thorough training in Agile methods to all team members, acquisition of an experienced Scrum Master, Agile coaching for the team during at least the first several Sprints of the project and limiting the self-organization of the team (e.g. by identifying ground rules, which cannot be changed without the approval of the project manager or Scrum Master). The last response reduces not only the negative risk described in this paragraph but also the expected benefits originating from self-organization.
4.1.4. Wrong team decisions
In Agile methodologies, it is assumed that the group decisions made by a team are better than the decisions of individuals. However, there is a risk that the opposite case will occur. In Agile methodologies, the development team takes over some of the managerial responsibilities – they do not only execute the project tasks but also control task execution, organize their work and make key decisions on which the future of the project depends. In traditional project management, decisions in these areas were reserved to project managers, project leaders and team managers, who were trained and experienced in managerial work. In an Agile project, wrong decisions might be made when the team does not have a full overview of the problem while making a decision, or when a decision is not taken by the team members based on merit but on other criteria (e.g. the popularity or determination of individual team members). Teams often use the method of democratic voting to speed up the decision-making process, which introduces the risk that the options preferred by a minority which is most knowledgeable on a subject will lose to those favored by the majority.
As a response to such a risk, it has to be ensured that the team consists of people who have not only technical competence but also an end-to-end understanding of the entire project. This can be achieved by including former formal leaders into the development team (many of the formal project or technical leadership roles are expected to disappear with the introduction of the Agile methodology). Furthermore, it is required that all the team members participate in the discussions and are motivated to actively contribute to them (engineers are not always keen to take part in the discussions and former project leaders might feel demoted because of losing their formal leadership role). Other responses to such a risk include training the team on Agile methods (to achieve an understanding of the additional responsibilities of a team when following the Agile methodologies). The moderation of discussions is equally important, so that the merits of an opinion are not lost and that every team member has equal opportunity to present his/her arguments – this can be achieved by hiring an experienced Scrum master/meeting facilitator.
4.1.5. Misuse of self-organization to stop/revert the adoption of the Agile methodology
Self-organization, being an integral concept in any of the Agile methodologies, may be misused by the project team and used against the adoption of Agile methodologies, especially if the project team members are new to the concept of Agile software development. In the course of an Agile project, the team meets at the end of every Sprint to discuss the way in which they work and introduce improvements. The first such meeting occurs after the first Sprint, i.e. not more than one month after the beginning of the project. During this very short time, the project team often has no chance to observe the benefits of using the Agile methodologies in practice but they may already be exposed to some of the elements of the methodology that may bring arguments against it (e.g. long, ineffective meetings – a risk discussed separately). Resistance to change, the first, possibly negative, impressions about Agile processes, and/or little experience and understanding of Agile methods may result in the team deciding to reverse adoption of Agile methodologies or counteract the methodology unintentionally (e.g. by removing crucial elements of the methodology while trying to improve the process).
Such risks can be mitigated by providing solid training to project team members about Agile methodologies or ensuring that the role of Scrum Master will be taken by an experienced person with authority in the project team. Limiting the self-organization of the project team (as mentioned earlier in the paper) is another possible response. Elimination of risk by resigning from Agile methodologies (before they are introduced to the team) is also an option worth considering. Instead of complete adoption of the Agile methodology, a smaller change might be introduced which would cause less resistance. Further steps in the adoption of the Agile methodology might be taken in later projects. The last response to such risk may eliminate many of the benefits that the project would gain by complete adoption of the Agile methodology but it also reduces the risk associated with the next project handled by the team.
4.2. Development system risks
There is a group of risks related to the tools used in a project, which include both software tools and hardware (including the testing infrastructure). With the introduction of the Agile methodology, there are different requirements regarding these tools – the risks mentioned below cover scenarios where some of these requirements are not fulfilled.
4.2.1. A lack of or limited compatibility of tools with Agile practices
Development and testing tools that work well in the case of traditional methodologies of project management may have limited or no compatibility with Agile engineering practices, such as test driven development (TDD) or continuous integration. Test tools may not have the flexibility to write test cases before implementation is started (TDD) or in parallel to the implementation of the relevant functionalities (short iterations force this approach to development). Moreover, the run time of the testing process might not provide quick enough feedback to the development process. Finally, the efforts to run regression tests every Sprint might be too high, e.g. in the case where test cases or a test suite require time-consuming modifications after every change in the implementation of the system. The problems mentioned above have been observed in the projects investigated, because some of the proprietary tools (crucial to all of the projects) had serious limitations in terms of their compatibility with Agile practices, and there were no plans in the organization to make significant investments in such tools.
All the risks related to technology and development/testing tools should be considered before the start of the first Sprint in the project. If it is not possible to run regression tests at low cost in each Sprint, to automate the execution of test cases, or to implement and test a single feature within a single Sprint (not longer than one month), then the technology and tools used are not compatible with Agile methodologies and it has to be considered whether a change in technology/tools is possible – if not, then the project should not be driven according to the Agile approach.
4.2.2. Missing infrastructure at the customer’s site
Infrastructure that enables receiving early increment releases of the product from the vendor is required at the customer’s site to fully benefit from Agile development processes and to provide valuable feedback to the development team. However, such infrastructure might be missing if the software being implemented in the project requires dedicated hardware and/or complex integration of hardware and network components – in such cases, installation of the infrastructure at the customer’s site, together with the time required for shipment of the hardware from the HW vendor, may be a separate project planned to last even several months. To mitigate this risk, delivery of the test platform to the customer’s premises should be planned in close connection with the software development project.
4.3. Management process risks
As mentioned earlier, the Agile methodology is also treated as a methodology of project management. The source of the risks described below is the methodology itself.
4.3.1. Better project monitoring and control
Improved control over a project is possible thanks to the high visibility of the state of the project at any moment, so that any discrepancy with the plan can be detected immediately. The project manager is therefore informed very early when there is a high risk that one or more of the project constraints cannot be met, leaving enough time to react. The project manager can also use reliable forecasts of the project's end date (their accuracy increases with each Sprint) based on metrics such as team velocity (measured empirically) and the size of the backlog (always expressed by up-to-date estimates).
To maximize this opportunity, it has to be ensured that artifacts of the Agile methodology, such as burn-down charts or backlogs, are properly implemented and based on reliable information. As a precondition, the Product Backlog needs to be up to date at all times and the estimates of requirements need to be accurate. To keep the Product Backlog up to date, the team needs to reserve a portion of their time to clarify, refine and estimate (or re-estimate) the requirements. Introducing a Definition of Done can help with establishing the current state of the Product Backlog, since it leaves no doubt about how much work in the project has already been done and how much is pending.
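The forecast described above can be sketched as a simple rule-of-thumb calculation (this formula is a common heuristic, not something prescribed by Scrum or by the projects investigated): divide the remaining backlog by the empirically measured average velocity.

```python
from math import ceil

def forecast_remaining_sprints(backlog_points, velocities):
    """Estimate how many Sprints are left, given the remaining backlog
    (in story points) and the velocities measured in past Sprints.
    A rule-of-thumb forecast; its accuracy grows as more Sprints are
    measured, which matches the observation that forecast accuracy
    increases with each Sprint."""
    avg_velocity = sum(velocities) / len(velocities)
    return ceil(backlog_points / avg_velocity)

# A backlog of 120 points and measured velocities of 18, 22 and 20
# points per Sprint (average 20) forecast 6 more Sprints.
print(forecast_remaining_sprints(120, [18, 22, 20]))  # 6
```

Note that the forecast is only as good as its inputs: an out-of-date backlog or velocities distorted by partially done work (no Definition of Done) make the estimate unreliable, which is exactly the precondition stated above.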
4.3.2. Shorter project duration from a customer’s perspective
From the customer's perspective, a project does not end when the product is finished and accepted: the product still requires integration into the customer's infrastructure (and with other systems), configuration, loading with data (migrated from replaced products) and training of the system's users. Agile methodologies allow early pre-releases of working product increments, which may be shipped to the customer by the vendor and enable a very early start of some of these activities at the customer's site. Also, final acceptance of the product at the end of the project takes much less time, since the vast majority of faults have been addressed earlier and disagreements about requirements or functionality may well have been clarified at earlier stages of the project. Thus, there is an opportunity that the overall project, as seen by the customer, will be finished much faster (even if the vendor delivers the product at the same time as it would have done using a traditional methodology).
To enhance this opportunity, the customer needs to be educated about these possibilities and a good communication plan has to be established that fosters frequent, direct and honest communication between the project teams on the vendor's and customer's sides. Also, the efforts of various specialists from the customer organization should be distributed differently over time, and it should be ensured that the proper level of resources on the customer's side is available at any point in the project.
4.3.3. The methodology of project management works against individual career plans
The Agile methodologies follow the concept of self-organizing and self-managing teams, which formally have flat structures. The traditional roles of project leader or team leader do not exist in them, which limits the future career paths of team members. Employees who plan to build their careers towards managerial positions might find it more difficult to realize their ambitions in an Agile project, since the structure of the project team lacks formal middle-management positions. In the projects investigated this problem was very noticeable: some of the project leaders lost their formal position of leadership, and others had personal development plans, previously agreed with their managers, which became obsolete after the introduction of the Agile methodology. Such individuals perceive the adoption of Agile as being against their own personal interest and might lose the motivation to work.
All the people joining such a team should be informed from the very beginning about the structure of the team in an Agile project. If an employee was hired before the transition to Agile, then a considerable amount of time should be spent on discussing new career possibilities; otherwise the organization might lose some of its most experienced and dedicated team members. During the adoption of Agile, a transition period should be considered while people formerly holding project leadership roles transfer to new Agile roles.
4.4. Work environment risks
Risks to an Agile project may also come from the work environment in which the project is executed – if it is not adapted to Agile methodologies, then it is a source of major risks to the project.
4.4.1. Organization does not follow the rules of Agile
When Agile methodologies are introduced into an organization for the first time, there will be organizational obstacles to their adoption. The organization might expect the same behavior from the project team and individual employees as in traditional projects. Such expectations may be harmful to the adoption of Agile and, as a result, to the project: sharing employees between projects, making a single person responsible for an entire team, progress reports not adapted to Agile processes, a preference for vertical communication across the organization (escalations), the possibility of removing or replacing project team members at any time (e.g. when a project with a higher priority starts), distracting team members with tasks not related to the ongoing Sprint, or using project management and reporting tools without any adaptation to Agile. The organizational structure may also be problematic for the project: if the project team members have different line managers or the organizational structure changes a lot, the integrity of the project team is endangered, while good team work is a key element of the Agile methodologies and a precondition for the benefits and opportunities resulting from this way of working.
The risks related to the environment around the project may be reduced in two ways. The first is to mock the typical communication interfaces of the traditional mode of development between the project team and the rest of the organization (fulfilling all of the formal requirements of traditional project monitoring and reporting). This response slows down the progress of a project and also does not allow the organization to fully benefit from the introduction of the Agile methodology in this project.
Another reaction to such risks is to adopt Agile not only at the project level but across the entire organization. Since the transformation of the organization might take longer than the project, the first changes to the organization should focus on communication with the project team, the assignment of people to projects (stabilizing the project teams) and establishing support for the project teams in adopting Agile methods and in solving impediments that the team encounters during project work.
4.4.2. Dependence on other teams
Agile methodologies promote cross-functional project teams that can implement any of the customer requirements independently of any other team, and these methodologies are built on the assumption that the project team has full control over all of the project's deliverables. In reality, however, such a situation may occur only when a project involves building a new product from scratch. Nowadays, pre-fabricated components are very often used during the development of a new product to save time and cost, and the team uses various software tools, which might be developed by an external company or in house. In the organization carrying out the projects investigated, some components of the end solution, as well as proprietary tools, were developed by other teams, and the responsibility for these components and tools stayed with them. This dependence brought different kinds of issues to all six projects investigated, and similar risks may be relevant to any project where the project team depends on deliverables provided by a different team.
Such risks include: faulty deliverables provided to the project team, resulting in unexpected delays due to the time required to correct faults; inaccurate estimation of the project because of an inappropriate specification of the deliverables to be provided by other teams; the project team being unable to implement some of the customer's requirements on its own when a deliverable that has to be modified is the responsibility of another team; and the risk that the team providing the deliverables misunderstands the customer's requirements (due to a lack of direct contact with the customer). The responses to these risks should focus on the hosting organization of the project and should aim either to reduce or eliminate the dependences between the project team and any other teams, or to establish close cooperation and effective communication with all relevant teams.
5. Programmatic risks
This section contains mostly extrinsic risks – the project manager typically has no direct control over these risks.
5.1. Resource risks
All six projects investigated were fixed-price projects with a fixed deadline. The project team, including the project manager, had no or limited influence on a number of resources, such as the amount of time, the number of people available, the quality of the team (adequacy of skills and experience), the budget or the available facilities (including those on the customer's side). However, it turned out that with the introduction of Agile methodologies there are several opportunities which can be enhanced even during the project, while in the waterfall model these opportunities are not present to the same extent.
5.1.1. Increasing team effectiveness
The Agile methodologies create the opportunity for the performance of a team to improve during a project, and this improvement may be much greater than would be expected with a traditional methodology. The Agile methodology gives the project team opportunities to discuss the way the project is conducted (e.g. during a Sprint Retrospective), and the team often finds areas for improvement. It was noticed that while in some of the projects investigated a big improvement in the team's effectiveness was observed, other project teams performed at a similar level from the first Sprint to the last.
To make the most of this opportunity, the Scrum Development Team should be given the freedom to decide on and experiment with their ways of working. This process of improvement should be driven from inside the Scrum Development Team, but it can be facilitated by measuring team performance and making the progress and results of the project visible to each individual in the team. Great emphasis should be placed on the Sprint Retrospective meeting: the project team needs to be aware of the importance of this meeting and conduct it effectively. Some percentage of the team's capacity and of the project budget should be reserved for improvement-related activities during a Sprint. Attempts to save time and money in the short term (e.g. by dropping the retrospective meeting or abandoning agreed means of improvement) may be more expensive in the medium and long term, since the project team will not increase its efficiency.
5.1.2. Utilization of full team capacity
With a traditional methodology of project management, balancing the workload between team members is done through the project schedule and requires the engagement of project managers and project leaders. In an Agile project this is done by the project team itself: a well-informed team is able to react immediately to abnormalities in the execution of a task and adapt the assignment of team members to tasks so that none of them is ever idle during the project. Therefore, project tasks and their distribution across team members might be optimized to a much higher degree than would be possible in the traditional, plan-driven approach. Also, low-performing team members become visible when the Agile practice of Daily Scrum/Daily Stand-up meetings is in place, which provides an opportunity to reduce the wasted capacity of the team.
To make the most of this opportunity, the Agile practices at Sprint level have to be well implemented. Additionally, the team members should have full freedom in selecting any tasks from the Sprint Backlog that they want, even if this results in a much longer duration of a given task because team members first have to learn new skills. Undertaking actions to expand the area of specialization of each team member can also be treated as a separate response to such a risk.
5.1.3. Insufficient knowledge/understanding of Agile methodology
In each of the projects investigated, an enabling plan was in place, consisting of training and support from an Agile coach, normally provided during the first two months of the project. However, two of the projects did not follow this pattern: in one, the enabling was delivered with a delay, and in the other, only a reduced scope of enabling was provided to the team. Both of these project teams suffered from severe problems because some Agile practices were misinterpreted. The problems observed include conflicts within a team due to different interpretations of the methodology, abandoning the methodology and falling into a chaotic way of working, and low morale in the project team. It is worth mentioning that some individuals were noticed to have an aversion to Agile methodologies even before they gained proper knowledge and understanding of this approach; in such cases, the attitude of an individual could be a major impediment in the learning process. It was also observed that even a single team member who is not sufficiently educated in Agile methodologies might cause serious disturbance to the whole team, especially if this person has a strong position in the team (e.g. due to technical competence and experience) and a strongly skeptical attitude towards the Agile approach.
These examples illustrate the risks of a lack of knowledge and deep understanding of Agile methodologies in the project team. Responses to such risks include educational activities and support from an experienced Agile coach: a person who can explain the meaning of and reasons for the various Agile practices while the team searches for possible improvements to its ways of working, and who can also advise the team on how other teams implement the Agile process. Even if a project team receives good training in the Agile methodologies, it has to be kept in mind that the progress of the project will be strongly affected during the first iterations, since the team is discovering its own way of working and adapting Agile practices to its project. If training and expert support cannot be provided then, as the projects investigated in this research show, the introduction of the Agile methodology may not bring any benefits to the project and can even be harmful, introducing chaos, conflict and low morale into the project team.
5.2. Contract risks
Time and material contracts are considered optimal for Agile projects. All of the projects investigated, however, were fixed-price contracts. There were two primary reasons for this: the customers preferred this type of contract, and the vendor company had limited experience with selling time and material projects. A fixed-price contract is an impediment to becoming fully Agile, since it is safer for the vendor to defend the initial assumptions of the project (in order to stay within the project constraints), and a formal change control system is required even though the actual software development is done according to the Agile methodology. With a time and material or cost-based contract, the customer would gain additional benefits from the Agile methodology, since the role of Product Owner could be assigned to a customer's representative, giving the customer the possibility of changing the direction or adjusting the scope of the project at any point. In a fixed-price project, the role of Product Owner has to be assigned within the vendor company, otherwise there is a very high risk of scope creep. To reduce the risks originating in a fixed-price contract, it is recommended to introduce flexibility into the scope of the project by making appropriate adaptations to the contractual terms (freezing the definition of the scope, especially at a detailed level, should be avoided) or by breaking the contract down into several smaller ones (e.g. ordering groups of product features instead of placing a single order for the whole product).
5.3. Program interface risks
Based on the projects investigated, it can be said that interactions between the vendor company and the customer are a major source of risks in Agile projects. In each of the projects investigated, the customer was a company of comparable size to the vendor, but each customer behaved differently. According to Agile methodologies, close cooperation with the customer is required at all stages of a project; however, the methodologies do not explain how to ensure that this cooperation is at the right level. The risks to a project that depend on the customer's approach are described below.
5.3.1. Customer interacts in a “traditional” way
Some of the customers treated an Agile project in the same way as any other project. Such a customer was mostly active at the beginning of the project, when the contract was signed and the requirements were specified, and at the end of the project, when the project deliverables were accepted, but had little or no interest in the middle period of the project. The project team at the vendor's site then had to make assumptions during the project instead of contacting the customer directly and using his feedback. This is subject to similar risks as the waterfall model: if the assumptions are wrong, the resulting product will not meet the customer's expectations. A lack of real customer involvement during the iterations introduces overheads into the project (e.g. the necessity of translating the requirement specification document into the Product Backlog) and disables the biggest advantages of Agile methodologies.
To mitigate such a risk, the customer should be informed in advance about the desired level of interaction, so that work on the customer's side can be organized to meet the vendor's expectations. Points of interaction during a Sprint, together with the required presence of project stakeholders, might also be explicitly mentioned in the project contract.
5.3.2. Customer not able to provide valuable feedback in time
Because of the size and complexity of the customer organization, or because of limited interest in the middle period of the project, the customer organization might be unable to provide valuable and complete feedback to the project team, e.g. responses to inquiries about requirements and their priorities, or feedback about the increments shipped to the customer or demonstrated at the end of a given Sprint. Feedback from the customer or responses to inquiries might not be delivered to the vendor in time. The project team works in very short Sprints, and prior to the start of the next Sprint a significant portion of the requirements might need to be clarified so that the development team can plan the Sprint. However, when a large number of stakeholders are involved on the customer's side, gathering the details of these requirements and their priorities might require more time than it takes a well-functioning development team to implement them.
To mitigate this risk, the customer needs to be notified in advance about the required level of communication, so that a communication plan can be established within the customer organization. Furthermore, a single person should be identified on the customer’s side to serve as a contact person for the clarification of requirements – this should be someone capable of discussing the requirements on a technical level.
6. Summary and conclusions
Based on a series of interviews with project team members from six Agile projects, who also had extensive experience with the waterfall model, it was possible to identify a list of risks that are either directly caused by the Agile methodology or require more attention when this methodology is used. These risks include threats to the project as well as opportunities, and both extrinsic and intrinsic risks. Dealing with some of these risks is considered critical to project success, but none of them are appropriately managed by the Agile methodology itself. Therefore, it can be concluded that the implicit risk management built into the Agile methodologies is not sufficient, and explicit risk management processes should also be applied to an Agile project.
A significant portion of the identified risks are related to the introduction of the Agile methodology as a new software development process in an organization, or to an imperfect implementation of the Agile methodology. While such risks are most likely to occur in an organization that does not have much experience in Agile development, they remain valid for any Agile project. The Agile methodologies are empirical, which means that only a generic framework of the methodology is codified and the methodology actually used differs from team to team and from project to project. Therefore, every time a new project starts or a new project team is established, the team has to go through the same process of discovering the methodology, and during this period all of the risks identified are likely to be present. Also, during a project the team and the methodology might be destabilized (in the projects investigated this was usually connected with changes in the composition of the project team), so these risks are present throughout the whole duration of the project.
The results described in this article cannot be generalized, because all of the projects investigated were carried out in the same company and all of them involved customization of the same product. The research shows that, in this concrete company and with this type of project, there were project risks which were not mitigated by the Agile methodology. The research was based on projects carried out in different countries, and the various customers and project teams came from various cultures, but further research based on a larger number of projects, carried out in various companies and fields, is required.
Received 11 May 2013
Accepted 18 December 2013
Data Propagation Delay Constraints in Multi-Rate Systems – Deadlines vs. Job-Level Dependencies
Tobias Klaus*, Florian Franzmann*, Matthias Becker†, Peter Ulbrich*
*Friedrich-Alexander University Erlangen-Nürnberg (FAU), Distributed Systems and Operating Systems
Email: {klaus, franzmann, ulbrich}@cs.fau.de
†KTH Royal Institute of Technology, Electronics and Embedded Systems, School of ICT
Email: mabecker@kth.se
ABSTRACT
Many industrial areas are faced with a continuous increase in system complexity, while systems need to satisfy stringent timing requirements, which are traditionally based on the tasks’ local deadlines. However, correct functionality is subject to high-level timing requirements on data propagation through a set of semantically related tasks. Since distributed concurrent engineering is often used to deal with the complexity of such systems, violations of data propagation delay constraints are only visible at late development stages, where changes in system design become increasingly expensive.
In this paper, we leverage job-level dependencies (JLDs) that can be specified at early development stages to guarantee data propagation delay constraints. Therefore, we present an approach that extends the Real-Time Systems Compiler to enforce the JLDs in actual multicore schedules. This strategy enables us to perform extensive evaluations of the effectiveness of JLDs in combination with contemporary allocation and scheduling algorithms, where we observed schedulability improvements of up to 42%. Additionally, we identified the effect of the number of available cores on the data propagation of data through a chain of tasks (so-called cause-effect chains) [12, 15, 24, 41]. Challenges arise, as the different tasks are often independently triggered, possibly at different periods, which leads to complex over- and under-sampling situations that make their timing analysis cumbersome.
1 INTRODUCTION
The majority of embedded applications is subject to strict timing constraints. Here, not only the correctness of the computed results is of importance but also their availability at the correct time. The main focus typically lies on the local deadlines of individual periodic tasks that are scheduled by an operating system. However, many application domains require further timing guarantees on the propagation of data through chains of communicating tasks.
In the automotive industry, the complexity and number of software functions that are integrated into a modern car are steadily increasing. As the software development process is driven by the distributed concurrent engineering paradigm [34], different functionality is developed by different vendors and integrated into the system at a later stage by the original equipment manufacturer (OEM). Because of this isolation, detailed information about the hardware platform or about other software applications that will share the same platform is unavailable during the software development process. While timing analysis methods are available at the implementation level [12, 19, 28] (where all functionality is integrated and complete system information is available), vendors cannot directly verify data propagation delay constraints during the development process, as the information required by these timing analysis engines is not available. Hence, violations of these timing constraints are typically detected only at the implementation level, late in the development process. Such violations can increase design costs significantly, as the cost of design changes grows massively with each development level [26, 39]. One approach to circumvent this challenge is proposed in [7, 8], where methods are introduced that translate the timing constraints on the data propagation into precedence constraints on selected tasks' jobs, expressed as job-level dependencies (JLDs). This transformation is agnostic of the concrete hardware platform and only requires knowledge about the tasks that are involved in a particular cause-effect chain. Though the theoretical approach of JLDs is sound and the associated guarantees can be trusted, the question remains how they affect scheduling at the end of the development cycle and how close their estimate is to the "real" maximal data ages of concrete schedules.
Here the Real-Time Systems Compiler (RTSC) [37] comes into play, as it bridges the gap between the high-level system analysis performed in [7, 8] and concrete schedules targeting specific real-time operating systems (RTOSes) and hardware platforms. This is done by the automatic application of contemporary allocation and scheduling algorithms without further human intervention. This work presents a study of the interplay between different allocation and scheduling algorithms on a real implementation and the generated JLDs to meet data propagation delay constraints.
Contributions: In this work, we investigate the challenges of integrating data propagation delay constraints into practical implementations by means of the RTSC [37]. With the RTSC, different task-allocation and scheduling methods can be applied to generate static schedules. In general, scheduling methods do not consider data propagation delay constraints in their decision process. These delays not only depend on the tasks that are involved in the cause-effect chain, but also on the actual execution order of the individual tasks' jobs, which results from the applied scheduling algorithm, as well as on the allocation of tasks to cores [12]. Consequently, different allocation and scheduling algorithms can yield different data propagation delays, and a scheduling algorithm that performs well under consideration of task-local deadlines may experience degraded performance when additional data propagation delay constraints are imposed on the system.
JLDs can be generated agnostic of the underlying hardware platform and scheduling algorithm [7, 8]. However, due to this abstract system knowledge, generated JLD sets that, in theory, always result in data propagation delays smaller than the constraints, might not be schedulable on a concrete platform. In this case, the applied scheduling algorithm does not find a valid schedule under consideration of the JLDs, or the number of available processing cores is not sufficient.
An extension of the RTSC is presented that considers JLDs as additional scheduling constraints. With this extension, system configurations can be generated that utilize a time-triggered backend based on the Linux Testbed for Multiprocessor Scheduling in Real-Time Systems (LitmusRT). Extensive evaluations compare generated static schedules (based on several heuristics as well as optimal algorithms) with and without the extension for JLDs. We show that traditional allocation and scheduling algorithms do not influence the resulting data propagation delays and that augmenting the task set with JLDs increases the system schedulability (with respect to task-local deadlines and data propagation delay constraints) by up to 42%.
Outline: The rest of the paper is organized as follows. Section 2 discusses related work. In Section 3 the relevant background information is presented. An overview of our approach is presented in Section 4 and the investigated allocation and scheduling algorithms are discussed in Section 5, followed by the implementation of timing analysis and schedulability test in Section 6. Section 7 presents our evaluation results, and conclusions are drawn in Section 8.
2 RELATED WORK
Scheduling and timing analysis of periodic multi-rate applications is essential in many industrial domains, such as automotive [18] or avionics [13].
Several works address the timing analysis of data propagation delays. Feiertag et al. [12] present calculations for maximum data propagation delays in real-time systems under register communication. They further identify different data propagation delay semantics and highlight their respective importance for system engineers. This analysis has subsequently been implemented in several automotive tools [20, 29]. While this work focuses on the implementation level, Becker et al. [5, 7] present a framework to compute data propagation delays at various levels of timing information, and Forget et al. [13] study the formal verification of data propagation delays in multi-periodic synchronous models. Frise et al. [17] present a timing analysis approach, based on constraint modeling, for data propagation delays in automotive multi-core platforms under different communication models.
Mubeen et al. [27] focus on the selection of task periods in order to meet data propagation delay constraints. Schlafow et al. [38] assign priorities, offsets, and processor mappings to tasks such that data propagation delay constraints are met on a multicore platform. The Logical Execution Time (LET) model [21] is further considered to realize deterministic data propagation delays in automotive systems, as it decouples the data propagation delay from the tasks' execution [10, 18].
Alternatively to influencing data propagation on the implementation level, Becker et al. [7, 8] analyze all possible data propagation paths in a system and then generate an ordering of selected task’s jobs such that data propagation delay constraints are met. JLD constraints are considered in [14] for fixed-priority scheduled systems, and in [30] for dynamic priority scheduled systems. Both works target single processor systems. Time-triggered schedules subject to such precedence constraints are further investigated for many-core platforms in [9, 33, 35].
The work presented in this paper differs from related work in that it extends the design flow of an existing compiler-based tool, the RTSC [37], to consider JLD constraints generated by the methods described in [7], such that the applications' data propagation delay constraints are met. Integration into the RTSC leverages the existing flexibility of this platform, such as support for a large number of available scheduling and allocation algorithms and the execution of resulting schedules on LitmusRT [11] and other platforms. A systematic evaluation of a large number of applications that are subject to data propagation delay constraints is performed using various combinations of allocation and scheduling algorithms, targeting a multicore platform. The evaluation focuses on metrics that are important from a theoretical as well as a practical perspective.
3 BACKGROUND
This section provides the required background information, starting with the system model and the data age constraint that are the main focus of the paper. We further describe the different parts of the Real-Time Systems Compiler and its transformation mechanisms.
3.1 System Model
This section first describes the basic application model and the data propagation delay constraints that are typically found in automotive systems. In order to transform these timing constraints on the data propagation into direct scheduling constraints, JLDs are used.
3.1.1 Application Model. One application is described by the task set $\Gamma$, which contains $n$ periodically activated tasks. A task $\tau_i$ is described by the tuple $(C_i, T_i)$, where $C_i$ denotes the task's worst-case execution time (WCET) and $T_i$ its activation period. Each task has an implicit deadline $D_i = T_i$. The hyperperiod of the task set is the least common multiple of all task periods, $\text{lcm}(\Gamma)$. The $j^{th}$ job of $\tau_i$ is denoted $\tau_i^j$. A cause-effect chain $\zeta$ is represented by a directed acyclic graph (DAG) with a set of vertices $V$ and a set of directed edges $E$. Each vertex represents a task $\tau \in \Gamma$, and each edge constitutes a communication between two tasks. Such a chain can have forks and joins, but the initial task and the final task must be the same for all paths of the chain [2]. As the timing properties of each possible data path are of interest, a chain can be decomposed into several sequential chains; in the remainder of the paper, we therefore only consider sequential chains.
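As a small illustration of this task model, the following sketch computes the hyperperiod $\text{lcm}(\Gamma)$ of a task set; the `Task` class, `hyperperiod` function, and the example task set are our own illustrative names, not part of the RTSC or MECHAniSer APIs.

```python
from dataclasses import dataclass
from math import lcm  # variadic lcm requires Python >= 3.9

@dataclass(frozen=True)
class Task:
    name: str
    wcet: int    # C_i, worst-case execution time
    period: int  # T_i, activation period (implicit deadline D_i = T_i)

def hyperperiod(task_set):
    """lcm(Gamma): least common multiple of all task periods."""
    return lcm(*(t.period for t in task_set))

gamma = [Task("A", 1, 4), Task("B", 1, 2), Task("C", 2, 6)]
print(hyperperiod(gamma))  # -> 12
```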
Communication between different tasks is realized via register communication. This communication form uses shared variables, where a sender task writes to the shared variable and a reader task reads from it. As there is no signaling between tasks, the tasks can execute independently of each other. To further increase the determinism of the communication, the tasks execute based on read-execute-write semantics: a task creates local copies of all input variables at the beginning of its execution, during the execution phase only these local copies are accessed, and at the end of its execution the task writes its results back to the shared variables.
3.1.2 Data Propagation Delay Constraints. In order to ensure the correct functionality of the system, data propagation delay constraints can be specified on a cause-effect chain $\zeta$. This is, for example, the case in control applications, where sensor data may be sampled by one task, while a second task executes the control algorithm and, finally, a third task drives the actuator with the updated data. As these tasks may be activated with different periods, over- and under-sampling may occur. Because of this, the same input value may affect the output of the chain multiple times.
Several data propagation delay constraints can be specified [12]. In this work, we focus on the Maximum Data Age constraint, as this constraint type is the most important one for control applications. Our approach focuses on the integration of job-level dependencies (JLDs) into the scheduled system in order to meet data propagation delay constraints. Thus, the approach is applicable to other data propagation delay metrics if the JLDs are selected for the respective constraint type, as shown in [8]. The data age describes the relative age of data, from its sampling by the first task in the chain until the last corresponding output is produced by the last task of the chain. Fig. 1 shows an example of the data age in a system of two tasks, $A$ and $B$, which are activated with different periods: $A$ has an activation period of $T_A = 4$ time units and $B$ one of $T_B = 2$ time units. Hence, over-sampling occurs, as $B$ reads its input more frequently than $A$ produces new values: the first and the second job of $B$ both consume the same value, produced by the first job of $A$. In this example, the maximum data age spans from the start of execution of the first job of $A$ until this value has its last effect on the output, when the second job of $B$ terminates.
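The two-task example can be made concrete with a toy computation. The job start/finish times below are assumptions chosen to match the over-sampling scenario ($T_A = 4$, $T_B = 2$), and read-at-start / write-at-end (read-execute-write) semantics are assumed; `producer_of` and `data_age` are hypothetical names.

```python
# Jobs of A (producer) and B (consumer) as (task, index) -> (start, finish).
jobs = {
    ("A", 1): (0, 1), ("A", 2): (4, 5),
    ("B", 1): (1, 2), ("B", 2): (2, 3),
    ("B", 3): (5, 6), ("B", 4): (6, 7),
}

def producer_of(reader_start, producer="A"):
    """Latest producer job finished no later than the reader's start time:
    the value the reader actually observes under register communication."""
    done = [(j, sf) for (t, j), sf in jobs.items()
            if t == producer and sf[1] <= reader_start]
    return max(done, key=lambda e: e[1][1])

def data_age(reader_index):
    """Age of the value consumed by one job of B: finish of the reader
    minus start of the producing job (sampling point)."""
    s_reader, f_reader = jobs[("B", reader_index)]
    _, (s_prod, _) = producer_of(s_reader)
    return f_reader - s_prod

print(max(data_age(k) for k in (1, 2, 3, 4)))  # -> 3
```

The maximum (3 time units) is reached for the second job of $B$, which still consumes the value sampled at the start of the first job of $A$, matching the span described for Fig. 1.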
3.1.3 Job-Level Dependencies. To ensure that all specified data propagation delay constraints are met, JLDs are specified on the task set $\Gamma$. A JLD constrains the execution order of specific jobs of two tasks; in this way, possible data propagation between these tasks can be influenced. For example, consider two tasks $A$ and $B$ that are adjacent in a cause-effect chain $\zeta$, where $T_B = 2 \cdot T_A$. The analysis of all possible data propagation paths of $\zeta$ [7] shows that there exists a data propagation path in which data propagates between the jobs $\tau_A^1$ and $\tau_B^2$; for this path, the maximum possible data age exceeds the specified data age constraint. By specifying a precedence constraint between the jobs $\tau_A^1$ and $\tau_B^2$, it is guaranteed that $\tau_B^2$ never reads the data produced by $\tau_A^1$, as $\tau_B^2$ overwrites the data before $\tau_A^1$ is executed. Consequently, the data propagation path that violates the specified data age constraint is avoided as long as the precedence constraint between the two jobs is met.
A JLD is defined as $(j,k) \rightarrow (i,l)$, where $\tau_j$ is the sender task and $\tau_i$ the receiver task. The indices $k$ and $l$ identify the specific jobs that are constrained, i.e., the JLD specifies that the job $\tau_j^k$ must have finished executing before the job $\tau_i^l$ starts. Note that a JLD is always specified with respect to the hyperperiod of the two tasks, $\text{lcm}(T_j, T_i)$, and repeats itself over the complete hyperperiod of the task set. If two tasks have the same period, both tasks execute only one job during their hyperperiod $\text{lcm}(T_j, T_i)$; thus, $k = l = 1$.
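The repetition of a JLD over the task-set hyperperiod can be sketched as a check against a schedule table. This is an illustrative sketch, not RTSC code: `jld_satisfied`, the schedule-table layout, and 1-based job indexing are our own assumptions.

```python
from math import lcm

def jld_satisfied(schedule, jld, periods, hp):
    """Check one JLD (j, k) -> (i, l) against a schedule table.
    schedule maps (task, job_index) -> (start, finish); job indices
    count from 1 within the task-set hyperperiod hp."""
    (j, k), (i, l) = jld
    pair_hp = lcm(periods[j], periods[i])  # hyperperiod of the two tasks
    reps_j = pair_hp // periods[j]         # jobs of j per pair-hyperperiod
    reps_i = pair_hp // periods[i]
    for r in range(hp // pair_hp):         # the JLD repeats every pair_hp
        sender = (j, k + r * reps_j)
        receiver = (i, l + r * reps_i)
        if schedule[sender][1] > schedule[receiver][0]:
            return False                   # sender not finished before start
    return True

periods = {"A": 2, "B": 4}
sched = {("A", 1): (0, 1), ("A", 2): (1, 2), ("B", 1): (2, 3),
         ("A", 3): (4, 5), ("A", 4): (5, 6), ("B", 2): (6, 7)}
# (A, 2) -> (B, 1): the 2nd job of A before the 1st job of B,
# repeated in every pair-hyperperiod of length lcm(2, 4) = 4.
print(jld_satisfied(sched, (("A", 2), ("B", 1)), periods, hp=8))  # -> True
```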
In [7], a heuristic method is shown that generates a set of job-level dependencies such that all specified data propagation delays are met. This has the advantage that, as long as all specified job-level dependencies are satisfied, the data age constraints are met as well without explicitly considering the data propagation delay constraints during scheduling. In this work, the MECHAniser tool [6] is used to generate the JLDs based on the methods of [7].
3.2 The Real-Time Systems Compiler
The Real-Time Systems Compiler (RTSC) [16, 37] is a flexible, generic transformation tool for real-time systems based on LLVM. It operates directly on the source code of soft, firm, or hard real-time applications, enriched by a real-time task database, and extracts a fine-granular, OS- and architecture-agnostic intermediate representation called Atomic Basic Block (ABB) graphs that captures all relevant timing and structural characteristics. Since ABB graphs depend neither on a particular real-time paradigm nor on a specific OS, they serve as the basis for arbitrary transformations that preserve the real-time invariants. The ultimate goal is to decouple the functional and real-time development of a real-time application and thus to be able to quickly deploy the same application on different hardware, OSes, and real-time paradigms, and thereby to quickly assess their impact on the application's performance. Since in this paper we evaluate the impact of different allocation and scheduling algorithms, as well as different multi-core configurations, on the worst-case data age of event chains, the subsequent presentation of RTSC internals will focus on generating multi-core time-triggered systems from event-triggered input systems.
3.2.1 Atomic Basic Blocks. The ABB graph representation [36] of real-time systems was inspired by the basic-block intermediate representation found in compilers. While basic blocks begin and end with instructions that are the target or source of a branch in the function-local CFG, ABBs are started and terminated by a branch in the global control flow of the real-time system. Such instructions are called ABB terminations and consist of system calls such as triggering tasks, setting and waiting for event flags, mutual exclusion, and sending data from task to task. Depending on the ABB termination's semantics, the ABBs are connected by appropriate ABB dependencies, which then describe the cross-function and cross-task relationships in the real-time system. Consequently, an ABB consists of one or more basic blocks of a function. Each ABB has a unique entry basic block, which is the only basic block in the ABB's control flow that may have predecessors in the control flow of the function that are not part of the ABB, and each ABB has at most one exit basic block. Since the semantics of system calls are traced by ABB dependencies, the operating system calls can be removed, which allows the system to be represented in an OS- and hardware-agnostic fashion. Each ABB can be executed on its own without further interference with other parts of the system, as long as its dependencies are fulfilled. This atomicity makes ABBs ideal fine-granular scheduling entities for the RTSC.
1 The tool is freely available at www.mechaniser.com
3.2.2 Real-Time Task Database. Although ABB graphs already capture all internal, structural properties of the real-time system, they do not yet have a connection to the environment. This connection is established by the real-time task database. This database contains events, which can be either periodic or non-periodic and which activate a task. Tasks can be attributed with a soft, firm, or hard relative deadline \( d \). Additionally, periodic events carry a period and jitter, while non-periodic events only have a minimal interarrival time. Tasks are composed of a root subtask and zero or more additional subtasks. These subtasks are connected by directed and undirected dependencies and become ready for execution as soon as their parent task's event has occurred and all of their dependencies are satisfied. The instantiations of a subtask's ABBs are called jobs and are created as soon as the subtask becomes ready. Subtasks are decomposed into ABBs by the RTSC, a process described in detail in the next section.
3.2.3 Real-Time Systems Processing. This subsection presents the steps performed by the RTSC to map a source real-time system to the target system. Like other compilers, the RTSC is composed of source-system-architecture-specific front ends, a middle end, and target-system-specific back ends.
Front End. The RTSC's front end is responsible for converting the real-time system to the intermediate representation of ABB graphs. First, the identifiers of the subtasks stored in the real-time task database are associated with the respective handler functions in the real-time application. Next, local ABB graphs are created for individual functions: ABB terminations are identified, and basic blocks that would contain one or more terminations in their middle are split. Terminations are found by identifying all system calls in the function, which makes clearly defined system call semantics and knowledge of the called function at the call site mandatory. The resulting ABBs are connected by implicit dependencies tracing the CFG gleaned from the relationship of the basic blocks, resulting in local ABB graphs. These graphs are connected to a global ABB graph by identifying compatible ABB terminations, for example those that establish a producer-consumer relationship and refer to the same system object. After cleaning all system calls, the resulting local ABB graphs are thus connected into a single global ABB graph.
Middle End. The goal of the middle end is to prepare the real-time system for the code-generation step in the back end. To this end, an allocation of ABBs to processors and a schedule table for each processor is calculated. Once the RTSC enters the middle end, all transformations take place in the context of the target system. This is important since properties like the WCET of individual ABBs, which is necessary for scheduling and allocation, can only be determined for the target architecture, and not in a generic fashion.
The first step in the middle end is to calculate the hyperperiod of the real-time system as the least common multiple of the periods of all events. To fill the hyperperiod, ABBs and their connecting ABB dependencies are cloned accordingly, which is necessary since in scheduling and allocation ABBs serve as jobs, and each job can only be scheduled exactly once per hyperperiod. Next, the global ABB graph is linearized. This prevents control-flow-graph structures, like separate branches that can never be executed within the same hyperperiod, from being scheduled without need. Dependencies are moved out of loops and branches, and logical guards are inserted that preserve the original semantics. After that, mutually exclusive branches are merged into one ABB, creating a linearized ABB graph. This is needed to facilitate the WCET analysis of each ABB, which is done by the external tools aiT from AbsInt\(^2\) or platin [22], depending on the target architecture. Additionally, WCET annotations can be used to skip this computationally expensive task in case the WCETs are already known.
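The cloning step that fills the hyperperiod can be sketched as follows. For simplicity, one ABB per periodic task is assumed; `clone_jobs` and the dictionary layout are illustrative names, not the RTSC's actual data structures.

```python
from math import lcm

def clone_jobs(periods):
    """Clone each task's (single) ABB once per activation within the
    hyperperiod; each clone is one job with its own release and deadline."""
    hp = lcm(*periods.values())
    jobs = []
    for name, period in periods.items():
        for k in range(hp // period):
            release = k * period
            jobs.append({"task": name, "index": k + 1,
                         "release": release,
                         "deadline": release + period})  # implicit deadline
    return hp, jobs

hp, jobs = clone_jobs({"A": 4, "B": 2})
print(hp, len(jobs))  # -> 4 3  (one job of A, two jobs of B)
```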
Now all information necessary for allocating and scheduling the ABB graph is available. The RTSC offers multiple heuristics and optimal approaches for solving the allocation and scheduling problem and for generating a time-triggered schedule for each of the available processing nodes. Since the impact of allocation and scheduling algorithms on the data age of event chains is the subject of this paper, we will go into greater detail on this topic in Section 5 and assume for now that a feasible assignment and schedule has been found.
To generate an executable real-time system, the RTSC's ABB graphs still have to be post-processed. Not every ABB can be executed directly, since not every ABB constitutes the beginning of a function. Wrapping every ABB in a function comes with a runtime cost, as additional code is inserted and function state has to be transferred. Therefore, during scheduling, measures are taken that make it unlikely that functions are scheduled in an interleaving manner. In post-processing, time intervals that have been assigned to individual ABBs are merged wherever this is advantageous. This way, whenever two neighboring ABBs are connected by control flow already present in the basic blocks, a combined busy interval is created that contains both ABBs, effectively removing one entry from the schedule table. In some cases, however, despite all the steps the RTSC takes to avoid this kind of situation, an ABB that is not a function entry ends up at the start of an interval. Since the timer interrupt handler has to enter this ABB by executing a function call, a function wrapper for the interval is generated that takes the necessary state for continuing the control flow as a parameter. Likewise, whenever an interval ends with an ABB that is not a function exit, the RTSC generates code that stores the necessary state for continuing execution. The result of all performed processing steps allows the RTSC's back ends to generate executable code.
Back End. In the back end, the RTSC generates configuration files and an application scaffolding for the real-time application. The configuration files contain the schedule tables. In a final step before generating the executable code for the target system, all remaining annotations are removed and, after that, assembly code is generated. Besides OSEKTime, the RTSC's back end is capable of generating time-triggered systems targeting LitmusRT, which was the target platform for the evaluation performed in this paper.
4 APPROACH
In order to satisfy data propagation delay constraints in real-time systems, specifying JLDs is one proposed method of augmenting a traditional task model such that data propagation delay constraints are met [6–8]. This approach has the benefit of being agnostic of the underlying hardware platform and scheduling algorithm: if the specified JLDs are met by any scheduler on any platform, the specified data propagation delay constraints are implicitly met as well. Though this is beneficial when such design decisions have not yet been made, the question arises what happens to the data propagation delays on the actual hardware and real-time operating system (RTOS).
To incorporate the generated JLDs into a complete development chain, the RTSC is chosen as a shortcut from high-level system analysis to the evaluation of concrete system designs. Like the generated JLDs, the ABB graph itself is agnostic of a concrete hardware platform and scheduling algorithm [16, 37], while the RTSC automatically applies concrete allocation and scheduling algorithms to obtain executable systems.
\(^2\)https://www.absint.com/ait/
5 ALLOCATION AND SCHEDULING
To put the guarantees of the Multi-rate Effect Chains AnaliSer (MECHAniSer) and the effect of JLDs to the test with real schedules, a variety of combinations of allocation and scheduling algorithms has been evaluated.
5.1 Allocation
Allocation by Peng and Shin's optimal algorithm (PS). For the optimal allocation of ABBs to the available processing cores (optimal in the sense that if a valid and feasible allocation and schedule exists, it will be found), the RTSC uses a specialized branch-and-bound algorithm based on the one by Peng et al. [32]. Branch and bound creates an initial solution from which refined solutions are derived successively, potentially exploring the complete solution space. This property ensures the algorithm's optimality, but also means that its worst-case runtime is exceptionally high. On average, however, branch and bound finds an acceptable solution quite quickly.
In Peng et al.’s algorithm two kinds of solutions exist: Incomplete ones in which only some jobs have been assigned to processors, and complete ones in which all jobs have been assigned. Incomplete solutions have to be refined further while complete ones are candidates for a feasible assignment.
Peng et al.’s algorithm calculates refined solutions from an incomplete one by assigning the next unassigned job to each processor in turn, i.e., for a real-time system that is to be mapped to a four-processor machine, from each incomplete solution four refined solutions are derived. For each incomplete solution a lower bound of the cost is calculated by generating a local schedule for each processor and estimating a regular measure of its cost from the completion time, deadline and release time of each job. The lower bound of the cost of a solution is calculated as the maximum of the cost of all its jobs. The cost of an incomplete solution is only a lower bound of the cost since the cost of unassigned jobs is taken into account under optimistic assumptions, and a job’s predecessor’s release time is used as a lower bound for its completion time if the predecessor runs on a different processor than the successor. Even for complete solutions, the resulting schedule may, therefore, be invalid since successors may start before their predecessor’s result is available.
An upper bound of the cost is calculated for complete solutions. If an incomplete solution has a lower bound of the cost that is higher than the current best complete solution's upper bound, it is discarded immediately. This is justified since the cost of a refined solution can only be the same as or higher than its parent's cost. Furthermore, discarding solutions means that large parts of the solution space do not have to be explored, since all children of a discarded solution are eliminated from the search space as well. Peng et al.'s algorithm terminates once a complete solution has been found and no incomplete solution remains that could result in a better cost than the current best complete solution. The result of Peng et al.'s algorithm minimizes the maximum cost of the assignment and, given an appropriate measure of cost, the probability of missing a deadline at runtime, even if timing parameters like the estimated WCET are violated [31]. A regular cost measure often used for real-time systems, and thus also in the RTSC, is the Normalized Task Response Time (NRT); PS allocation therefore minimizes the maximal NRT of the allocation. Due to its general approach, additional optimization objectives can be specified. The default implementation, for example, is also parametrized to minimize the usage of cores and thus leaves cores unused if they are not necessary for schedulability. In contrast, the modified PS/maxCore variant is parametrized to use as many cores as possible.
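The branch-and-bound scheme above can be sketched as a best-first search. This is a simplified illustration, not the RTSC's implementation: cross-processor dependencies and Peng et al.'s exact cost refinement are omitted, a local EDF order is assumed per core, and NRT is taken as (completion − release) / (deadline − release); all function names are ours.

```python
import heapq

def local_max_nrt(core_jobs):
    """Worst normalized response time (NRT) on one core under local EDF.
    Each job is (release, wcet, deadline)."""
    t, worst = 0, 0.0
    for rel, wcet, dl in sorted(core_jobs, key=lambda j: j[2]):
        t = max(t, rel) + wcet
        worst = max(worst, (t - rel) / (dl - rel))
    return worst

def ps_allocate(jobs, n_cores):
    """Best-first branch and bound: assign each job to a core so that the
    maximal NRT over all cores is minimized. Since adding a job never
    decreases a core's max NRT (a regular cost), a partial solution's
    cost is an admissible lower bound, so the first complete solution
    popped from the heap is optimal."""
    best_cost, best_alloc = float("inf"), None
    heap = [(0.0, 0, tuple(() for _ in range(n_cores)))]
    while heap:
        lb, idx, cores = heapq.heappop(heap)
        if lb >= best_cost:
            continue                      # bound: cannot beat the incumbent
        if idx == len(jobs):
            best_cost, best_alloc = lb, cores
            continue
        for c in range(n_cores):          # branch: next job on each core
            refined = list(cores)
            refined[c] = cores[c] + (jobs[idx],)
            cost = max(local_max_nrt(cj) for cj in refined if cj)
            if cost < best_cost:
                heapq.heappush(heap, (cost, idx + 1, tuple(refined)))
    return best_cost, best_alloc

cost, alloc = ps_allocate([(0, 2, 4)] * 4, n_cores=2)
print(cost)  # -> 1.0  (two jobs per core; 3 on one core would give 1.5)
```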
Heuristic Approach. In addition to this complex and resource-consuming near-optimal solution, several well-known heuristic allocation algorithms have been implemented. Each of these heuristics computes, for each ABB, its utilization as the ratio of its WCET $C_{ABB}$ to its relative deadline $d_{ABB}$: $u_{ABB} = \frac{C_{ABB}}{d_{ABB}}$.
An ABB fits on a processing node if the sum of the utilizations of the ABBs already allocated to this node plus that of the current ABB is at most one: $\sum u_{alloc} + u_{ABB} \leq 1$. The FirstFit algorithm always starts at the first processing node and places the current ABB on the first core that has enough capacity left [23]. A slight variation of this approach is the NextFit algorithm [23]: here, the core that the last ABB has been allocated to is saved and serves as the starting point for the allocation of the next ABB, which is again placed on the first suitable core. Instead of allocating to the first fitting processor, the BestFit and WorstFit algorithms iterate over each node for every ABB and compare the resulting load on each node. The BestFit algorithm then allocates the ABB to the core that would have the highest resulting computational load, i.e., the core whose slack fits best to $u_{ABB}$. In contrast, WorstFit allocates the current ABB to the core with the lowest resulting load and thus distributes the computational load evenly over the available cores. A more naive heuristic allocation is RoundRobin: it iterates over all cores and evenly distributes the ABBs; as long as an ABB fits, no further qualification is performed.
A measure of the cost of a job is regular if an increasing completion time implies a non-decreasing cost. This is a precondition for the correctness of the branch-and-bound scheme.
It is worth noting that, in contrast to the PS allocation algorithm, none of the heuristics considers dependencies; they thus also neglect the JLDs.
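Two of the bin-packing heuristics above can be sketched as follows. Matching the heuristics' described behavior, dependencies and JLDs are ignored; NextFit, BestFit, and RoundRobin are analogous variations of the core-selection rule. All names (`utilization`, `first_fit`, `worst_fit`, the example ABBs) are ours.

```python
def utilization(abb):
    # u_ABB = C_ABB / d_ABB
    return abb["wcet"] / abb["deadline"]

def first_fit(abbs, n_cores):
    """Place each ABB on the first core with enough remaining capacity."""
    loads, alloc = [0.0] * n_cores, {}
    for abb in abbs:
        for core in range(n_cores):        # always start at core 0
            if loads[core] + utilization(abb) <= 1.0:
                loads[core] += utilization(abb)
                alloc[abb["name"]] = core
                break
        else:
            return None                    # no core can take this ABB
    return alloc

def worst_fit(abbs, n_cores):
    """Place each ABB on the currently least-loaded core (even spread)."""
    loads, alloc = [0.0] * n_cores, {}
    for abb in abbs:
        core = min(range(n_cores), key=loads.__getitem__)
        if loads[core] + utilization(abb) > 1.0:
            return None
        loads[core] += utilization(abb)
        alloc[abb["name"]] = core
    return alloc

abbs = [{"name": "x", "wcet": 3, "deadline": 4},   # u = 0.75
        {"name": "y", "wcet": 1, "deadline": 2},   # u = 0.50
        {"name": "z", "wcet": 1, "deadline": 4}]   # u = 0.25
print(first_fit(abbs, 2))  # -> {'x': 0, 'y': 1, 'z': 0}
print(worst_fit(abbs, 2))  # -> {'x': 0, 'y': 1, 'z': 1}
```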
5.2 Scheduling
After an allocation has been found, a feasible and valid schedule for the ABBs on each core has to be determined. Again, we compare two different algorithms for scheduling data-age-constrained systems.
Earliest-Deadline-First Scheduling. The first and most generic scheduling algorithm used in the RTSC is based on the well-known Earliest Deadline First (EDF) [25] scheduling algorithm and the principle of branch and bound [1]. At each moment in time, the algorithm prioritizes the runnable ABB with the most urgent deadline. The EDF implementation in the RTSC supports processor-local as well as cross-processor dependencies and the generation of preemptive schedules. Cross-processor dependencies are repaired in a way similar to the optimal allocation algorithm. It is therefore optimal in the sense that, if no dependencies exist, it finds feasible schedules for core allocations whose computational load is below one [1].
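The EDF principle with precedence constraints can be sketched as a simple list scheduler. This sketch is deliberately restricted to a single core and non-preemptive execution with integer times, whereas the RTSC variant also supports preemption and cross-processor dependency repair; `edf_schedule` and the job layout are hypothetical names.

```python
def edf_schedule(jobs, deps):
    """Non-preemptive EDF on one core.
    jobs: name -> (release, wcet, deadline); deps: set of (pred, succ)."""
    horizon = (sum(w for _, w, _ in jobs.values())
               + max(r for r, _, _ in jobs.values()))
    finished, schedule, t = {}, [], 0
    pending = set(jobs)
    while pending:
        ready = [j for j in pending
                 if jobs[j][0] <= t                        # released
                 and all(p in finished for p, s in deps if s == j)]
        if not ready:
            if t > horizon:
                return None     # blocked forever (e.g., cyclic dependencies)
            t += 1              # idle until a job is released or enabled
            continue
        j = min(ready, key=lambda name: jobs[name][2])     # most urgent deadline
        start = t
        t = start + jobs[j][1]
        if t > jobs[j][2]:
            return None         # deadline miss: no feasible schedule found
        finished[j] = t
        schedule.append((j, start, t))
        pending.remove(j)
    return schedule

jobs = {"a": (0, 2, 8), "b": (0, 1, 3), "c": (0, 2, 6)}
print(edf_schedule(jobs, {("b", "a")}))
# -> [('b', 0, 1), ('c', 1, 3), ('a', 3, 5)]
```

The dependency ("b", "a") keeps "a" blocked until "b" finishes; among the ready jobs, the one with the earliest absolute deadline always runs next.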
minimax Scheduling. The second approach to scheduling uses the cost-optimal schedule calculated during optimal processor allocation [3, 32] as a basis. In contrast to Peng et al.'s original algorithm, however, the version implemented in the RTSC attempts to repair violated cross-processor dependencies, similar to the approach of Abdelzaher et al.: it generates new child solutions from violating solutions.
6 TIMING ANALYSIS AND SCHEDULABILITY TEST
This section discusses the integration of a timing analysis engine for the data age within the RTSC and a necessary schedulability test that detects JLD settings that lead to unschedulable systems caused by cyclic dependencies.
6.1 Timing Analysis and Implementation
As the RTSC has been extended to account for data propagation delay constraints, its internal analysis engines need to be extended as well; the goal is to analyze the data age of such systems in a generated schedule.
As a result of the schedule generation with the RTSC, a schedule table is produced. This table includes the start and completion time of each ABB as well as its core mapping. For a job $\tau$, the start time is denoted $s(\tau)$ and the finish time $c(\tau)$. With the specified JLDs, the RTSC guarantees that, for jobs of tasks constrained by a JLD, the earliest start time of the successor job is always larger than or equal to the latest finishing time of the predecessor job. Due to the properties of the JLD generation, a schedule that satisfies all JLDs therefore also automatically meets all specified data propagation delay constraints [7]. As it is often essential to know how large the actual worst-case data propagation delay in a system is, a timing analysis of the generated system needs to be performed [5, 12].
To extend the timing-analysis engines of the RTSC to analyze the maximum data age, the cause-effect chain semantics and their timing constraints are integrated with the RTSC. This information is needed when analyzing the schedule table.
A new component within the RTSC is responsible for the timing analysis of a cause-effect chain’s data age. The maximum data age of the generated systems is computed by traversing backward from each job \( r_k \) of the last task of a cause-effect chain \( c_i \). The implemented algorithm recursively selects the first job of the respective predecessor task (in the cause-effect chain \( c_i \)) that finishes before the start time of the current job. Once a job of the first task of the chain is reached (say \( r_j \)), the data age can be computed as \( c(r_k) - s(r_j) \). Note that only the first and last job of such a data path are required to compute the data age [12]. The maximum data age of a cause-effect chain \( c_i \) is the maximum over the data age values computed for each job of its last task.
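The backward traversal described above can be sketched as follows. The data layout (dicts of job start/finish times) and the choice of the latest job finishing before the current start are assumptions based on the description, not the RTSC's actual code:

```python
# Minimal sketch of the backward data-age traversal (layout assumed).

def max_data_age(chain, jobs, s, c):
    """chain: tasks from first to last; jobs[t]: job ids of task t;
    s[j] / c[j]: start / completion time of job j."""
    ages = []
    for last_job in jobs[chain[-1]]:
        cur = last_job
        complete = True
        for task in reversed(chain[:-1]):
            # predecessor jobs finishing before the current job starts
            preds = [j for j in jobs[task] if c[j] <= s[cur]]
            if not preds:
                complete = False
                break
            cur = max(preds, key=lambda j: c[j])
        if complete:
            # finish of the last job minus start of the first job
            ages.append(c[last_job] - s[cur])
    return max(ages) if ages else None

chain = ["A", "B"]
jobs = {"A": ["a0", "a1"], "B": ["b0", "b1"]}
s = {"a0": 0, "a1": 10, "b0": 4, "b1": 14}
c = {"a0": 2, "a1": 12, "b0": 6, "b1": 16}
print(max_data_age(chain, jobs, s, c))  # 6
```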
6.2 Cyclic Job-Level Dependencies
As the heuristic algorithm [7] that is used to generate the JLDs sequentially assigns dependencies to the different cause-effect chains of the system, cases have been observed in which cyclic dependencies were generated.
Since tasks can be part of multiple cause-effect chains, it is possible that cycles in the graph of all chains’ data paths exist. In such cases, the heuristic can specify cyclic dependencies in the system. While cycles in the graph of all data propagation paths are allowed, cycles in the specified JLDs are not, as no valid schedule is possible that can fulfill such JLDs.
Testing the JLDs that are generated for the system for cycles yields a necessary condition for schedulability: if a cycle exists, no schedule can satisfy all JLDs. This check can be implemented efficiently in order to detect unschedulable systems before the complete system generation with the RTSC is triggered.
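Such a pre-check can be implemented as a topological sort over the dependency graph; a possible sketch (treating each dependency as a directed edge, which is an assumption about the representation):

```python
# Cycle pre-check via Kahn's algorithm: if not every node can be
# topologically ordered, the generated dependencies are cyclic and no
# schedule can satisfy them.
from collections import defaultdict, deque

def jlds_are_acyclic(tasks, jlds):
    """jlds: iterable of (pred, succ) pairs over `tasks`."""
    indeg = {t: 0 for t in tasks}
    succs = defaultdict(list)
    for pred, succ in jlds:
        succs[pred].append(succ)
        indeg[succ] += 1
    queue = deque(t for t in tasks if indeg[t] == 0)
    ordered = 0
    while queue:
        t = queue.popleft()
        ordered += 1
        for nxt in succs[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return ordered == len(tasks)

print(jlds_are_acyclic("abc", [("a", "b"), ("b", "c")]))              # True
print(jlds_are_acyclic("abc", [("a", "b"), ("b", "c"), ("c", "a")]))  # False
```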
7 EVALUATION
In this section the evaluation results are presented. First, the system generation process is discussed, and the basic properties of the process are shown. We then evaluate a large number of randomly generated systems using the proposed approach. Here different schedulability criteria are of interest, as well as the resulting response time and data age measures. Finally, the effect of additional cores on the resulting data age is evaluated.
7.1 System Generator
For the experiments, random systems are generated based on characteristics of automotive applications reported in [24]. Each system contains randomly generated task sets. Task periods are selected from the set \{1, 2, 5, 10, 20, 50, 100, 200, 1000\} ms, and WCETs are selected from the range \([50, 150]\) \(\mu s\).
Each generated cause-effect chain can have 1–3 different involved periods, where 20% of the chains contain only tasks of the same period, 40% contain tasks of two different periods, and 40% contain tasks of three different periods. For each period value selected for a chain, 2–5 tasks are involved, with probabilities of 30%, 40%, 20%, and 10%, respectively. The data age constraint for a chain is generated by multiplying a random factor, drawn from the range \([1.8, 2.5]\), with the chain’s hyperperiod (i.e., the hyperperiod of all tasks that are part of the chain). Note that tasks of the
same period always appear sequentially in the cause-effect chain and the same task can be part of multiple cause-effect chains [24]. If the same subset of tasks is part of multiple cause-effect chains, these tasks always appear in the same ordering in both chains.
The system generation is performed based on the following steps:
- Generate the blueprint for each cause-effect chain. This includes the number of activation patterns, and the number of tasks as well as the assigned period for each activation pattern.
- Generate random tasks such that each chain can be filled, i.e., for each activation period, generate as many tasks as are at most assigned to any of the cause-effect chain blueprints.
- Generate random tasks until the task set utilization is reached.
- For each cause-effect chain, randomly pick tasks from the task set that have the activation period which is defined by the respective activation patterns of the cause-effect chain.
- Finally, a data age factor is selected with uniform distribution in the range of $[\text{ageMin}, \text{ageMax}]$. This factor is then multiplied with the hyperperiod of the cause-effect chain.
Generating the systems in this manner allows controlling the cause-effect chain characteristics in a precise way.
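The listed steps could be sketched as follows for a single chain; all names, signatures, and the exact sampling calls are illustrative assumptions, not the authors' generator:

```python
# Illustrative sketch of the chain-generation steps above. The helper
# assumes tasks_by_period offers at least five candidate tasks per period.
import math
import random

PERIODS_MS = [1, 2, 5, 10, 20, 50, 100, 200, 1000]

def generate_chain(rng, tasks_by_period, age_min, age_max):
    # 1-3 distinct periods, chosen with probabilities 20/40/40 %
    n_periods = rng.choices([1, 2, 3], weights=[0.2, 0.4, 0.4])[0]
    periods = rng.sample(PERIODS_MS, n_periods)
    chain = []
    for p in periods:
        # 2-5 tasks per period with probabilities 30/40/20/10 %;
        # tasks of the same period appear sequentially in the chain
        n_tasks = rng.choices([2, 3, 4, 5], weights=[0.3, 0.4, 0.2, 0.1])[0]
        chain.extend(rng.sample(tasks_by_period[p], n_tasks))
    # data age constraint: uniform factor in [age_min, age_max] times
    # the hyperperiod of the involved periods
    hyperperiod = math.lcm(*periods)
    age_constraint = rng.uniform(age_min, age_max) * hyperperiod
    return chain, age_constraint

rng = random.Random(42)
tasks = {p: [f"T{p}_{i}" for i in range(5)] for p in PERIODS_MS}
chain, constraint = generate_chain(rng, tasks, 1.8, 2.5)
```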
For each system the JLDs are generated using the methods of [6, 7].
7.1.1 Success Rate of JLD Generation and Cyclic Dependencies. In this section, the JLD schedulability (i.e., whether the algorithm of [7] finds a valid setting for JLDs) is compared between a version with and a version without a check for cyclic dependencies. The generated systems include task sets with an average utilization of 1.9. 300 random systems were generated with a varying number of chains, while all other system parameters were kept constant.
This means that the more cause-effect chains are part of the system, the more interleaved the system gets, i.e., tasks can be part of more than one cause-effect chain. For this experiment, the distribution of how many chains a task is part of is shown in Fig. 3b with respect to the number of chains in the system.
Fig. 3a shows the fraction of systems for which the heuristic of [7] reports a JLD configuration, compared against the fraction of systems that do not contain cyclic dependencies (as discussed in Sec. 6.2). It can be seen that the share of systems subject to cyclic dependencies increases with the number of cause-effect chains in the system. The more entangled the system is, the more likely it becomes that no valid setting for JLDs is found.
As the focus of this work is to analyze the influence of JLDs on the system properties, only systems for which dependencies were generated successfully are considered in the remainder, since the primary objective is to evaluate the performance of systems with and without JLDs on different target systems.
7.1.2 Generated Task-Sets for the Main Evaluation. To compare the MECHAniSer’s analysis with concrete schedules and examine the effect of the heuristic JLD creation, we generated 433 systems using the described system generation process. The systems are generated with utilizations between 0.6 and 2.0 and an average of 1.2. Each generated system has between 59 and 1000 jobs with an average of 458 and comprises 1–3 event chains. For all these systems, corresponding C code as well as the real-time database needed for the RTSC has been generated. The C code mostly consists of dummy loops that retain the timing properties of the tasks and enforce data propagation and, if necessary, JLDs. All systems have then been analyzed by the RTSC for maximum data age with all available combinations of allocation and scheduling, as explained in Section 5, for up to four cores, each with and without JLDs. This sums up to a total of 27712 RTSC runs and about 376 CPU-hours on our experiment cluster.
7.2 Task-local Deadlines and Maximal Response Times
To compare the schedulability ratio\(^2\) with and without generated JLDs, different allocation and scheduling algorithms have been evaluated on a varying number of available cores. Figure 4a shows the schedulability ratio of systems without JLDs, whereas Figure 4b depicts the same with JLDs in effect. In both experiments the trend looks similar: while fewer than 50% of the systems are schedulable on one core, the schedulability ratio increases to almost 100% for systems with three cores. It can further be seen that the addition of JLDs does not strongly impact the schedulability ratio; the maximum reduction in schedulability (on average over all cores) when adding JLDs is observed for BestFit, at 5%.
Since the utilization of the systems is below two, FirstFit, NextFit, and BestFit by design do not utilize more than two of the available cores and therefore do not improve as more cores become available. In contrast, the WorstFit heuristic, which uses as many cores as available, is surprisingly competitive with the optimal allocation algorithms when more than two cores are available. Due to the systems’ utilization, the most laborious case for the allocation algorithms is also the one where the additional effort put into the optimal allocation pays off: PS finds the most schedulable systems for all three scheduler configurations.
The same holds true for the maximal normalized response times (max NRT) depicted in Figure 5: even for one core the optimal scheduling algorithms produce smaller maximum normalized response times on average, and as soon as more possibilities for the allocation exist, the optimal allocation algorithms result in smaller response times; the difference between EDF and optimal cost-based scheduling is less pronounced but still present.
From this experiment, we can see that augmenting the task set with JLDs in order to meet data propagation delay constraints does not strongly impact the schedulability ratio when only task-local deadlines are considered.
\(^2\)In the remainder, selected evaluation results are shown. The complete set of obtained figures is available at www4.cs.fau.de/Research/RTSC/experiments/mechaniserstatic
Figure 3: Success rate in generating JLDs, and the distribution of tasks to chains in the experiment.
Figure 4: Impact of JLDs on task-local schedulability
Figure 5: Impact of JLDs on max normalized response time
Figure 6: Impact of JLDs on combined task-local and data-age schedulability
Figure 7: Impact of JLDs on data-age schedulability
7.3 Data-Age as Schedulability Criterion
For many industrial domains [13, 18] it is equally important to meet task-local deadlines as well as all specified data propagation delay constraints in order to deem a task set schedulable. While the previous section only focused on task-local deadlines as schedulability criterion, this section takes data propagation delay constraints into account.
We first consider systems as schedulable only if they meet all task-local deadlines and all data-age constraints (Figure 6). Afterwards, all systems that meet their data-age constraints regardless of the task-local deadlines are deemed schedulable (Figure 7). This is important for systems where a bounded number of task-local deadlines can be missed without affecting the performance (for example, control applications [40]).
7.3.1 Task-local Deadline and Data Age Schedulability. As the MECHAniSer guarantees that systems that impose JLDs and meet all task-local deadlines also meet all data-age constraints, the task-local schedulability with JLDs in Figure 4b is identical to the combined schedulability in Figure 6b. In comparison to the schedulability of systems where no JLDs are considered (Fig. 6a), the addition of JLDs yields a maximum improvement (averaged over all cores) of 42% for RoundRobin.
Looking at Figure 6a, where no JLDs are considered, there is only a slight increase in combined schedulability when increasing the number of cores for systems that are RoundRobin-, WorstFit-, or PS-allocated. For one and two cores, the number of combined-schedulable systems is approximately half that of the task-local schedulable systems, and it does not increase with three or four cores.
7.3.2 Data Age Schedulability. To further identify the source of this observation, we look at the schedulability ratio of systems where only data propagation delay constraints are of importance and task-local deadlines may be violated. In Figure 7a it can be seen that the percentage of systems that meet their data age constraints is not affected by the selected allocation and scheduling algorithm. The number of available cores likewise does not affect the schedulability.
In contrast, Figure 7b shows that introducing JLDs boosts data-age compliance even for systems that do not meet all task-local deadlines: even with one core, most data-age constraints are met (some systems are overloaded when only one core is available). For two and more cores, RoundRobin-, WorstFit-, or PS-allocated systems reach 100% data-age schedulability, and even the heuristics that never accomplish more than 50% task-local schedulability reach more than 95% data-age schedulability with JLDs.
Thus, it can be observed that traditional allocation and scheduling algorithms do not have a direct effect on meeting data propagation delay constraints. The addition of JLDs as scheduling constraints can improve the total system schedulability while its impact on task-local schedulability is minimal (as shown in Section 7.2).
7.3.3 Resulting Maximum Data Age. To put this effect further into perspective Figure 8 and Figure 9 depict the effect of adding JLDs to the systems’ cause-effect chains’ maximum data age relative to the event-chains data-age requirements (i.e., a value larger than 1 indicates a violated data age constraint). While Figure 8 considers systems that have been allocated and scheduled to one core Figure 9 depicts the same systems and event-chains allocated to four cores. The (a) subfigures comprise only schedules that meet all task-local deadlines and therefore comply with the assumptions that the
MECHAniSer uses to compute its data-age guarantees, while the (b) subfigures consist of all 433 systems. In addition to the resulting data age values per allocation and scheduling algorithm, the upper bounds given by the MECHAniSer are also shown. Again, it can be observed that adding more cores but no JLDs does not have a substantial effect on the mean maximum data age. On the other hand, enforcing JLDs as suggested by the MECHAniSer drastically improves the timeliness of event-chains, even for systems that do not meet its assumptions. However, especially PS + minimax generates massive outliers, and no guarantees can be given without meeting all task-local deadlines, as these results are only statistical. It can be seen that the upper bounds provided by the MECHAniSer also no longer hold for systems where task-local deadlines can be missed, as the underlying assumptions of the timing analysis have been violated. Additionally, one can note that the choice of allocation and scheduling algorithm does not affect the distribution of the cause-effect chains’ maximum relative data age.
7.4 Do many cores improve data age?
As platforms with an increasing number of cores become available, this experiment investigates the effect of the number of cores on the data age properties.
Since optimal allocation of hundreds of jobs to many cores is resource-consuming and the solution space grows exponentially with the number of cores, three of the 433 systems were selected. The selected systems comprise 99, 305, and 361 jobs respectively and have utilizations of 1.2, 1.6, 1.8 and 1.2, 2 event-chains. This gives a total of 43 data age values, since in this experiment we do not only analyze the maximum data age of each chain as before but consider all data-propagation paths within the hyperperiod. Again, we used all available combinations to allocate and schedule these systems but chose to depict those allocations that use as many cores as possible. Therefore, Figure 10a shows the PS maxCore modifications and Figure 10b the RoundRobin heuristics. Please note that PS/maxCore could only be executed for up to 128 cores due to the resource requirements of the algorithm (more than 500 GB of RAM for allocations with more than 128 cores).
Both graphs show that instead of improving, i.e., lowering, data age values, adding additional cores had mostly the contrary effect. The addition of the second core improves the data ages for the optimal allocations, since the second core is needed to meet all task-local deadlines. For the systems without JLDs, the average data age is at its minimum when the system becomes schedulable at two cores. After that, the addition of cores increases the average data age. This is the case for the optimal allocation as well as for the RoundRobin allocation.
Similar effects can be observed for systems where JLDs are considered. However, the maximum data age values do not exceed the data age constraint (except for the optimal allocation on one core, which is the result of an unschedulable system). In Figure 10a, systems with JLDs still see decreasing data ages up to four cores, but with more than four cores all allocation algorithms cease to improve the data age and start to increase it.
For both settings, once the system becomes schedulable, the addition of cores makes it likely that two tasks that are consecutive in a cause-effect chain no longer execute in sequence. This holds true for all other graphs not included in the paper due to space limitations. By adding more cores, the parallelization of previously sequentially executed communicating tasks becomes more likely, which increases the likelihood that a task has to wait for an additional hyperperiod to consume the value of its predecessor task. The JLDs do not prevent this increase in data age but guarantee that the increase is bounded by the specified constraint. Thus, in the design of systems with data propagation delay constraints, over-provisioning the system negatively affects the data age.
8 CONCLUSION AND OUTLOOK
In this paper, we extended the MECHAniSer and the RTSC to work closely together and thus bring data-age analysis closer to real systems and real executions. This collaboration enabled us to gain detailed insights into the behavior of contemporary allocation and scheduling algorithms in respect to data age constraints. Moreover, we could examine how additional computational resources influence the data-ages of concrete systems.
The most important insight is that traditional allocation and scheduling approaches that focus on task-local deadlines are unsuitable for optimizing the data age of cause-effect chains, as long as these chains are not modeled explicitly by dependencies. Moreover, we have shown that the same holds true when the system’s computational capabilities are increased by adding cores to the hardware platform. One of the critical observations is that the resulting data age values increase when additional compute cores become available. This is contrary to common assumptions and must be carefully taken into consideration during system design. On the other hand, this shows how important it is to model dependencies from the beginning and, if this is not possible, how useful tools like the MECHAniSer are.
Future work will focus on runtime experiments with the systems in order to measure the resulting runtime data age values as well as the OS overheads introduced by the JLDs, and on extending the existing optimal PS allocation and the minimax scheduler to directly minimize data propagation delays.
Chapter 15
Techniques for Dynamic Adaptation of Mobile Services
John Keeney, Vinny Cahill, Mads Haahr
Contents
Introduction
Issues in Dynamically Adaptable Mobile Applications and Middleware
Reflective Middleware
Aspect-Oriented Approaches to Dynamic Adaptation
Policy-Based Management of Dynamic Adaptation
Chisel and ALICE: A Policy-Based Reflective Middleware for Mobile Computing
Conclusions
Introduction
This chapter discusses the dynamic adaptation of software for mobile computing. The primary focus of the chapter is to discuss a number of techniques for adapting software as it runs and for managing the application of those adaptations. In a mobile computing environment the need for adaptation can often arise as a result of a spontaneous change in the context of the operating environment, ancillary software, or indeed the user. To exacerbate this problem, if that contextual change is in some
way unanticipated, then the required adaptation may be itself unanticipated until the need for it arises. For this reason, this chapter is particularly concerned with supporting adaptations that are "completely unanticipated" [19]. The chapter discusses reflective and aspect-oriented techniques for dynamically adapting software for mobile computing. Policy-based management is then discussed as a mechanism to control such dynamic adaptation mechanisms. The chapter then introduces the Chisel dynamic adaptation framework, which supports completely unanticipated dynamic adaptation, and discusses a case study whereby Chisel is used with ALICE, a mobile middleware, to provide a flexible and adaptable middleware framework for mobile computing.
Issues in Dynamically Adaptable Mobile Applications and Middleware
The main difficulty with mobile computing is the inherent scarcity and variability of the resources available to mobile computers as they move. The primary resource requirement of a mobile device working as part of a distributed system is its network connection, often some form of wireless connection, which, when used by a device that is physically moving, suffers from unanticipated and possibly prolonged disconnections [14]. This issue is such a major problem for mobile computing because the applications currently being developed are built as distributed-systems applications that do not sufficiently account for these disconnections and reconnections [30].
Middleware for Mobile Computing
"Middleware can be viewed as a reusable, expandable set of services and functions that are commonly needed by many applications to function well in a networked environment" [1]. Traditional middleware systems provide abstractions and shelter applications from the complexities of the underlying environment, communication subsystems, and distribution mechanisms, thereby providing a single view of the underlying environment, as seen in traditional middleware systems such as COM+ [24], Java RMI [39], and CORBA [25].
A middleware system for mobile computing must be flexible in order to provide a homogeneous and stable programming model and interface to possibly erratic execution contexts. It is desirable that an adaptable middleware for mobile computing be open, allowing the application and the user to inspect the execution environment and manipulate the application and middleware in a mobile-aware manner, using application-specific and user-specific semantic knowledge.
Difficulties with Applications and Middleware for Mobile Computing
As environment conditions change, to values unknown and unprecedented by the application designer, the middleware that provides abstractions for these environmental resources must dynamically adapt to support the applications that run on top of that middleware. As stated, one of the primary services provided by the middleware is the ability to supply network communications services as these resources change. A key requirement for middleware for mobile computing is the ability to adapt to drastic changes in available resources, especially network connection availability [15]. The characteristics of the available connections can range from an inexpensive, very high-bandwidth, low-latency connection such as a high-speed wired LAN connection, to a very expensive, low-bandwidth, high-latency connection such as a GSM connection, where each communication protocol used may make use of different communication models and addressing modes.
Mobile computing applications should also be able to handle periods of disconnection, supported by the middleware underneath. The difficulties associated with such a range of connection characteristics are further compounded by the fact that these characteristics can change in an unanticipated manner. For example, disconnections occur when the device moves out of range of wireless connections, or when an interface device is suddenly disconnected, as when a user abruptly removes the device from a synchronization cradle or removes a networking device currently in use. A further issue with such a varied collection of communication technologies that can be leveraged for mobile computing is that the user may not wish to fully use the available resources in an eager or greedy manner to maintain data connectivity. For example, even if a GPRS connection is available, such a connection is generally much more expensive than available wireless connections. A further example is the case where, although currently disconnected and with connections available, the user may be aware that a cheaper or more convenient connection resource will soon be available, i.e., something that cannot be anticipated in a generalized manner by the adaptable middleware platform. For these reasons, it is imperative that the added potential of the user’s own resources, preferences, and intelligence is exploited.
Reflective Middleware
Principles and Key Ideas
A reflective computational system is one that reasons about its own computation. This is achieved by the system maintaining a representation (metadata) of itself that is causally connected to its own operation, so that if the system changes its representation of itself, the system adapts [22]. With behavioral reflection in an object-oriented system, the reflective system reasons about and adapts its own behavior by associating meta objects with the objects in the application, where the meta objects control or adapt the behavior of the application objects [12]. In a reflective system, the communications between the meta objects and base objects take place through a set of well-defined interfaces, referred to as that system's meta object protocol (MOP) [20].
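As a concrete illustration, the causal connection between the meta level and the base level can be sketched in a few lines of Java, using a dynamic proxy in place of a full MOP. All class and method names here are invented for illustration and do not come from any of the surveyed systems.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Minimal sketch of behavioral reflection: a meta object intercepts
// invocations on a base object through a well-defined interface (a
// toy meta object protocol).
public class MetaLevelDemo {

    // Base-level application interface.
    public interface Greeter {
        String greet(String name);
    }

    // The meta object: reasons about and may adapt each base-level call.
    public interface MetaObject {
        Object handleInvocation(Object base, Method m, Object[] args) throws Exception;
    }

    // Default meta object: pass the call through unchanged.
    public static class PassThrough implements MetaObject {
        public Object handleInvocation(Object base, Method m, Object[] args) throws Exception {
            return m.invoke(base, args);
        }
    }

    // Adapted meta object: uppercase every result, a behavioral change
    // installed without touching the base object's code.
    public static class Shouting implements MetaObject {
        public Object handleInvocation(Object base, Method m, Object[] args) throws Exception {
            Object result = m.invoke(base, args);
            return result instanceof String ? ((String) result).toUpperCase() : result;
        }
    }

    // Causal connection: the proxy always consults the currently
    // installed meta object, so swapping it adapts the system.
    public static class MetaLevel implements InvocationHandler {
        public volatile MetaObject meta = new PassThrough();
        private final Object base;
        public MetaLevel(Object base) { this.base = base; }
        public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
            return meta.handleInvocation(base, m, args);
        }
    }

    public static void main(String[] args) {
        Greeter base = name -> "hello " + name;
        MetaLevel level = new MetaLevel(base);
        Greeter adapted = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, level);

        System.out.println(adapted.greet("world"));   // hello world
        level.meta = new Shouting();                  // runtime adaptation
        System.out.println(adapted.greet("world"));   // HELLO WORLD
    }
}
```

Swapping the `meta` field while the program runs changes the observed behavior of the base object, which is exactly the causal connection that the reflective systems below exploit on a much larger scale.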
Case Studies of Reflective Middleware
Although a number of reflective middleware frameworks are discussed in detail in previous chapters, this section discusses two additional reflective systems which target middleware for dynamic adaptation. In addition, a number of systems described later in this chapter make use of reflective techniques, but are discussed under a different category.
ACT
ACT [35, 36] is a generic adaptation framework for CORBA-compliant [25] ORBs that supports unanticipated dynamic adaptation. When the ORB is started, ACT is enabled by registering a specific portable request interceptor [25], which intercepts every remote invocation request and hands it to a set of dynamically registered interceptors. These dynamically registered interceptors can be added in an unanticipated manner. Rule-based dynamic interceptors allow the request to be redirected to a different source, or handed either to a number of local proxy components exporting the same interface as that of the destination server component [35] or to a generic local proxy component [36]. This generic proxy component can also be dynamically created in an unanticipated manner. The proxy in turn can consult a rule-based decision-making component, which can incorporate an event service, to either perform the invocation, or change parameters and forward the request to its original destination or a different one.
A prototype is described whereby the Quality Objects (QuO) framework [2], an aspect-oriented QoS adaptation framework for CORBA ORBs, was used with a CORBA-compliant ORB, to support completely unanticipated runtime aspect weaving in the ORB. A number of management interfaces were also provided to manage the runtime registration of new rule-based dynamic interceptors, and the addition of new rules to these interceptors.
Correlate
Presented by the DistriNet research group at Katholieke Universiteit Leuven, Correlate [16, 33, 34, 40] is a concurrent object-oriented language based on C++ (and later Java) to support mobile agents. It has a flexible runtime engine to support migration and location-independent inter-object communication. Each agent object has an associated meta object that can intercept creation, deletion, and all invocation messages for the object. This allows non-functional aspects of the application to be separated from the application object. The non-functional behaviors are designed to be largely application independent; however, independent policy objects can be defined to contain application-specific information to assist in the operation of these meta-level non-functional behaviors. The meta-level system was initially used to implement non-functional concerns such as real-time operation, load balancing, security, and fault tolerance. Later the system was used to customize ORBs, using application-specific requirements, as an adaptable graph of meta-level components that could be extended or adapted at runtime.
The application-independent non-functional behaviors are implemented as meta object classes, which can interact with the base program to adapt its operation using a message-based MOP. These meta object classes define a set of possible property values in a policy template. Each application class has an associated singleton policy-class object, which is an instantiation of these templates and contains application-specific information. These singleton policy-class objects are consulted by the meta-level before performing the non-functional behaviors of the application, allowing the operation to be customized in an application-specific manner.
However, this policy system is limited since policy templates are imposed at the time the meta program is written. These templates, written in a declarative language, must fully define what possible customizations an application may require at a later stage. The policies, also written in the same declarative manner, select values for template properties according to the application class with which they are associated. These templates cannot be changed, so adaptation in response to unanticipated requirements cannot be fully handled. Policies are written before runtime by a system integrator, and these policies are then translated to code and compiled with the application and so cannot be changed at runtime. Unanticipated forms of dynamic adaptation cannot be achieved in this architecture as the meta-level programmer and template designer needs complete a priori knowledge of the possible changes in context values that may occur, and also the set of customizations from which the meta-level can choose is fixed at compile time.
**Discussion**
The use of reflective mechanisms for adaptable middleware is a long-standing yet active research area. The main issue with reflection for the adaptation of middleware lies not with the use of reflection to adapt the structure, behavior, or architecture of middleware, but rather with how the application of those adaptations is controlled and managed. This issue is of particular importance if the adaptation is required in response to an unanticipated change in the state, requirements, or context of the users, applications, or environment.
Aspect-Oriented Approaches to Dynamic Adaptation
Principles and Key Ideas
Aspect-oriented programming (AOP) [13, 21] is a programming methodology that allows cross-cutting concerns to be declared as "aspects". A cross-cutting concern is a property or function of a system that cannot be cleanly declared in terms of individual components, because the application of the cross-cutting concern must be scattered or distributed across otherwise unrelated components. AspectJ [42], the de facto standard for AOP, introduced the concept of an aspect as a language construct, used to specify a modular unit to encapsulate a cross-cutting concern, which is then "woven" into the application code at compile time. An aspect is defined in terms of "pointcuts" (a collection of "join point" locations within the application code where the aspect should be "woven", and conditional contextual values at those join points), "advice" (code executed before, after, or around a join point when it is reached), and "introductions" (Java code to be introduced into base classes) [42].
AOP supports the production of these aspects in a manner that is separate or “oblivious” [13] to the application components, into which the aspects are later incorporated or woven at a specified or quantified set of join points. “Obliviousness”, one of the key components of AOP, refers to the degree of separation between the aspects of the system and how they can be developed independently without preparation, cooperation, or anticipation. Most AOP systems support weaving before runtime, but newer dynamic AOP systems (e.g., Wool and PROSE) described in this section allow aspects to be woven at load-time or runtime, thereby allowing the incorporation of aspects into base programs to remain unanticipated until load-time or runtime.
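In plain Java, the before/after advice model can be mimicked, without a real weaver, by registering advice against named join points at runtime while the base code stays oblivious to it. This is an illustrative sketch only; the join-point names and the registry API are invented, and a real weaver such as AspectJ operates on bytecode rather than on an explicit registry.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy "weaver": advice registered at runtime against named join points,
// executed around the base computation.
public class AdviceDemo {
    static final Map<String, List<Runnable>> before = new HashMap<>();
    static final Map<String, List<Runnable>> after = new HashMap<>();
    public static final List<String> log = new ArrayList<>();

    public static void adviseBefore(String joinPoint, Runnable advice) {
        before.computeIfAbsent(joinPoint, k -> new ArrayList<>()).add(advice);
    }
    public static void adviseAfter(String joinPoint, Runnable advice) {
        after.computeIfAbsent(joinPoint, k -> new ArrayList<>()).add(advice);
    }

    // The "woven" call site: run before-advice, the base code, then after-advice.
    public static void proceed(String joinPoint, Runnable base) {
        before.getOrDefault(joinPoint, List.of()).forEach(Runnable::run);
        base.run();
        after.getOrDefault(joinPoint, List.of()).forEach(Runnable::run);
    }

    public static void main(String[] args) {
        // A cross-cutting logging concern added at runtime, developed
        // separately from the base code it wraps.
        adviseBefore("send", () -> log.add("before send"));
        adviseAfter("send", () -> log.add("after send"));
        proceed("send", () -> log.add("sending"));
        System.out.println(log); // [before send, sending, after send]
    }
}
```

The dynamic AOP systems surveyed next differ from this toy in that they hook join points into already-compiled classes (via breakpoints, bytecode rewriting, or a modified JVM) instead of requiring explicit `proceed` call sites.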
Case Studies of Dynamic Aspect-Oriented Systems
Wool
Wool [38] is a dynamic AOP framework that takes a hybrid aspect weaving approach, using both the Java Platform Debugger Architecture (JPDA) and the Java HotSwap mechanism [39]. Since JPDA supports remote activation of breakpoints at runtime, join point hooks in the form of debugging breakpoints can be dynamically set from outside of the application. A pointcut may be made up of a number of these hooks. Each aspect specifies a pointcut, and a set of advices to be executed when one of the pointcut's join points (represented as breakpoints) is reached.
New aspects can be serialized and sent to the target JVM for weaving at any pointcut. In one approach, when a join point is encountered, the inserted breakpoint redirects the operation to the Wool runtime component in a manner similar to a debugger, where advices are then executed. The alternative approach allows the advice to be hotswapped into the application class, thereby improving performance if the join point is encountered repeatedly. This is achieved by using Javassist [7] to rewrite the class, without access to its source code, and have the adapted class replace the original application class using the Java HotSwap mechanism. This also removes the breakpoint, so calls to the debugger are removed. However, this mechanism means that all objects of the woven class will have the adaptation incorporated, so individual objects cannot be adapted. Currently the aspect programmer must specify in the aspect's source code whether the advice should be woven by the HotSwap mechanism or by the debug interface, so in order to achieve good performance the aspect writer should anticipate the access patterns of the aspect's pointcut. Wool does not support adding introductions, although a solution is proposed.
**PROSE**
PROSE [26, 29] is another dynamic AOP framework for Java that supports runtime aspect weaving. PROSE was originally intended as a framework for debugging or rapid prototyping of AOP systems, which could later be completed using compile-time or load-time aspect weaving [29]. This was mainly due to its use of the Java Virtual Machine Debug Interface (JVMDI) [39], which resulted in a large performance penalty. A later version of PROSE [26] was implemented by modifying an open source JVM, greatly improving its performance. In both versions, new aspects can be dynamically woven, and these aspects may define new join points, for which new interception hooks are created at weave time, thereby allowing PROSE to support dynamic adaptation by weaving additional non-functional behaviors into the code at runtime. A number of graphical user interfaces are included to manage the unanticipated weaving of new aspects at runtime. However, like Wool above, PROSE only supports weaving at the class level; individual objects cannot be adapted.
MIDAS [27], implemented as a Spontaneous Container [28], is a middleware for the management of PROSE extensions which provides a distributed event-based system for the dissemination and management of aspects from a central server to mobile computers based on their location.
**TRAP/J**
TRAP/J [37] is a prototype unanticipated dynamic adaptation framework for Java. It combines compile-time aspect weaving using AspectJ [42] with unanticipated dynamic adaptation using wrapper classes and delegate classes. At compile time the programmer selects a subset of application classes that will be adaptable. The TRAP/J system then automatically creates AspectJ code to replace all instantiations of the selected classes with wrapper class instantiations. Java code for each wrapper class, and a meta object class for that wrapper class, is also automatically created. At runtime, each instantiated wrapper object has an instance of the original wrapped object and a meta object bound to it. These wrapper objects redirect all method calls to their meta object, which in turn acts as a placeholder for a set of delegate objects that may handle the invocation of the method, or adjust its parameters prior to execution by the original wrapped object. New, dynamically created delegates can be added or removed at runtime via an RMI [39] interface using a management console. These delegates can be added on a per-object basis since the meta objects can supply a name for each instance and register it in an RMI registry.
This framework was used to demonstrate the dynamic adaptation of a network-enabled application by replacing instances of the `java.net.MulticastSocket` class with instances of an adaptable socket class `MetaSocket` [18]. The TRAP/J framework, however, does not support completely unanticipated dynamic adaptation. The adaptation, its intelligent and controlled dynamic application, and the timing of its application all remain unanticipated until runtime, but the possible locations for the adaptations are specified in the application source code, since the version of AspectJ used requires access to the application source code. Although this restriction improves the performance of the TRAP/J framework, it greatly limits the nature of the unanticipated dynamic adaptations that can be applied. No information is provided about whether the generated meta object class code can be modified prior to compilation and weaving.
In addition, TRAP/J appears to delegate the invocation of a method to only one delegate, the first one it finds that implements the method, although this ordering of delegates can be configured. This means that only one adaptation can be applied at a time, since adaptation behaviors are not automatically composed. TRAP/J also does not appear to allow the user to apply an easily recognizable name to the base object being adapted, which may make it difficult for the user to identify the object to which adaptations should be dynamically applied. From the documentation, TRAP/J does not seem to support applying dynamic adaptations via new delegates on a structured class-wide or interface-wide basis, since RMI registry lookups are on a per-meta-object basis. Unlike Wool and PROSE above, which only support the adaptation of classes, TRAP/J only supports the adaptation of individual objects.
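The wrapper/meta object/delegate chain described above can be sketched in a few lines of Java. This is a hypothetical reconstruction of the dispatch scheme as described in the text, not TRAP/J's generated code; every class name below is invented.

```java
import java.lang.reflect.Method;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch: a wrapper forwards every call to a per-object meta object,
// which hands the invocation to the first registered delegate that
// implements the method, falling back to the wrapped base object.
public class DelegateDemo {
    public interface Channel { String send(String msg); }

    public static class BaseChannel implements Channel {
        public String send(String msg) { return "base:" + msg; }
    }

    // A dynamically added delegate that takes over send().
    public static class CompressingDelegate {
        public String send(String msg) { return "compressed:" + msg; }
    }

    public static class MetaObject {
        public final List<Object> delegates = new CopyOnWriteArrayList<>();
        private final Object base;
        public MetaObject(Object base) { this.base = base; }

        // Dispatch to the first delegate whose class declares the method.
        public Object invoke(String name, Class<?>[] types, Object[] args) throws Exception {
            for (Object d : delegates) {
                try {
                    Method m = d.getClass().getMethod(name, types);
                    return m.invoke(d, args);
                } catch (NoSuchMethodException e) { /* try the next delegate */ }
            }
            return base.getClass().getMethod(name, types).invoke(base, args);
        }
    }

    // Stand-in for the wrapper class that TRAP/J generates automatically.
    public static class WrappedChannel implements Channel {
        public final MetaObject meta = new MetaObject(new BaseChannel());
        public String send(String msg) {
            try {
                return (String) meta.invoke("send", new Class<?>[]{String.class}, new Object[]{msg});
            } catch (Exception e) { throw new RuntimeException(e); }
        }
    }

    public static void main(String[] args) {
        WrappedChannel c = new WrappedChannel();
        System.out.println(c.send("hi"));                // base:hi
        c.meta.delegates.add(new CompressingDelegate()); // runtime adaptation
        System.out.println(c.send("hi"));                // compressed:hi
    }
}
```

Note how the sketch reproduces the limitation discussed above: only the first matching delegate runs, so two adaptation behaviors are never composed automatically.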
Discussion
Dynamic AOP technologies would appear to be a promising area of research for dynamically adaptable middleware. Not only can aspects be used to implement non-functional concerns within the middleware, but also to adapt or augment the functional behavior of the middleware [21]. This ability to dynamically adapt functionality or inject new functionality at clearly defined join points is of particular importance to middleware for mobile computing, since dynamic and possibly unanticipated adaptation requirements are typical of mobile computing. The "separation of concerns" model of aspects reduces the difficulty of incorporating adaptations into complex middleware frameworks, since the introduced cross-cutting concerns can be targeted correctly to the location requiring adaptation.
However, current dynamic AOP methodologies such as Wool, PROSE, and TRAP/J lack a structured mechanism for dynamically specifying the locations for dynamic adaptation, and how these adaptations should be applied, after the target software has started execution, in a manner that incorporates user, application, and environmental context at runtime. Despite this, dynamic AOP-based adaptation of middleware is proving to be an active area of research and should quickly provide a number of solutions to this issue.
Policy-Based Management of Dynamic Adaptation
Principles and Key Ideas
Many traditional adaptable systems are composed of a single adaptation manager that is responsible for the entire adaptation process: monitoring, adaptation selection intelligence, and performing the actual adaptation. Since the intelligence to select appropriate adaptations and the mechanism to perform these adaptations are embedded directly within the adaptation manager, this type of system becomes inflexible and inappropriate for general use. Decoupling the adaptation mechanism from the adaptation manager, and removing the embedded intelligence mechanism that selects or triggers adaptations, makes the adaptation manager more scalable and flexible. Policy specifications maintain a very clean separation of concerns between the adaptations available, the adaptation mechanism itself, and the decision process that determines when these adaptations are performed.
Policy specification documents are usually persistent text-based declarative representations of policy rules that ideally can be read, understood, and generated by users, programmers, and applications. A policy rule is defined as a rule governing the choices in behavior of a managed system [8]. Informally, a policy rule can be regarded as an instruction or authority for a manager to execute actions on a managed target to achieve an objective or execute a change.
An adaptation policy rule is usually made up of an event specification that triggers the rule, which is often fired as a result of a monitoring operation; an action to perform in response to the trigger; and a target object that is part of the managed system upon which that action is performed [8]. Many policies will also contain some restrictions or guards confining the rule action to appropriate occasions. This event-condition-action (ECA) format is standard for rule-based adaptation systems [4, 5, 6, 8, 9, 16, 19, 33, 34, 35, 36, 40], where an adaptation management system is responsible for monitoring these events, evaluating the conditions, and initiating the management action on the targeted managed object. In a policy-based dynamic adaptation system it should be possible to edit the rule set and have it re-interpreted, to support the dynamic addition of new rules or changes in policy.
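A minimal sketch of this ECA structure, with invented names and a toy event type, might look as follows. Real policy engines parse rules from a declarative document; here the rules are built in code purely to show the event/condition/action decomposition.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Toy event-condition-action rule engine: when an event is fired, each
// rule whose event name matches and whose condition (guard) holds
// executes its action. Rules can be added or replaced at runtime,
// mirroring dynamic re-interpretation of an edited rule set.
public class EcaDemo {
    public static class Event {
        public final String name;
        public final int value;
        public Event(String name, int value) { this.name = name; this.value = value; }
    }

    public static class Rule {
        final String event;                 // event specification (trigger)
        final Predicate<Event> condition;   // guard restricting the action
        final Consumer<Event> action;       // action on the managed target
        public Rule(String event, Predicate<Event> condition, Consumer<Event> action) {
            this.event = event; this.condition = condition; this.action = action;
        }
    }

    public static class RuleEngine {
        public final List<Rule> rules = new ArrayList<>();
        public void fire(Event e) {
            for (Rule r : rules)
                if (r.event.equals(e.name) && r.condition.test(e)) r.action.accept(e);
        }
    }

    public static void main(String[] args) {
        List<String> actions = new ArrayList<>();
        RuleEngine engine = new RuleEngine();
        // "On a bandwidth event below 64 kbit/s, switch to a low-bandwidth protocol."
        engine.rules.add(new Rule("bandwidthChanged", e -> e.value < 64,
                e -> actions.add("useLowBandwidthProtocol")));
        engine.fire(new Event("bandwidthChanged", 512)); // guard false: no action
        engine.fire(new Event("bandwidthChanged", 32));  // guard true: action runs
        System.out.println(actions); // [useLowBandwidthProtocol]
    }
}
```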
**Case Studies of Policy-Based Middleware**
This section discusses two systems that employ policy-based management techniques to manage the dynamic adaptation of middleware, although the ACT, TRAP/J, and Correlate systems described earlier could also be characterized by their use of policy rule-based techniques. A number of mechanisms discussed in other chapters could likewise be described in terms of their use of rule-based management mechanisms.
**RAM**
RAM (Reflection for Adaptable Mobility) [4, 9] from École des Mines de Nantes takes the approach of completely separating the functional and non-functional aspects of an application, in a manner related to aspect-oriented programming (AOP). Using this separation-of-concerns approach, only the core application functionality is inserted into the application code, with all middleware services represented as non-functional concerns. *Container* meta objects wrap each application object, and support the composition of other meta objects which implement these non-functional concerns. The wrapping of application objects with *Containers* occurs either at load-time using Javassist [7] in [4] or at compile-time using AspectJ [42] in [9]. These meta objects provide the middleware services by selecting appropriate *RoleProvider* objects for each service, i.e., the meta objects that provide the actual implementations of the services. Adaptation can occur by adding, removing, or reordering these *RoleProviders*.
RAM also provides a resource manager, whereby the system maintains a tree of *MonitoredResource* objects, which describe a contextual resource or group of resources. These *MonitoredResource* objects are updated by *probe* objects that actively monitor the environment. *MonitoredResource* objects can be queried explicitly or alternatively by requesting change notifications to signal the adaptation engine when an interesting resource change occurs. The *Container* meta objects, that wrap each application component, can also expose the *MonitoredResource* interface, supporting queries of application context as resources, thereby exploiting application-specific knowledge in the adaptation process.
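The resource-monitoring pattern described here (resources updated by probes, queryable explicitly or via change notifications) can be sketched as follows. The API is invented for illustration and differs from RAM's actual classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

// Sketch of a monitored resource: it holds the last value written by a
// probe, can be queried explicitly, and notifies registered listeners
// (e.g., an adaptation engine) when an interesting change occurs.
public class ResourceDemo {
    public static class MonitoredResource {
        private int value;
        private final int threshold;   // notify only on sufficiently large changes
        private final List<IntConsumer> listeners = new ArrayList<>();

        public MonitoredResource(int initial, int threshold) {
            this.value = initial;
            this.threshold = threshold;
        }

        public int query() { return value; }                 // explicit query
        public void onChange(IntConsumer l) { listeners.add(l); }

        // Called by a probe that actively monitors the environment.
        public void probeUpdate(int newValue) {
            int old = value;
            value = newValue;
            if (Math.abs(newValue - old) >= threshold)
                listeners.forEach(l -> l.accept(newValue));
        }
    }

    public static void main(String[] args) {
        List<Integer> notifications = new ArrayList<>();
        MonitoredResource bandwidth = new MonitoredResource(100, 50);
        bandwidth.onChange(notifications::add);
        bandwidth.probeUpdate(90);   // small change: no notification
        bandwidth.probeUpdate(10);   // large change: adaptation engine signalled
        System.out.println(bandwidth.query() + " " + notifications); // 10 [10]
    }
}
```

In RAM the same interface is also exposed by the *Container* meta objects themselves, so application state can be queried through the identical mechanism as environmental resources.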
The set of meta objects (aspects) to use in each *Container* is adapted at runtime by means of an adaptation engine that uses both an application policy and a system policy, both written in a declarative Scheme-like language and passed to the adaptation engine when the application is started. The application policy defines pointcuts (a dynamic set of join points, i.e., *Container* objects) in the application, and the named non-functional aspects to be used at these pointcuts, in an application-aware but resource-independent manner. The set of rules that determines which join points make up a pointcut is also specified in the application policy, but these rules are dynamically evaluated, so this set of join points can change dynamically. The non-functional aspects woven at these pointcuts are defined in the system policy in an adaptive condition-action model, where sets of application-independent but resource-aware conditions are dynamically evaluated to decide which meta objects will implement the non-functional aspect. When the conditions are dynamically evaluated, the bindings of meta objects can be changed, in a manner similar to dynamic aspect weaving. Therefore, the set of join points that make up a pointcut, and the set of meta objects that implement an aspect, can both be dynamically specified according to the rules in the policies. The current system does not support dynamic changes to the policies, and so cannot support unanticipated adaptation management logic; however, this is planned for future versions.
In most cases where AspectJ is used, access to the source code of the application is also required. One version of RAM suggests using a configuration file to specify the set of join points that can be used, and using AspectJ to create these join points at compile time rather than having *Containers* wrap every application object [11]. This means, however, that all possible locations for adaptation must be anticipated at compile time, and that access to the source code of the application is required. Preliminary designs for an adaptation framework extending RAM, which would possibly support completely unanticipated adaptation by allowing dynamic specification of policies and dynamic selection of adaptation locations, are presented in [10], but this system has yet to be implemented.
CARISMA
Research carried out at University College London on the CARISMA project [5, 6] presents a design for peer-to-peer middleware based on service provision. Each node can export services and possibly different behaviors or implementations for those services. Services can be selected according to user and application context information, as specified in an "Application Profile", an XML policy document. Embedded in this application profile is the application-specific information that the middleware uses when binding to these services, e.g., which service behavior to use in response to changes in the execution context. The middleware is responsible for maintaining a view of the system environment by directly querying the underlying network-enabled operating system. Applications may request to view and change their profiles at runtime, thereby adapting the middleware as application-specific and user-specific requirements change dynamically.
This system also provides the ability for the application to be informed by the middleware of changes in specific execution conditions, supporting the development of resource-aware applications. This system is based on the provision of multiple implementations of the same service with different behaviors, in a manner similar to the Strategy Design Pattern rather than adapting the service itself. The primary contribution of this work focuses on the identification and resolution of profile conflicts [6], and not on the actual provision of an adaptable middleware implementation. No information is provided about how the services are implemented, if they can be dynamically loaded, how they implement their different strategies, or if these strategies can be expanded at runtime. However, it should be noted that the application profile that controls how the system adapts, and the mechanism for profile conflicts, can both be adapted at runtime in an unanticipated manner.
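The "multiple implementations selected by profile" model described above can be sketched as follows. The service, behavior, and context names are invented, and the profile is a plain map rather than CARISMA's XML document.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of service selection in the style of the Strategy design
// pattern: an application profile maps an execution context to the
// named behavior of a service, and the middleware binds to that
// behavior when the service is requested.
public class ProfileDemo {
    public interface MessagingBehavior { String deliver(String msg); }

    public static class PlainDelivery implements MessagingBehavior {
        public String deliver(String msg) { return "plain:" + msg; }
    }
    public static class CompressedDelivery implements MessagingBehavior {
        public String deliver(String msg) { return "gz:" + msg; }
    }

    public static class Middleware {
        // Context condition -> behavior name (the "application profile");
        // applications may rewrite this map at runtime.
        public final Map<String, String> profile = new HashMap<>();
        private final Map<String, Supplier<MessagingBehavior>> behaviors = new HashMap<>();

        public Middleware() {
            behaviors.put("plain", PlainDelivery::new);
            behaviors.put("compressed", CompressedDelivery::new);
        }

        // Bind to the behavior the profile prescribes for the current context.
        public MessagingBehavior bind(String context) {
            return behaviors.get(profile.getOrDefault(context, "plain")).get();
        }
    }

    public static void main(String[] args) {
        Middleware mw = new Middleware();
        mw.profile.put("lowBandwidth", "compressed");              // profile entry
        System.out.println(mw.bind("lowBandwidth").deliver("hi")); // gz:hi
        mw.profile.put("lowBandwidth", "plain");                   // runtime profile change
        System.out.println(mw.bind("lowBandwidth").deliver("hi")); // plain:hi
    }
}
```

The limitation noted in the text is visible here too: the set of behavior implementations is fixed when the middleware is built, and only the selection among them is adapted at runtime.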
XMIDDLE [23], which appears to form the basis for CARISMA, is a peer-to-peer data sharing middleware for mobile computing. In XMIDDLE, data is replicated as XML trees pending disconnections, with these trees reconciled when possible in a policy-based manner according to application specific conflict resolution data embedded in the shared data structures.
**Benefits of Policy-Based Management of Dynamic Adaptations**
An adaptable system that has its adaptation logic encoded directly into it cannot operate in a general-purpose manner or adapt in response to unanticipated changes, a situation that often arises with an enabling technology such as middleware operating where the operating context changes erratically, as in mobile computing. The use of a policy-based control model allows the clean decoupling of adaptation logic from the adaptation mechanism used by the adaptation framework.
The control logic to manage the dynamic application of an adaptation must be capable of specifying what adaptation should be applied, where and when it should be applied, and, if necessary, conditions to restrict the application of the adaptation. Since many dynamic adaptations are required precisely because some state, resource, or requirement has changed for the user, application, or execution environment, this dynamically specified control logic must also support the querying of this runtime context. Dynamic loading and interpretation of policy directives can also be used to support the management of new unanticipated adaptations, by allowing those new adaptations to be referred to dynamically, along with where they should be applied and what management logic should be used to control how and when they are applied.
Chisel and ALICE: A Policy-Based Reflective Middleware for Mobile Computing
This section describes the Chisel dynamic adaptation framework, and how it can be used with ALICE, a middleware for mobile computing, to create a dynamically adaptable middleware, which can be used to adapt a standard network application in an unanticipated manner to operate in a mobile computing environment.
Chisel
The Chisel dynamic adaptation framework [19], developed in Trinity College Dublin, supports the application of arbitrary completely unanticipated dynamic adaptations to compiled Java software, as it runs. An adaptation is “completely unanticipated” if the behavioral change contained in the adaptation, the location at which that adaptation is to be applied, the time when that adaptation will be applied, and the control logic that controls the application of the adaptation, can all remain unanticipated until after the target software has started execution [19].
The adaptations are achieved by dynamically associating Iguana/J metatypes [31, 32] with any application object or class and so changing their behavior on the fly, without regard to the type of the object or class, and indeed without access to its source code. The metatype of a class or object represents some coherent internal behavior change from its original source code behavior [31], i.e., a behavioral change associated with the class or object. In Iguana/J, metatypes are implemented using custom MOPs, i.e., by deciding which parts of the object model to reify, writing a set of meta object classes for these reifications to implement the new metatype behavior, then associating that metatype implementation with an object or class. In the Iguana literature, the terms "metatype association" and "MOP selection" are closely related and refer to this association of MOP implementations with objects and classes. This association mechanism is performed using runtime behavioral reflection techniques, whereby selected parts of application objects and classes are reified and intercepted, and the new metatype behavior is inserted at this interception point. Iguana/J supplies the framework to instantiate these meta objects to reify the object model, and to correctly order metatypes if more than one is selected. Iguana/J provides a mechanism to associate new metatypes with objects and classes at runtime, thereby changing the behavior of the system on the fly.
The execution of a new behavior embedded in the meta level can then occur alongside or around the original behavior of the target object, by wrapping the behavior of the target object and adapting or tailoring the intercepted operation, or by introducing the new behavior before, after, or instead of the intercepted operation. New metatypes can be defined at any time and compiled offline using the Iguana/J metatype compiler, even as a target application is running. In this way the adaptations to be applied can remain unprepared and unanticipated until they are needed. When a metatype is associated with a class, the behaviors that are changed are the "static" behaviors of the class, the behaviors of each current and future instance of the class, and the behaviors of all subclasses and their current and future instances. Here static refers to the behavior and data embedded in a class, instead of in each of its instances; for example, static methods, static data fields, and class initialization procedures, implemented using the static keyword in Java and C++.
The dynamic associations of these metatypes are driven by a dynamically specified and interpreted policy script. Using this policy script, the user can specify which classes or named objects should be adapted, either in a proactive manner or in a reactive event-based manner. The Chisel policy language, described in detail in [19], also supports the dynamic definition of new event types for use in reactive rules. In addition, the Chisel policy language allows events to be dynamically fired by other rules or in response to changes in dynamically specified contextual conditions. In this manner, the timing and control logic for any dynamic metatype association can remain unspecified, and so unanticipated, until runtime. By dynamically creating a new policy, specifying which class or object to adapt, and specifying which named metatype to associate, the location of the adaptation can also remain unanticipated until runtime.
Together, this use of runtime behavioral reflection and of runtime specification and interpretation of adaptation policies allows the Chisel framework to support the completely unanticipated dynamic adaptation of any running Java application, without stopping it, and without access to its source code.
**ALICE**
ALICE [3, 15, 41], also developed in Trinity College Dublin, is an architectural middleware framework that supports network connectivity in a mobile computing environment by providing a range of client/server protocols (Figure 15.1). In ALICE, "mobile hosts" are mobile devices, which may interact with fixed computers called "fixed hosts". These connections are tunneled through "mobility gateways", which are also fixed computers. The mobile host can become disconnected from a mobility gateway and later become reconnected to a different mobility gateway without interfering with the virtual connection to the fixed host.
The ALICE mobility layer handles communications between devices by overriding socket functions while hiding which communication interface is being used for the connection. The mobility layer tracks available connections and picks one using a reconfigurable selection algorithm. When a disconnection occurs, the ALICE mobility layer will synchronously queue unsent data between the mobile host and the mobility gateway until a connection is re-established.
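This queuing behavior can be illustrated with a small, self-contained sketch. It is not ALICE's implementation; the class and field names are invented, and a plain list stands in for the link to the mobility gateway.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch: while disconnected, outgoing data is buffered; on
// reconnection (possibly to a different gateway) the queue is flushed
// in order, so the application never observes the disconnection.
public class MobilityLinkDemo {
    public static class MobilityLink {
        private boolean connected = true;
        private final Deque<String> pending = new ArrayDeque<>();
        public final List<String> wire = new ArrayList<>(); // stand-in for the gateway link

        public void send(String data) {
            if (connected) wire.add(data);
            else pending.add(data);                  // queue while disconnected
        }
        public void disconnect() { connected = false; }
        public void reconnect() {
            connected = true;
            while (!pending.isEmpty()) wire.add(pending.poll()); // flush in order
        }
    }

    public static void main(String[] args) {
        MobilityLink link = new MobilityLink();
        link.send("a");
        link.disconnect();
        link.send("b");        // cached, not lost
        link.send("c");
        link.reconnect();
        System.out.println(link.wire); // [a, b, c]
    }
}
```

In the real system this buffering happens symmetrically at both the mobile host and the mobility gateway, which is what lets the virtual connection to the fixed host survive gateway handovers.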
For this case study a full Java implementation of the ALICE mobility layer was completed, based on the work presented in [41]. It provides a class `MASocket` that contains the ALICE connection behavior and implements a socket interface similar to the standard Java socket class, `java.net.Socket`. When the `MASocket` class is used instead of the standard Java socket, all messages from a mobile host to a fixed host are redirected via a mobility gateway. When the connection between the mobile host and the mobility gateway breaks, all network data is cached at the mobile host and the mobility gateway for later reconnection. This disconnection and reconnection happens without the application being made aware of the disconnection.
**Chisel and ALICE**
To demonstrate the use of the Chisel dynamic adaptation framework, an off-the-shelf application, "The Java Telnet Application/Applet" [17], was adapted to operate in a mobile computing environment by dynamically adapting it to use the ALICE mobility layer, all without stopping the application and without changing or requiring access to its source code in any way. The only initial assumption made about the internal programming of the application was that a standard Java socket, or a subclass of java.net.Socket, is used to connect the client and the telnet server, a reasonable assumption for any network-enabled Java application.
A metatype, DoAliceConnection, was developed to intercept the creation of the socket connection to the telnet server and replace the socket in use with an instance of the ALICE MASocket. The metatype definition below specifies that the reified creation of objects should be intercepted and handled by the MetaObjectCreateALICEConn metaobject class.
```plaintext
protocol DoAliceConnection {
    reify Creation: MetaObjectCreateALICEConn();
}
```
This redirection behavior was embedded in the metaobject class MetaObjectCreateALICEConn, as shown below. The redirection is achieved by intercepting the creation of all socket objects; if the connection is not a localhost connection or one used by ALICE itself, the Java reflection API is used to replace the java.net.Socket constructor with the MASocket constructor. The application is completely unaware of the change, since the returned MASocket extends java.net.Socket and exposes the same interface.
```java
class MetaObjectCreateALICEConn extends ie.tcd.iguana.MCreate {
    public Object create(Constructor cons, Object[] args) ... {
        if (/* not a localhost connection, or a connection used by ALICE */) {
            // Change the constructor from java.net.Socket to MASocket
            cons = (Class.forName("MASocket")).getConstructor(...);
        }
        Object result = proceed(cons, args); // create the socket
        return result; // result is either a normal socket or an MASocket
    }
}
```
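The core trick of this metaobject, swapping the requested constructor for a subclass constructor via reflection so the caller transparently receives the subtype, can be demonstrated standalone. Conn and AliceConn below are hypothetical stand-ins for java.net.Socket and MASocket; the real interception point in Chisel is the metaobject shown above.

```java
import java.lang.reflect.Constructor;

// Standalone sketch of the constructor-swap: reflection picks a subclass
// constructor, and the caller receives the subtype through the supertype.
public class SwapDemo {
    public static class Conn {
        public String describe() { return "plain"; }
    }
    public static class AliceConn extends Conn {
        @Override public String describe() { return "via gateway"; }
    }

    // mimics MetaObjectCreateALICEConn.create(): replace the requested
    // constructor with the subclass one, then instantiate
    public static Conn create(boolean mobile) {
        try {
            Constructor<? extends Conn> cons;
            if (mobile) cons = AliceConn.class.getConstructor();
            else cons = Conn.class.getConstructor();
            return cons.newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because AliceConn is a subtype of Conn, code written against Conn keeps working unchanged, which is exactly why the telnet application never notices the swap.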
This adaptation was then applied to the telnet application in a number of ways using the Chisel policy language [19]. One method was to apply the adaptation in a context-aware manner, i.e., to perform the metatype association only if the application was being used in a mobile computing environment, where the network connection was known to be intermittent. In the adaptation policy rule seen below, the DoAliceConnection metatype is associated with the java.net.Socket class only when the UsingDodgyNet event fires. When the connection moves to a stable network connection, the UsingGoodNet event is fired, thereby re-enabling the use of standard Java sockets.
```plaintext
ON UsingDodgyNet java.net.Socket.DoAliceConnection
```
The event UsingDodgyNet could be fired automatically by the Chisel event manager using an automatic rule definition and trigger rule, or by the Chisel context manager when a wireless connection was detected, or by the user using another event manipulation policy rule, etc. Similarly, the UsingGoodNet event could be fired when the network connection is deemed stable, or by another policy rule, by some network monitoring code, or by the context manager. In [19], a number of methods are presented to describe how these events could be defined and automatically triggered in an unanticipated manner.
**Findings and Further Adaptations**
This case study was fully implemented and functions as expected. It demonstrates the use of the Chisel dynamic adaptation framework to adapt an arbitrary application in a context-aware manner for use in a mobile computing environment, without accessing its source code. The telnet application was not prepared in any way to have the particular adaptation applied. Only when the adaptation was deemed necessary did the user need to create a set of adaptation rules, similar to the ones above, embedding any necessary context information. Only when these rules triggered the adaptation was it loaded and applied to the unprepared location deep inside the compiled application, without any requirement to change, interrupt, or restart the application. This case study also demonstrates how the operation of a complex compiled application was changed dynamically according to the environment and the user's needs.
Using the Chisel framework further adaptations are also made possible, to both the application and the ALICE middleware framework. This mechanism of dynamically redirecting Java socket connections to ALICE MASocket socket connections could also be used to dynamically adapt the Java RMI middleware model similar to the approach discussed in [3, 15], but in an unanticipated manner. This possible approach could enable the adaptations described in [3], by intercepting the instantiation of both the java.net.Socket and sun.rmi.server.UnicastRef classes. An alternative approach could intercept the operations of the java.rmi.server.RMISocketFactory interface when it is requested to create the actual sockets used to perform remote object invocations, as described in [41].
Although a mobile computing scenario was chosen to demonstrate the Chisel dynamic adaptation framework, this case study equally applies to any environment or operation mode where unanticipated dynamic adaptation is required for satisfactory operation. A mobile computing environment was seen as a perfect example since the state, resources, and requirements of the application, the environment, and the user can all change to extreme values in an unanticipated manner.
**Conclusions**
This chapter has presented a discussion of dynamic adaptation for mobile middleware. The chapter began with a discussion of how unanticipated dynamic adaptation of applications and middleware is required in a mobile computing environment. A number of reflective and aspect-oriented techniques for dynamic adaptation were discussed, paying particular attention to support for unanticipated dynamic adaptation. The chapter then discussed the use of policy-based management to control unanticipated dynamic adaptation in a manner that is itself dynamically adaptable. The chapter continued with an introduction to the Chisel dynamic adaptation framework. Finally, it showed how Chisel, together with ALICE, a middleware for mobile computing, was used to adapt an off-the-shelf network application to operate in a mobile computing environment in a completely unanticipated manner.
**References**
Automatic generation of parallel and coherent code using the YAO variational data assimilation framework
Luigi Nardi, Julien Brajard, Sylvie Thiria, Fouad Badran, Pierre Fortin
To cite this version:
Luigi Nardi, Julien Brajard, Sylvie Thiria, Fouad Badran, Pierre Fortin. Automatic generation of parallel and coherent code using the YAO variational data assimilation framework. 2016. <hal-00783328v2>
HAL Id: hal-00783328
http://hal.upmc.fr/hal-00783328v2
Submitted on 20 Jun 2016
Automatic generation of parallel and coherent code using the YAO variational data assimilation framework
Luigi Nardi\textsuperscript{1,2}, Julien Brajard\textsuperscript{1}, Sylvie Thiria\textsuperscript{1}, Fouad Badran\textsuperscript{2} and Pierre Fortin\textsuperscript{3}
\textsuperscript{1} LOCEAN, Laboratoire d’Océanographie et du Climat: Expérimentations et approches numériques. UMR 7159 CNRS / IRD / Université Pierre et Marie Curie / MNHN. Institut Pierre Simon Laplace. 4, place Jussieu Paris 75005, France.
\textsuperscript{2} CEDRIC, Centre d’Etude et De Recherche en Informatique du CNAM. EA 1395, 292 rue St Martin Paris 75003, France.
\textsuperscript{3} Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6, F-75005, Paris, France CNRS, UMR 7606, LIP6, F-75005, Paris, France
01/07/2016
Abstract
Variational data assimilation estimates key control parameters of a numerical model to minimize the misfit between model and actual observations. YAO is a code generator based on a modular graph decomposition of the model; it is particularly suited to generating adjoint codes, which is the basis for variational assimilation experiments. We present an algorithm that checks the consistency of the calculations defined by the user. We then present how the modular graph structure enables an automatic and efficient parallelization of the generated code on shared memory architectures avoiding data race conditions. We demonstrate our approach on actual geophysical applications.
1 Introduction
Numerical models are widely used for studying physical phenomena. Most of the time, models are used to forecast or simulate the evolution of a phenomenon. Since a model is imperfect, the discrepancy between its forecast values and the so-called "reality" may be significant due to model parametrizations, numerical discretization, and uncertainties on initial and boundary conditions. Observations, either in-situ or remote-sensing using radars or satellites, give an accurate measurement of physical variables of interest. However, they are not error free, for example due to sensor measurement noise, and they are in general sparse in time and space.
Data assimilation is a theoretical framework that can blend imperfect knowledge from a numerical model and imperfect measurements from an observational system to give an optimal estimate of control parameters (initial conditions, parameters). In this work we focus in particular on variational data assimilation [1], also known as 4D-VAR. This class of methods is widely used in various contexts, e.g. meteorology [2] and oceanography [3], in particular for full three-dimensional models. 4D-VAR is based on the minimization, with respect to the control parameters, of a cost function $J$ which measures the misfit between the direct numerical model outputs and the observations. The minimization is performed using a gradient method, which requires calculating the gradient of $J$ as a function of the control parameters. The gradient computation requires the product of the transposed Jacobian matrix of the direct numerical model with the derivative vector of $J$ defined at the observation points. This product is computed through a numerical model, the so-called adjoint model. Since the direct numerical model is usually very complex, the implementation of the programming code which represents the adjoint model is often a real issue.
The YAO framework, already presented in [4, 5], is a code generator dedicated to variational data assimilation. With the YAO domain-specific language (DSL), the user defines, using specific directives and C programming, the specifications of the numerical model. YAO then automatically generates the numerical and adjoint model codes via C++ object-oriented programming. In practice, if an implementation (e.g. using Fortran or C) of the numerical model already exists, the user may have to recode a non-negligible part of the original code. Nevertheless, in actual YAO applications, it has been observed that the overhead of the implementation in the YAO formalism is far less than the cost needed to implement the adjoint model from scratch. YAO has already been used with success on several actual applications in oceanography: Shallow-water [4, 5], Marine acoustics [6, 7], Ocean color [8], PISCES [9] and the GYRE configuration of NEMO [10]. YAO is distributed under the free software license CeCILL. Documentation and download are available [11].
Numerical models are based on a discretization of the computational space and apply a number of basic functions at these points. Using YAO the user defines the computational space, the basic functions and their interdependencies. The YAO formalism is based on a dependence graph called a modular graph, which is similar to those used in automatic parallelization of nested loops [12]. The traversal of the modular graph allows us to perform all the basic calculations of the numerical model. The user describes, using YAO-specific directives, a traversal of the graph in the form of nested loops, which must be consistent with the different dependencies defined by the modular graph. This task is not trivial for a complex numerical model, thus it is important to check the coherence of the directives defined by the user for a traversal. We present in
this paper an algorithm which allows YAO to automatically check the coherence of a traversal and to detect inconsistencies.
In the field of automatic parallelization of nested loops, several concepts and algorithms have been introduced; these algorithms enable the analysis of nested loops, as well as their decomposition and fusion [12, 13, 14]. The decomposition obtained is well suited to a multi-thread parallelization on shared memory architectures where no communication is required. In this paper, we also show how the YAO modular graph enables us to integrate and adapt these algorithms in order to identify the available parallelism, and to allow the automatic generation of parallel code with YAO while completely avoiding the data race conditions (write/write conflicts). With OpenMP directives, it is then possible to generate a multi-threaded parallel code that runs efficiently on shared memory architectures. This is an important improvement over the previous version of YAO [3], which can generate only sequential code. A large community in geophysics may thus automatically and transparently exploit decades of research in automatic parallelization and benefit from important speedups in computation times on multicore architectures without any additional effort and without any knowledge of parallel programming.
For the automatic generation of parallel code, the development of algorithms specific to YAO is necessary. Indeed, the existing software tools for automatic parallelization with OpenMP directives have specific constraints related to their design, and therefore cannot currently be integrated in the YAO generator. For example, the CAPO toolkit [15] supports only Fortran and relies on user interaction to improve the parallelization process. The Gaspard2 framework [16] enables automatic OpenMP code generation, but the available parallelism must first be specified by the user in a UML model. The PLuTo tool [17] can efficiently parallelize nested loops while taking into account, via tiling, data locality on multicore architectures with complex hierarchical memory. However, PLuTo does not currently support object-oriented programming for input source codes and has specific limitations (e.g. only SCoP programs with pure function calls and no dynamic branch conditions), which also prevent a direct integration into YAO. Finally and most importantly, as detailed below, there are data race conditions (write/write conflicts) in the generated code. These race conditions prevent any automatic parallelization by such tools according to their own data-dependency analysis. To our knowledge, none of these tools can automatically insert, for example, OpenMP *atomic* directives to avoid these race conditions and thus enable parallelization. We show here how to accomplish this efficiently in YAO thanks to its modular graph.
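The kind of race at stake, and its atomic fix, can be illustrated outside OpenMP. The Java sketch below is a stand-in for the C++ code YAO actually generates: many back-propagation steps accumulate into the same shared gradient cell, and a thread-safe accumulator (DoubleAdder) plays the role the OpenMP `atomic` directive plays in the generated code.

```java
import java.util.concurrent.atomic.DoubleAdder;
import java.util.stream.IntStream;

// Sum many per-module contributions into one shared cell from a
// parallel loop. DoubleAdder.add is thread-safe, so there is no
// write/write race on the shared accumulator.
public class AtomicAccumulation {
    public static double parallelSum(double[] contributions) {
        DoubleAdder cell = new DoubleAdder();   // one shared gradient cell
        IntStream.range(0, contributions.length)
                 .parallel()
                 .forEach(p -> cell.add(contributions[p])); // atomic add
        return cell.sum();
    }
}
```

With a plain `double` field instead of the adder, concurrent `+=` updates could be lost; this is precisely the write/write conflict the text describes.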
Adapting state-of-the-art algorithms to YAO while relying on its modular graph also has several advantages. First, there is no additional constraint on the application code written by the user. Second, a high-level dependency graph is directly provided by the modular graph, which enables us to avoid data-dependency analysis, to naturally obtain a coarse-grain parallelism, and to possibly scale on real-life applications with thousands of statements.
This paper extends a previous work [18] with two additional contributions. First, we introduce a new algorithm, hereafter referred to as the *coherence* algorithm, that allows us to detect inconsistencies in the user-defined YAO modular graph traversal. Second, the coherence and parallelization algorithms are tested on the European reference model for global oceanography forecasting (NEMO, Nucleus for European Modelling of the Ocean), which demonstrates that the proposed framework and algorithms are already operational.
In the following section, we will give a brief overview of the YAO framework. Section 3 introduces the coherence algorithm. Then, in section 4, we will show how the modular graph can be used to automatically and efficiently parallelize the generated code on shared memory architectures. Performance results for three actual YAO applications on a multicore CPU are detailed in section 5, including the NEMO application. Finally, in section 6 concluding remarks are presented and future work is discussed.
2 YAO overview
2.1 The modular graph
We present here the concept of a modular graph, which is fundamental in YAO, as well as the forward and backward procedures: more details can be found in [4, 5]. We first define the following terms:
- A module is an entity of computation; it receives inputs from other modules or from an external context\(^1\) and it transmits outputs to other modules or to an external context.
- A connection is a transmission of data from a module to another or between a module and an external context.
- A modular graph is a data-flow graph composed of a set of several interconnected modules; it summarizes the sequential order of the computations.
In order to perform data assimilation, at each time step a modular graph is traversed by the forward procedure and then by the backward procedure.
2.1.1 The forward procedure
The input data set of a module $F_p$ is a vector denoted $x_p$ and its output data set is a vector denoted $y_p$ (namely $y_p = F_p(x_p)$). As a consequence, a module $F_p$ can be executed only if its input vector $x_p$ has already been processed, which implies that all its predecessor modules have been executed beforehand. Thus there are only flow dependencies [12] between modules. Since the modular graph is acyclic, it is then possible to find a module ordering, i.e. a topological order, which allows us to correctly propagate the calculation through the graph. If we denote by $x$ the vector corresponding to all the graph input data, provided by the external context,$^1$ the forward procedure enables the calculation of the vector $y$ corresponding to all the graph output values. The modular graph defines an overall function $\Gamma$ and makes it possible to compute $y = \Gamma(x)$. The function $\Gamma$ has a physical meaning: it represents a direct numerical model $M$ in the YAO formalism. The forward procedure allows us to compute the outputs of the numerical model according to its inputs. The incoming connections from the external context are, for example, initializations or boundary conditions. Outgoing connections transmit their values to compute, for example, a cost function.

$^1$ An external context is an entity which initializes and retrieves the computation of certain modules.
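The forward traversal can be sketched with Kahn's topological ordering over a map of predecessor lists. This is an illustrative model (names and types are invented), not YAO's generated C++: each module is a function from the list of its predecessors' outputs to a scalar output.

```java
import java.util.*;
import java.util.function.Function;

// Forward procedure sketch: evaluate each module once all of its
// predecessors have been evaluated (topological order via Kahn).
public class ForwardSketch {
    public static Map<String, Double> run(
            Map<String, List<String>> preds,                       // module -> predecessors
            Map<String, Function<List<Double>, Double>> fns,       // module -> F_p
            Map<String, Double> externalInputs) {                  // sources' inputs x
        Map<String, Integer> remaining = new HashMap<>();
        Map<String, List<String>> succs = new HashMap<>();
        for (Map.Entry<String, List<String>> e : preds.entrySet()) {
            remaining.put(e.getKey(), e.getValue().size());
            for (String p : e.getValue())
                succs.computeIfAbsent(p, k -> new ArrayList<>()).add(e.getKey());
        }
        Deque<String> ready = new ArrayDeque<>();
        remaining.forEach((m, n) -> { if (n == 0) ready.add(m); });
        Map<String, Double> out = new HashMap<>();
        while (!ready.isEmpty()) {
            String m = ready.removeFirst();
            List<Double> xs = new ArrayList<>();
            if (preds.get(m).isEmpty())
                xs.add(externalInputs.getOrDefault(m, 0.0));       // external context
            else
                for (String p : preds.get(m)) xs.add(out.get(p));  // predecessor outputs
            out.put(m, fns.get(m).apply(xs));                      // y_p = F_p(x_p)
            for (String s : succs.getOrDefault(m, List.of()))
                if (remaining.merge(s, -1, Integer::sum) == 0) ready.add(s);
        }
        return out;                                                // y = Gamma(x)
    }
}
```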
2.1.2 The backward procedure
This procedure enables the computation of the adjoint of the cost function $J$ with respect to the control parameters. We suppose that for each module $F_p$, with an input vector $x_p$ and receiving in its output data points a “perturbation” vector $dy_p$, we can compute the matrix product $dx_p = F_p^T dy_p$, where $F_p^T$ is the transposed Jacobian matrix of the module $F_p$ calculated at point $x_p$. It is possible [5] to compute the gradient of $J$ with respect to the control parameters by traversing the modular graph in a reverse topological order and executing local computations on each module in order to compute $dx_p$. Given that an output of a module may transmit its data to multiple entries for other modules, it has been shown [4, 5] that this reversed traversal leads to a back propagation on the modular graph characterized by additions (i.e. accumulations) of several local computations. Each of these additions is computed in an intermediate step and then back propagated. Thus there are here flow and output dependencies [12].
2.1.3 YAO formalism
Running simulations or data assimilations using an operational numerical model $M$ requires the definition of a modular graph representing the sequence of the computations. A numerical model operates on a discrete grid, where the physical process is computed at each grid point $I$ and at each time step $t$. As the same phenomenon is under study at each grid point, only the modular subgraph representing a grid point is needed. YAO obtains $\Gamma$ by duplicating this subgraph for each $I$ and $t$.
If several scales (in space and time) are present in the numerical model, YAO allows some subgraphs to be duplicated for different space and time schemes. In the following, only one space/time trajectory is considered to simplify the notations.
In YAO formalism, the user must define a set of basic functions \{ $F_1, F_2, \ldots, F_k$ \} which has to be applied to each grid point $I$ and at each time step $t$. The user also has to define the dependencies between these functions. From this information, YAO generates the overall modular graph $\Gamma$. The modules of the modular subgraph $\Gamma_{I,t}$ are denoted by $F_p(I,t)$, where $I$ represents a grid point (1D, 2D or 3D), $t$ is a time step and $F_p$ a basic function. Thus, a module is the computation of the function $F_p$ at grid point $I$ and at time $t$. We denote
by $i$ ($j$ and $k$) the indices of the first axis (respectively of the second and third axis). An edge from a source module $F_s(I', t')$ to a destination module $F_d(I, t)$ corresponds to a data transmission from $F_s(I', t')$ to $F_d(I, t)$ ($s$ may be equal to $d$).
The modular graph is similar to the Expanded Dependence Graph (EDG) used for parallelism detection in nested loops [12]. The main difference is that in the EDG the nodes represent a single operation (the instance of a statement), whereas the nodes of the modular graph are a set of operations (the instance of a function composed of a set of statements), represented by the module $F_p(I, t)$. Thus, the granularity of the nodes differs. In practice the dimension of a YAO basic function depends on the application and on the user design. In general, a YAO module has dozens of statements, but in particular cases it may be much larger.
### 2.2 User specifications and code generation
This section presents two YAO directives, *ctin* and *order*. YAO automatic code generation relies on these two directives, which are part of the YAO DSL and allow us to traverse the modules $F_p(I, t)$.
#### 2.2.1 *order* and *ctin* directives
The *ctin* directive has the following syntax: “*ctin from* $F_s$ *to* $F_d$ *list of coordinates*”. Such a directive represents one edge (or connection) of the modular graph, which is then automatically replicated by YAO in space and time. *list of coordinates* represents, for a generic point $I$ and time step $t$ of the destination module $F_d$, the point $I'$ and the time $t'$ of the source module $F_s$ (with $t \geq t'$).
If $S'$ and $S$ are the spaces associated to $F_s$ and $F_d$ respectively, we denote with $L'$ and $L$ the set of axes of $S'$ and $S$. YAO allows $S'$ to be a subspace of $S$ but not $S$ to be a subspace of $S'$, meaning that $L' \subset L$. We denote with $(I, t)$ the current position of $F_d$, with $(I', t')$ the relative position corresponding to $F_s$,$^2$ and with $d$ the distance vector defined by $d = I' - \hat{I}$, where $\hat{I}$ is the projection of $I$ on the axes of $L'$. Thus, $d$ has the same dimension as $\hat{I}$.$^3$ We denote with $d_l \in \mathbb{Z}$ its component on the $l$ axis and with $d_t = t' - t$ ($\leq 0$) the delay between the time steps $t'$ and $t$. The user has to specify in the *list of coordinates* the distance vector and $d_t$ as a function of the generic point $I$ of the destination module, which is the same in all connections. Figure 1a gives an example of *ctin* directives.
Every *ctin* directive generates an edge from $F_s$ to $F_d$ labeled by the distance vector $d$ and $t' - t$. The resulting graph is a directed multigraph.$^4$
$^2$ The iteration vector $I$ can be defined on one ($I = (i)$), two ($I = (i, j)$) or three dimensions ($I = (i, j, k)$) as a function of the space. Likewise for the vector $I'$ which is $I' = (i + d_i)$, $I' = (i + d_i, j + d_j)$ or $I' = (i + d_i, j + d_j, k + d_k)$ as a function of the space $S'$.
$^3$ The distance vector $d$ has a dimension which corresponds to the number of common components between $S$ and $S'$. As an example, if $S'$ is 2D and $S$ is 3D the distance vector is 2D and is equal to $(d_i, d_j)$.
$^4$ A directed multigraph is a graph with multiple parallel edges.
Figure 1: (a) Part of the DSL used by the user with 2D space modules. The second `ctin` directive specifies the connection from $F_1$ at point $(i,j+1,t)$ to $F_2$ at point $(i,j,t)$. (b) The `order` directives indicate the ordering in which we compute the $F_p$ functions and the ordering of the grid traversal.
This multigraph represents all the dependencies between the basic functions. It corresponds to the Reduced Dependence Graph (RDG) [12] used for the automatic generation of parallelism in nested loops.$^5$ Figure 2 presents the RDG of the former example.
Since the space dimension is two, the edges are labeled by $(d_i,d_j,d_t)$ which indicates that the destination module at time $t$ and at point $(i,j)$ takes its inputs from the source module at time $t + d_t$ and point $(i + d_i, j + d_j)$ with $d_i, d_j \in \mathbb{Z}$ and $d_t \in \mathbb{Z}_{\leq 0}$.
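This edge labeling can be sketched as a tiny class mapping a destination grid point to the source point it reads from. This is an illustrative model, not the YAO DSL; the class name is invented.

```java
// A ctin edge label (d_i, d_j, d_t): the destination module at point
// (i, j) and time t reads the source module at (i + d_i, j + d_j, t + d_t).
public class CtinEdge {
    public final int di, dj, dt;
    public CtinEdge(int di, int dj, int dt) {
        if (dt > 0) throw new IllegalArgumentException("d_t must be <= 0"); // no reads from the future
        this.di = di; this.dj = dj; this.dt = dt;
    }
    // source point for destination (i, j, t)
    public int[] source(int i, int j, int t) {
        return new int[] { i + di, j + dj, t + dt };
    }
}
```

For instance, the second ctin of Fig. 1a (from $F_1$ at $(i, j+1, t)$ to $F_2$ at $(i, j, t)$) carries the label $(0, 1, 0)$.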
The YAO `order` directive allows the user to define a traversal of the modular graph following a topological order. This directive allows us to visit all the grid points of the space, and enables the generation of the corresponding nested loops. The user specifies one `order` directive for each dimension of the space. Thus, a program generated by YAO contains an outermost loop representing the time. Within this loop the user defines, thanks to the `order` directives, the different loops that allow the traversal of the space for each time step. In general, we have several ways to traverse a space. In the `order` directive, YA1 (YAO Afterward axis 1) means that we are managing the $i$ loop and we go along this axis in an ascendant way. YA2 means the same but for the $j$ axis, whereas YB1 (YAO Backward axis 1) means that we go along the $i$ axis in a descendant way. Fig. 1b gives an example of such `order` directives.

$^5$ As for the analogy between the EDG and the modular graph, the RDG has one statement per node while the YAO RDG has a basic function (a set of statements) per node.
2.2.2 Generation of the forward and backward procedures
In Fig. 3 we give the translation, performed by the YAO code generator, of the *ctin* and *order* directives given in Figs. 1a and 1b. This represents the translation, in a pseudo-code language, of the forward procedure. Each *order* directive generates one loop, one for each dimension of the space. The way we traverse the axes, ascendant or descendant, and the scheduling of the modules are both specified in the *order* directives. For each object of each module's C++ class, the local forward function (a C++ method) is called using the outputs of its predecessor modules as inputs. For each basic function, the body of the local forward function is defined by the user. It has to be noticed that all forward functions are thread-safe because they compute their results with respect to the generic grid point $I$, as shown in Fig. 3. The nested loops allow us to compute the output of the modules for all the grid points and for one time step. An overall loop, not shown in the figure, which allows us to traverse the time steps in an incremental order $t$, $t + 1$, $t + 2$, etc., encompasses all the local forward functions. The time loop may be considered as a computation barrier: at current time $t$, all the computations for times $t' < t$ are done.
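The shape of the generated loop nest can be sketched as follows, with Java standing in for the generated C++ pseudo code; the visit order is simply recorded instead of calling real forward functions, and all names are illustrative.

```java
import java.util.*;

// Loop nest sketch: an outer time loop (a barrier between time steps),
// one loop per spatial axis traversed in an ascendant way, and at each
// grid point the scheduled basic functions applied in order.
public class LoopNestSketch {
    public static List<String> traverse(int T, int NI, int NJ, List<String> schedule) {
        List<String> visits = new ArrayList<>();
        for (int t = 0; t < T; t++)              // time loop
            for (int i = 0; i < NI; i++)         // ascend axis 1
                for (int j = 0; j < NJ; j++)     // ascend axis 2
                    for (String f : schedule)    // scheduled basic functions
                        visits.add(f + "(" + i + "," + j + "," + t + ")");
        return visits;
    }
}
```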
As presented in section 2.1.2, the backward procedure traverses the modular graph in a reverse topological order. For ease of presentation we do not detail the pseudo code of the backward procedure, since it is very similar to that of the forward procedure. However, it is important to point out the addition (accumulation) in the back propagation, detailed in [4, 5] and specific to the backward procedure. This accumulation results in output dependencies which occur between two time steps. This computation is briefly explained in Fig. 4, which shows a partial graph example. The $y_p$ variables ($p \in \{1, 2, 3\}$) are the outputs of the local forward functions. The propagation allows YAO to provide the predecessor module computations to the successor modules. Conversely, the back propagation allows us to back propagate the gradient of $J$ by using the Jacobian matrices ($J_p$ in the figure) to compute the $dx_p$. The back propagation of several $dx_p$ ($dx_3$ and $dx_2$ in Fig. 4) which have the same predecessor forces the addition of the $dx_p$ (the symbol $\sum$ in the figure).
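The accumulation can be illustrated on a toy graph (the modules and Jacobians below are invented for the example, not taken from a YAO application): module $F_1$ feeds both $F_2$ and $F_3$, so during the back propagation the contributions coming from $F_2$ and $F_3$ must be summed before being chained through $F_1$.

```python
# Toy forward/backward pair illustrating the accumulation (the sum symbol
# of Fig. 4). F1 feeds both F2 and F3; all functions are scalar for clarity.
def forward(x):
    y1 = 2.0 * x       # F1
    y2 = y1 ** 2       # F2, consumes the output of F1
    y3 = 3.0 * y1      # F3, consumes the output of F1
    return y1, y2, y3

def backward(x, dy2, dy3):
    y1 = 2.0 * x
    dx_from_f2 = 2.0 * y1 * dy2    # Jacobian of F2 applied to dy2
    dx_from_f3 = 3.0 * dy3         # Jacobian of F3 applied to dy3
    dy1 = dx_from_f2 + dx_from_f3  # accumulation: F2 and F3 share predecessor F1
    return 2.0 * dy1               # chain through F1's Jacobian

print(backward(1.0, 1.0, 1.0))  # 14.0
```

In a parallel setting, the `+` performing the accumulation is precisely the operation that can produce the write/write conflicts discussed in section 4.3.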
3 Coherence in the computational ordering
Figure 4: Addition, represented by the symbol $\sum$, in the back propagation of the backward procedure. These two connections represent a data transfer between two time steps. The two modules $F_3$ and $F_1$ perform the transfer towards $F_1$ at time step $t - 1$. This partial graph example is given by the YAO directives shown in Fig. 1.

The ctin and the order directives are the basis of the YAO DSL. The traversal defined using order directives can sometimes be difficult to get right, and in real-world YAO applications the user can make mistakes in the definition of such an ordering. Defining a wrong traversal implies that when YAO schedules a module for computation its inputs are not ready, because its predecessors have not been computed yet. These mistakes directly affect the numerical results of the data assimilation process.
The coherence of a ctin directive is defined as follows.
**Definition 1** Assume that $F_s(I', t') \rightarrow F_d(I, t)$ represents a ctin directive. This ctin is said to be coherent if, for each $(I, t)$, the order directives ensure that the basic function $F_s$ has already been computed at $(I', t')$. The connection is said to be incoherent otherwise.
In this section we present the rules which allow us to test the coherence of a ctin directive. The case where the two basic functions $F_s$ and $F_d$ are computed at the same time step ($t' = t$) from two different outermost loops (i.e. from two different nests of order directives), with respect to the $L'$ axes,\(^6\) represents the simplest case:
**Rule 1** Assume that $F_s(I', t') \rightarrow F_d(I, t)$, with $t' = t$, is a connection between the basic functions $F_s$ and $F_d$. We suppose that $F_s$ and $F_d$ belong to two different outermost loops. If the outermost loop containing $F_s$ is written before the outermost loop containing $F_d$, then the ctin directive is coherent; otherwise the ctin directive is incoherent.
The coherence verification is more difficult as soon as the two basic functions $F_s$ and $F_d$ are in the same outermost loop. Given a set of order directives, we introduce in section 3.1 two rules to determine the coherence of a ctin directive. Then, in section 3.2 we present a general verification algorithm.
3.1 Verification rules
**Rule 2** Assume that $F_s(I', t') \rightarrow F_d(I, t)$ is a connection between two basic functions contained in an outermost loop $l \in L' \cup \{t\}$, with distance $d_l \neq 0$.
If $d_l < 0$ (respectively $d_l > 0$) and the loop $l$ is ascendant (respectively descendant), then this connection is coherent. Conversely, if $d_l < 0$ (respectively $d_l > 0$) and the loop $l$ is descendant (respectively ascendant), then this connection is incoherent.
**Justification** Suppose the loop $l$ is ascendant. Consider a point $P = (I, t)$ with $I \in S$ and denote by $l_P$ its component relative to the $l$ loop. Denote also by $l_{P'}$ the component relative to the $l$ loop of the point $P' = (I', t')$ with $I' \in S'$. If $l$ is the outermost loop then, at the moment of the computation of $P$, all the points with $l_{P'} < l_P$ have already been computed, due to the ascendant direction of the loop. Indeed, when the nested loops compute the iteration $l_P$, all the instructions which correspond to an iteration vector $P'$ with a component $l_{P'}$ lower than $l_P$ have already been computed by the loop $l$. If $d_l < 0$, then the iteration vector $P'$ has $l_{P'} = l_P + d_l < l_P$, which demonstrates the coherence of the connection. Conversely, if $d_l > 0$, then $F_s(P')$ has not been computed yet and the connection is incoherent. The case of a descendant loop is justified similarly.
**Remark 1** Given that the outermost loop concerns the temporal trajectory, rule 2 points out that if $d_t = t' - t < 0$ then the ctin directive is coherent for any set of order directives. The verification process must start by testing the coherence with respect to this outermost loop (time). Since in YAO the time loop is always ascendant and the delays satisfy $d_t \leq 0$, if a ctin directive has $d_t \neq 0$ then the coherence condition always holds. For ease of presentation, when we refer to $F_s(I)$ we suppose that the time step is $t$.
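Rule 2 reduces to a sign test on $d_l$ against the loop direction. A minimal sketch (a hypothetical helper, not part of YAO):

```python
# Sketch of rule 2: coherence of a connection with d_l != 0 along the
# outermost loop axis l, given the direction of that loop.
def rule2_coherent(d_l, direction):
    assert d_l != 0
    if direction == 'ascendant':
        # The source iteration l_P + d_l must already have been computed.
        return d_l < 0
    return d_l > 0  # descendant loop

# Fig. 5a: outermost loop i ascendant, connections with d_i = -1 -> coherent.
print(rule2_coherent(-1, 'ascendant'))   # True
# Fig. 5b: outermost loop j descendant, a negative distance -> incoherent.
print(rule2_coherent(-1, 'descendant'))  # False
```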
Figure 5a illustrates a traversal of a 2D space, where basic functions $A$ and $B$ are defined. The point $(i, j)$, circled in the figure, refers to the current computation point. The grid points computed in the previous iterations are colored in grey. The arrows are all coherent connections with respect to these specific nested order directives. These are the connections $F_s(i-1, j+1) \rightarrow F_d(I)$, $F_s(i-1, j) \rightarrow F_d(I)$ and $F_s(i-1, j-1) \rightarrow F_d(I)$. The three elements which ensure a coherent computation are the outermost loop (the $i$ axis), the direction YA1 (ascendant) and the sign ($-$) of $d_i$, as shown by rule 2. Note that $F_s$ and $F_d$ may be either $A$ or $B$.\(^7\)
Figure 5b illustrates another example of a 2D traversal with a descendant outermost loop $j$; the arrows are all incoherent connections.
---

6$L'$ is the set of axes of the space $S'$.

7$A(i-1, j+1) \rightarrow A(I)$, $B(i-1, j) \rightarrow A(I)$ and $B(i-1, j-1) \rightarrow B(I)$ are also coherent connections.

Figure 5: Traversal given by two nested order directives on a $5 \times 5$ space. Grid point $I = (i,j) = (3,3)$ is the current iteration point. The grey and white squares represent the computed and the not yet computed grid points, respectively. The arrows represent coherent connections (a) and incoherent connections (b).

**Rule 3** Assume that $F_s(I', t') \rightarrow F_d(I, t)$ is a connection between two basic functions contained in an outermost loop $l \in L' \cup \{t\}$, with distance $d_l = 0$. In order to test the coherence we have to remove the outermost loop and keep the rest of its instructions (loops and basic functions). We may have two cases for the remaining instructions:
- $F_s$ and $F_d$ are in the same embedded loop: we apply rule 2 or rule 3 recursively.
- $F_s$ and $F_d$ are in two different instructions: we apply rule 1.
Justification In the case $d_l = 0$, the basic functions $F_s$ and $F_d$ are computed in the same iteration of the loop traversing the $l$ axis. This loop computes one or several instructions, each of which is either the computation of a basic function or a loop (nested in the former $l$ loop). Thus, if the basic functions $F_s$ and $F_d$ are computed within the same loop nested in $l$, then we must verify the coherence with respect to this inner loop; for this reason we apply either rule 2 or rule 3. However, if the basic functions $F_s$ and $F_d$ are computed by two different instructions of the $l$ loop, rule 1 is applied, i.e. the instruction containing $F_s$ must be computed before the one containing $F_d$.
Example 1 Test the coherence of connection $A(i, j + d_j, t) \rightarrow B(I, t)$, with $d_j \in \{-1, 0, +1\}$, given the order directives on the left side:
We apply rule 3. After removing the outermost loop on $t$ and then the one on $i$, we obtain the directives on the right side. Since $A$ and $B$ are contained in two different outermost loops and since $A$ precedes $B$, the connection is coherent (rule 1).

Figure 6: Example of order directives defined by the user. The basic function $E$ is applied to a three dimension space, whereas $B$, $C$ and $D$ are applied to a two dimension space and $A$ to a one dimension space.

Figure 7: Tree representation of the order directives of Fig. 6. Leaf nodes and internal nodes represent basic functions and loops, respectively. Internal nodes are characterized by $YXl$ ($X \in \{A, B\}$, $l \in \{1, 2, 3\}$) and by their child number.
3.2 Coherence algorithm
The overall verification process consists in testing the coherence of each ctin directive. The algorithm parses each connection independently. Taking into account remark 1, we know that every ctin directive which verifies $d_t \neq 0$ is coherent. Thus we limit the coherence process to the verification of the ctin directives which verify $d_t = 0$. To introduce the coherence algorithm we present in Fig. 6 an example of order directives.
In this example the user specifies two nested order directives. The forward procedure starts with the computation of the nested orders containing $A, B$ and $C$; then the second nested orders, which contain the basic functions $D$ and $E$, are computed. Each nest of order directives is composed of an outermost loop, described by the parameter $YXl$, with $X \in \{A, B\}$ and $l \in \{1, 2, 3\}$ standing for $\{i, j, k\}$. The body of an outermost loop is composed of three types of instruction lists: (i) a list of loops; (ii) a list of basic functions; (iii) a list composed of both loops and basic functions.
As in compiler theory [19], it is possible to organize the order directives as an Abstract Syntax Tree (AST). Figure 7 shows the tree corresponding to the example of Fig. 6. The root children are the outermost order directives (two in the example); these nodes correspond to level 1 (the root being at level 0). In general, each node of the tree corresponds to one instruction. This instruction may be either a loop, corresponding to an order directive, or the computation of a basic function. A node which computes a basic function has no children and is represented by a leaf of the tree. A node which corresponds to a loop has as many children as the number of instructions contained in its loop. These children are placed at the level following that of the parent (parent level plus one).

Each internal node (which is neither a leaf nor the root) represents a loop defined by its axis and its direction. The children of a node are numbered in the order of the user declaration. We denote this number as child_number. An internal node of the tree also contains the parameter $YXl$, which specifies the loop axis and the direction. A leaf contains the basic function name.

With this representation, if we want to characterize the nested loops which enclose the calculation of a basic function, we only have to determine the path from the root to the leaf which represents the basic function. The internal nodes of this path represent the nested loops which allow the computation of the basic function. If we do not consider the root, the first node of the path corresponds to the outermost loop and the last node corresponds to the basic function. Thus, for each basic function, we can create a list which represents the path, with at most three intermediate internal nodes. Each internal node contains the following fields: (i) child_number, the left-to-right ordering of the children of a parent node; (ii) axis, the loop index (axis $\in \{i, j, k\}$); (iii) direction, ascendant or descendant. The axis and the direction are represented in Fig. 7 by the parameter $YXl$. Thanks to the rules introduced in the previous sections and to the tree structure, we can verify the coherence of a particular ctin directive using Algorithm 1. We explain the general idea of this algorithm through some examples.
ALGORITHM 1: Coherence verification of a given ctin directive with respect to the order directives.

Require: Denote by $F_s(I', t') \rightarrow F_d(I, t)$ the connection which represents the ctin directive. $d_l$ is the distance of the vector $d = I' - I$ with respect to the $l$ axis, and $d_t = t' - t$.
Ensure: true if the ctin is coherent, false otherwise.
1. if $d_t < 0$ then
2. return true
3. end if
4. Find the two paths $P_s$ and $P_d$ from the root to the leaves $F_s$ and $F_d$.
5. Let $n$ be the minimum length of $P_s$ and $P_d$.
6. for $m = 1$ to $n$ do
7. Determine at the level $m$ the two nodes $N_s$ and $N_d$ of the tree which are located on the two paths $P_s$ and $P_d$ respectively. {$N_s$ and $N_d$ are either the same node or two siblings.}
8. if child_number of $N_s$ < child_number of $N_d$ then
9. return true
10. end if
11. if child_number of $N_s$ > child_number of $N_d$ then
12. return false
13. end if {At this point the two nodes are identical.}
14. Assume that $l$ is the axis corresponding to the common loop.
15. if $d_l \neq 0$ then
16. return the result of rule 2.
17. end if
18. $m \leftarrow m + 1$ {Continue to the successive level, i.e. apply rule 3.}
19. end for
20. return false

Example 2 In Fig. 7, we consider a connection $B(i - 1, j) \rightarrow C(i, j)$, where $d_i = -1$ and $d_j = 0$, and we check its coherence. Figure 8a shows the paths $P_b$ and $P_c$ for the basic functions $B$ and $C$: they have three levels ($n = 3$). At the first iteration $m = 1$, the child_number, the axis and the direction of nodes $N_b$ and $N_c$ are 1 and YA1. The conditions of lines 8 and 11 are false: we are in the same loop. Since direction = ascendant and $d_i < 0$, rule 2 at line 16 gives that the ctin is coherent, and the algorithm ends returning true.

Example 3 We now consider a connection $A(i) \rightarrow B(i, j)$, and we test its coherence. This is the case of data transfers between computational spaces which have a relation of projection: $A$ and $B$ are basic functions applied to 1D and 2D spaces respectively. $P_s$ and $P_d$ have two and three levels respectively (Fig. 8b). At iteration $m = 1$, the basic functions are in the same loop (the conditions of lines 8 and 11 are false) and, since $d_i = 0$, $m$ is incremented. At iteration $m = 2$, the condition at line 8 is true, because we have on one hand the node of the basic function $A$, and on the other hand the $j$ loop containing $B$. We test rule 1, i.e. the precedence of $A$ with respect to $B$, which is given by the child_numbers of $N_s$ and $N_d$. The algorithm returns true.
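Algorithm 1 can be transcribed almost directly into executable form. The sketch below uses a deliberately simplified path representation, a list of (child_number, axis, direction) tuples with the outermost loop first; these structures are illustrative, not YAO's internal ones:

```python
# Sketch of Algorithm 1. Each path lists the nodes from the outermost loop
# down to the basic function as (child_number, axis, direction) tuples;
# 'A' = ascendant, 'B' = descendant. dist maps each axis to d_l = I'_l - I_l.
def coherent(path_s, path_d, dist, d_t=0):
    if d_t < 0:                    # remark 1: earlier time step, always coherent
        return True
    n = min(len(path_s), len(path_d))
    for m in range(n):
        (cs, axis, direction), (cd, _, _) = path_s[m], path_d[m]
        if cs < cd:                # rule 1: F_s's instruction is written first
            return True
        if cs > cd:
            return False
        d_l = dist.get(axis, 0)    # identical nodes: a common loop on axis l
        if d_l != 0:               # rule 2
            return d_l < 0 if direction == 'A' else d_l > 0
        # d_l == 0: descend one level (rule 3)
    return False

# Example 2 revisited: B(i-1, j) -> C(i, j) with both functions under the
# same YA1/YA2 nest (hypothetical paths mimicking Fig. 7).
path_b = [(1, 'i', 'A'), (1, 'j', 'A'), (2, None, None)]
path_c = [(1, 'i', 'A'), (1, 'j', 'A'), (3, None, None)]
print(coherent(path_b, path_c, {'i': -1, 'j': 0}))  # True, by rule 2
```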
3.3 Results of the coherence algorithm
The coherence algorithm solves the problem of verifying that a given ctin directive is correctly computed using the user-defined graph traversal. This algorithm is applied to each user-defined connection. If all the returned values are true, we can ensure that the ctin and order directives are written coherently. The coherence algorithm has been implemented and tested on both fictitious and actual YAO applications. It plays an important role during the development of a YAO application with hundreds of ctin and order directives, reinforcing the robustness of the variational data assimilation process. The tests on actual applications have led to the detection of a couple of real incoherences which had never been detected by human observation. The detection of these incoherences has led to an improvement in the precision of the numerical results of these YAO applications.
The next section deals with the automatic parallelization of the forward and backward procedures based on coherent directives. The coherence algorithm itself is excluded from the parallelization process. Indeed, it does not represent a performance bottleneck: it is able to analyze, at generation time, thousands of directives in a matter of milliseconds.
4 Algorithm for automatic parallelization
4.1 Parallelization of the forward procedure
In section 2 we have noted an interesting similarity between the YAO formalism and the theories of compilation and of automatic parallelization of nested loops [12]. Thanks to this similarity we can adapt these techniques and algorithms to the YAO automatic code generator. We thus propose here to integrate and adapt such algorithms in order to automatically parallelize the forward procedure generated by YAO on parallel shared memory architectures with multi-thread programming. No communication is required and we only have to maximize the number of parallel loops. Because of the strong time dependencies in all data assimilation applications, the temporal loop is not parallelized and we focus on data parallelism at each time step. The domain decomposition between threads is performed as a 1D block distribution on the space, and we rely on a static load balancing since in all the current YAO applications the computation load of each module is constant at each grid point. Our goal is thus to label, as "parallel" or "not parallel", each outermost order directive so that the corresponding loop can be generated as parallel or sequential in the final code (thanks to OpenMP directives). In order to maintain the coherence hypothesis of one given nest of order directives, we opted not to change or invert the order defined by the user. However we can still use techniques such as loop distribution, possibly followed by loop fusion, in order to detect the maximum available parallelism and to reduce the number of synchronization points.

Figure 9: RDG obtained by simplification of the RDG of Fig. 2. The edges are numbered from 1 to 5.

Figure 10: Flow dependencies between three threads T₁, T₂ and T₃. Same edge numbers as in Fig. 9.
Since the temporal loop is not considered in the parallelization algorithm, the edges whose distance $t' - t$ is negative can be removed from the RDG. The remaining graph is shown in Fig. 9; we denote it $\text{RDG}$. It is obtained by removing all $d_t = -1$ connections and by writing only the signs of the distance vector components on the edges. Thus $(0, +)$ means a distance vector equal to $(0, +1)$.
Considering a nest of order directives which has $l$ as outermost axis and a connection from $F_s$ to $F_d$, we consider a connection as critical with respect to these nested order directives if:

- $F_s$ and $F_d$ are contained in the nested directives,
- $d_t = 0$ and $d_l \neq 0$.
The analysis of the $\text{RDG}$ highlights the critical connections which prevent parallelization because of flow dependencies between threads, as presented in Fig. 10. The connections from $F_1$ to $F_2$ and from $F_3$ to $F_2$ (edges #2 and #4 in Figs. 9 and 10) result in two flow dependencies between the couples of threads $(T_1, T_2)$ and $(T_2, T_3)$ because $d_l \neq 0$ (in this example $l$ is the $i$ axis). This is not the case for connections #1, #3 and #5, because the two corresponding grid points belong to the domain computed by a single thread, as shown in Fig. 10. The connection #2 is not critical, since $F_3$ and $F_2$ are not in the same nested loops (see Figs. 1b and 3). Therefore, in this example only the connection #4 is critical.
For the analysis of one outermost loop $l$ composed of the basic functions $F_1, F_2, \ldots, F_r$, we consider the subgraph $G_l$ of the $\text{RDG}$ limited to these $r$ basic functions and to the edges between them. On the edges of this subgraph we retain only the information concerning the distance $d_l$: its sign, $-$ or $+$, if $d_l \neq 0$, and $0$ if $d_l = 0$.\(^8\) The analysis of $G_l$ allows us to decompose the loop into several loops preserving the computation coherence hypothesis. Taking into account that the forward functions are thread-safe, we can apply the Allen-Kennedy algorithm [14] to decompose the loop into parallel loops as follows.
- Calculate the Strongly Connected Components (SCCs) of $G_l$.
- Consider the reduced Directed Acyclic Graph (DAG), denoted by $G_l/SCC$, obtained by shrinking each SCC down to a single vertex and by drawing one, and only one, edge between two SCCs if there is at least one edge from the first to the second in the graph $G_l$. If at least one of the edges in $G_l$ which connect these two SCCs is labeled by a non-zero sign (that is to say either $-$ or $+$), then label the corresponding edge in $G_l/SCC$ by this value. Otherwise, if all the labels are 0, then label the corresponding edge in $G_l/SCC$ by 0.
- Sort the $G_l/SCC$ graph in topological order and enumerate all the SCCs following this order. For each SCC, generate an $l$ loop which computes its basic functions.
This decomposition is a maximum loop distribution of the initial loop; in other words, we cannot decompose it further without breaking the coherence hypothesis.
We can analyze each SCC loop in order to see whether we can perform a domain decomposition on the $l$ axis. For a particular SCC, we consider all the edges of the graph $G_l$ between two basic functions belonging to this SCC. If at least one of these edges is labeled by $+$ or $-$, namely if $d_l \neq 0$, the SCC is considered not parallelizable. The loop is parallelizable if all these edges are labeled by 0, in other words if it does not contain any flow dependency between threads. We label by $p$ and $\bar{p}$ the loops which are parallelizable and not parallelizable, respectively. Such a maximum loop distribution gives the largest number of parallel loops, and the critical connections of the RDG have been minimized. An example of this algorithm is presented in section 4.4.
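The decomposition step can be sketched with a standard SCC computation (the graph, distance labels and Kosaraju-style implementation below are illustrative; YAO's internal data structures differ):

```python
from collections import defaultdict

# Sketch of the loop-distribution step: compute the SCCs of G_l (Kosaraju's
# algorithm) and label each SCC as parallelizable or not. Edges carry the
# distance d_l along the outermost axis; the example graph is invented.
def sccs(nodes, edges):
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v, _ in edges:
        adj[u].append(v)
        radj[v].append(u)
    seen, order = set(), []

    def dfs(u, g, out):
        stack = [(u, iter(g[u]))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(g[v])))
                    break
            else:
                stack.pop()
                out.append(node)   # post-order

    for u in nodes:
        if u not in seen:
            dfs(u, adj, order)
    seen, comps = set(), []
    for u in reversed(order):      # yields the SCCs in topological order
        if u not in seen:
            comp = []
            dfs(u, radj, comp)
            comps.append(frozenset(comp))
    return comps

def parallelizable(comp, edges):
    # Parallel iff every edge internal to the SCC has distance 0
    # (no flow dependency between threads).
    return all(d == 0 for u, v, d in edges if u in comp and v in comp)

edges = [('F1', 'F2', -1), ('F2', 'F3', 0), ('F3', 'F2', -1)]
comps = sccs(['F1', 'F2', 'F3'], edges)
print([(sorted(c), parallelizable(c, edges)) for c in comps])
# [(['F1'], True), (['F2', 'F3'], False)]
```

Each SCC then becomes one generated $l$ loop, emitted in the topological order returned by `sccs`.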
4.2 Reducing synchronization points
In the previous section, we have applied the Allen-Kennedy algorithm to YAO thanks to the analogies between the EDG and the modular graph. This algorithm enables us to automatically label the SCCs as parallel or not, resulting in a maximum loop distribution. With regard to performance, this loop distribution is not the best solution because it increases the number of synchronization points. Following Kennedy-McKinley [13], it is possible to propose a loop fusion algorithm that reduces the number of synchronization points. As $G_l/SCC$ is a DAG, we can reorganize the SCCs in levels. The levels are numbered from $k = 1$ to $k = M_{level}$, where $M_{level}$ is the maximum number of levels. The first level, $k = 1$, contains the SCCs without predecessors; the predecessors of an SCC at level $k$, with $k > 1$, are located in the preceding levels $k'$ ($k' \leq k - 1$), with at least one predecessor located at level $k - 1$. Because of the level reorganization there is no edge between two vertices at the same level. For each level it is then possible to merge all vertices labeled as $p$ and, separately, all vertices labeled as $\bar{p}$. We obtain a reduced graph with the same number of levels but with one or two vertices per level. If a level contains two vertices, they are necessarily labeled as $p$ and $\bar{p}$.

---

8All the distances $d_l$ in $G_l$ are either $\leq 0$ if the $l$ loop is ascendant, or $\geq 0$ if the $l$ loop is descendant, as they correspond to the same outermost loop.

ALGORITHM 2: Fusion with levels approach.
1: Organize the graph $G_l/SCC$ in $M_{level}$ levels. The vertices are labeled by either $p$ or $\bar{p}$.
2: Traverse the graph and for each level merge the vertices of the same label. Update the edges and their labels ($0$, $-$ or $+$).
3: $k := 1$
4: while $k < M_{level}$ do
5: Consider two consecutive levels $k$ and $k + 1$:
6: if there are two vertices labeled by $p$ and there is no critical edge between them then
7: Merge the two into one vertex labeled by $p$
8: else
9: if there are two vertices labeled by $\bar{p}$ then
10: Merge the two into one vertex labeled by $\bar{p}$
11: end if
12: end if
13: if a fusion has been performed then
14: Reorganize the new reduced graph in levels and update $M_{level}$
15: else
16: $k := k + 1$
17: end if
18: end while
The fusion process can be extended to the vertices located at two consecutive levels as follows: for all levels $k$ and $k + 1$,
- merge two vertices labeled as $\bar{p}$: this gives a new $\bar{p}$ vertex;
- merge two vertices labeled as $p$ which are not connected by a critical edge (i.e. all the edges between them have $d_l = 0$): this gives a new $p$ vertex.
The fusion process between different levels may modify the vertex level repartition. However, the modification can only affect some levels: it does not impact the levels which precede $k$. Algorithm 2 manages the fusion of the vertices with this level technique while maintaining the highest degree of parallelism. The final reduced graph is treated by YAO, which generates code according to the following steps.
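The level reorganization at the heart of Algorithm 2 can be sketched as follows (the vertex names, labels and DAG are hypothetical; they mimic the shape of the marine acoustics example of section 4.4):

```python
# Sketch: assign each vertex of the reduced DAG G_l/SCC to a level; a vertex
# goes to the first level after all of its predecessors. Illustrative only.
def levels(vertices, edges):
    preds = {v: set() for v in vertices}
    for u, v in edges:
        preds[v].add(u)
    level, assigned, k = {}, set(), 1
    while len(assigned) < len(vertices):
        frontier = [v for v in vertices
                    if v not in assigned and preds[v] <= assigned]
        for v in frontier:
            level[v] = k
        assigned |= set(frontier)
        k += 1
    return level

verts = ['1', '2', '3', '4', '5', '6']
edges = [('1', '2'), ('1', '3'), ('2', '4'), ('3', '4'), ('4', '5'), ('5', '6')]
print(levels(verts, edges))
# {'1': 1, '2': 2, '3': 2, '4': 3, '5': 4, '6': 5}
```

Vertices 2 and 3 land on the same level, so if both are labeled $p$ and no critical edge joins them, step 2 of Algorithm 2 merges them into a single vertex.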
• Sort the final reduced graph in topological order and enumerate all its vertices following this order.
• Write one nest of order directives for each vertex. These order directives have the same axes as those provided by the user and contain the basic functions merged in the vertex.
• Generate OpenMP directives for vertices labeled by $p$.
An example of this algorithm is presented in section 4.4.
4.3 Parallelization of the backward procedure
The same parallelization algorithm can also be applied to the backward procedure, which results in a complete parallelization of all computations at each time step. The total elapsed time of a YAO application is mainly composed of the forward and backward elapsed times, so parallelizing these two procedures means that most of the application has been optimized. Profiling measurements on some YAO applications showed that 99 percent of the total elapsed time is generally spent in these procedures.
The RDG used for the backward procedure is the same as for the forward procedure, but with the arrows reversed with respect to the original RDG. These two RDGs have the same SCCs. As the outermost loops also have the same axis, the method used to parallelize the forward procedure is also valid to parallelize the backward procedure. Likewise, it is easy to see that the rules used to merge loop blocks introduced previously remain valid for the backward procedure. Thus, the parallel order directives obtained by the decomposition/merging methods defined for the forward procedure can be fully retained for the backward procedure.
However the parallelization of the backward procedure has a further difficulty in terms of thread synchronization. This synchronization is required by the addition (accumulation) presented in section 2.2.2. As shown in Fig. 4, in a parallel context this addition may result in a data race condition (write/write conflicts) if the back propagations of $dx_p$ are performed concurrently by several threads. Such conflicts may occur between two time steps. Hence, the analysis of the RDG is not sufficient to detect all the data race conditions of the backward procedure.
This issue can be solved with OpenMP atomic directives, which ensure that each addition is performed atomically. However these atomic instructions are costly, as well as numerous in the backward parallel code, which prevents us from obtaining good parallel performance in practice. In order to avoid these OpenMP atomic directives, we rely on the distance vectors of the RDG to determine the maximum $|d_l|$, denoted $d_{l_{\text{max}}}$.
In the 1D block decomposition, we can now further decompose each thread domain into three subdomains: two border subdomains with $d_{l_{\text{max}}}$ grid points in the parallel dimension, and one inner subdomain with usually much more than $d_{l_{\text{max}}}$ grid points in the parallel dimension. An example with $d_{l_{\text{max}}} = 1$ is presented in Fig. 11. Data race conditions are now avoided by ensuring that all threads compute the three subdomains in the same ordering. OpenMP barrier directives are required between each subdomain computation.
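The subdomain split can be sketched as simple index arithmetic (a simplified illustration; in the actual generated code, OpenMP barrier directives separate the three phases):

```python
# Sketch: split a thread's 1D block [start, end) into a left border, an
# inner part and a right border, each border being d_max cells wide, so
# that accumulations into neighbouring blocks never race. Illustrative only.
def subdomains(start, end, d_max):
    left = range(start, min(start + d_max, end))
    right = range(max(end - d_max, left.stop), end)
    inner = range(left.stop, right.start)
    return left, inner, right

left, inner, right = subdomains(0, 10, 1)
print(list(left), list(inner), list(right))
# [0] [1, 2, 3, 4, 5, 6, 7, 8] [9]
```

Since all threads compute the three phases in the same order, two threads are never simultaneously writing within $d_{l_{\text{max}}}$ cells of a shared block boundary.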
Taking all this into account, the overall parallelization algorithm ensures the parallelization of all the computations performed by a YAO generated application. It provides a domain decomposition with respect to the outermost loop $l$, which can then be automatically parallelized in the final generated code thanks to OpenMP directives. Furthermore, if a multi-level parallelization is desired, it is possible to apply the same algorithm to each subloop. We emphasize that the parallel code generated by YAO respects the order and ctin directives, which implies that the result of the parallel code is the same as that of the sequential code.
4.4 Marine acoustics example
This section presents an example of the decomposition algorithm on a 2D modular graph taken from an actual YAO application. The Marine acoustics example has a small number of functions $F_i$, which allows us to easily show the evolution of the RDG. We use the same function names as [7, 6]. This YAO application deals with marine acoustics and allows us to assimilate actual observations of acoustic pressure in order to retrieve some geoacoustic parameters like celerity, density and attenuation. In [7] the basic functions are denoted by $n(z), C, B, \text{bet}, \text{gam}, R, X_t, \psi$ and $\psi fd$. To make it simpler we denote them by $F_1, \ldots, F_9$ respectively. Figure 12 shows the RDG composed of $r = 9$ basic functions and the edges labeled with the coefficient signs of the ctin directives. In this figure the SCCs are outlined by the dashed lines and numbered from 1 to 6.
The order directives specified by the user are given in Fig. 13. In this case, the outermost loop is related to the ascendant $i$ axis. After computing the $G_i/SCC$ graph, we label each vertex and we proceed with the level reorganization, as presented in Fig. 14, where $M_{level}$ equals 5; a single circle denotes a parallelizable vertex ($p$) and a double circle a non parallelizable vertex ($\bar{p}$). Fig. 16 shows the fusion of the vertices 2 and 3, labeled by $p$, into a new vertex called 2,3 with the same label. This is done in the initialization phase of the algorithm (line 2). Then the vertices 1 and 2,3 can be merged into a new vertex called 1,2,3 which is parallel too. These two vertices are located on levels $k = 1$ and $k = 2$. A level reorganization reduces $M_{level}$ to 4. The same operation is done on the vertices 1,2,3 and 4, followed again by a level reorganization ($M_{level}$ reduced to 3). The topological order is then: $\{1,2,3,4\}$, $\{5\}$, $\{6\}$ as shown in Fig. 17. The final scheduling respects the ordering given by the user and corresponds to: $[F_1\ F_3\ F_2\ F_5\ F_4]$, $[F_6\ F_7\ F_8]$, $[F_9]$, i.e. $[n(z)\ B\ C\ \text{gam}\ \text{bet}]$, $[R\ X_t\ \psi]$, $[\psi fd]$. The final decomposition of the order directives is given in Fig. 15. With the keywords parallel and non parallel, this figure outlines the outermost loops (order directives) that the algorithm has recognised as parallel or not.
5 Performance results
We present the performance results of the parallel code generated by YAO (changelist 613 [11]) for both simple and complex actual applications of data assimilation. Experiments are performed on a server at Polytech Paris-UPMC (France), composed of one AMD Magny-Cours Opteron 6168 processor and 16 GB of memory. This processor has 12 cores running at 1.9 GHz which have private L1/L2 (64KB/512KB) caches and share two 6MB L3 caches. All computations are performed in double precision.
Figure 14: $G_i/SCC$ where the double circle represents a non parallelizable vertex.
Figure 15: order directives recomputed by the algorithm for the Marine acoustics example.
Figure 16: Fusion of the vertices 2 and 3 in a new $p$ vertex called 2,3.
Figure 17: Fusion of the vertices 1,2,3 and 4 in a new $p$ vertex called 1,2,3,4.
5.1 Simple data assimilation applications
We focus here on two simple, but actual, data assimilation applications: the Shallow-water and the Marine acoustics applications mentioned before.
The RDG of the Shallow-water application is composed of 6 SCCs (each SCC contains one basic function); see [4, 5] for more details. The parallelization algorithm finds that all SCCs are parallelizable. Figure 18 shows the elapsed times and the parallel speedups for an increasing number of cores (with one OpenMP thread per core) and for different computational space sizes, with both OpenMP atomic directives and our subdomain decomposition. The data race conditions in the backward procedure are more efficiently avoided with our subdomain decomposition, which clearly offers better performance than the atomic directives. We emphasize that the OpenMP code automatically generated by YAO is equivalent to a (non-trivial) manual parallelization and offers good speedups (up to 9.4 on 12 cores). Moreover, for a fixed number of cores, the speedup increases with the computational space size, since a larger space increases the computation grain of each thread.
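To illustrate why the subdomain decomposition removes the need for atomic directives, here is a simplified 1-D sketch of the idea (our own simplification, not YAO's actual code; YAO derives the decomposition from the modular graph): each thread's slice is split into two borders and an inner part, with the borders at least as wide as the stencil halo, so the scatter-style updates of the backward procedure issued from inner cells can never touch another thread's cells.

```python
def decompose(start, end, halo):
    """Split a thread's slice [start, end) into (left border, inner, right
    border).  An update issued from an inner cell i touches at most cells
    i-halo .. i+halo, which all stay inside [start, end), so the inner parts
    of different threads can be processed concurrently without atomics; the
    borders are handled in separate, suitably ordered phases."""
    assert end - start >= 2 * halo, "slice too small for this halo width"
    return (range(start, start + halo),
            range(start + halo, end - halo),
            range(end - halo, end))
```

For example, a thread owning cells 0..9 with a halo of 2 keeps cells 2..7 as its race-free inner part.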
The performance results for the Marine acoustics application are very different. In section 4.4 we have shown that the parallelization algorithm does not parallelize the whole RDG: three modules, which unfortunately contain most of the computation, are excluded from the parallel region. Figure 19 shows the elapsed times and the parallel speedups, as well as the theoretical maximum speedup according to Amdahl’s law for this application. The parallel speedup is very limited, but the code generated by YAO captures most of the speedup available in this application. Again, the performance gain increases with the computational space size.
5.2 A complex data assimilation application
We now focus on the much more complex NEMO application, which requires a greater number of modules. NEMO [10] is a state-of-the-art complete three-dimensional ocean modeling framework based on the finite difference approximation of the Navier-Stokes equations. NEMO is used by a large community: 240 projects in 27 countries (14 in Europe, 13 elsewhere), and its evolution and reliability are controlled by a European consortium⁹. The GYRE configuration of NEMO is considered in this work. In this configuration, the dimension of the computational space is fixed at $32 \times 22 \times 31$ for each time step. The YAO implementation of this numerical model involves 82 modules that are computed within 11 nested loops. Among these 11 loops, 2 loops (containing 1 module each) are excluded from the parallel region and represent 2.1% of the serial execution time. 80 out of the 82 modules are thus parallelized by YAO. Due to the limited dimensions of the computational space of the GYRE configuration,
⁹http://www.nemo-ocean.eu/
parallel performance tests were performed only up to 8 cores. We use here our subdomain decomposition in order to obtain the best parallel speedups.
Figure 20 shows the elapsed times and the parallel speedups, as well as the theoretical maximum speedup according to Amdahl’s law, for this NEMO application as generated by YAO with OpenMP. Thanks to YAO, we automatically obtain good parallel speedups: up to 5.71 on 8 cores. According to Amdahl’s law, this represents 81.8% of the maximum theoretical speedup (namely 6.98) available on 8 cores for this complex, real data assimilation application.
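These figures follow directly from Amdahl's law: with a serial fraction $s$ and $n$ cores, the maximum speedup is $1/(s + (1-s)/n)$. Plugging in the 2.1% serial part reported above and $n = 8$ reproduces the quoted numbers (up to rounding):

```python
def amdahl_speedup(serial_fraction, cores):
    # Amdahl's law: the serial fraction runs at full length,
    # the remainder is divided evenly among the cores.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

s_max = amdahl_speedup(0.021, 8)   # theoretical maximum, about 6.98
efficiency = 5.71 / s_max          # measured 5.71x is about 81.8% of it
```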
6 Conclusion and perspectives
In this paper we have shown how the modular graph formalism of YAO allows us to address some important automatic generation tasks. During the development of a new YAO application, writing the order directives is a costly phase. The coherence algorithm allows the user to speed up this development. We have highlighted some rules which may, in the future, open the way to a completely automatic generation of the order directives. Moreover, the user-defined order directives are important from a performance point of view: the automatic generation of order directives could produce nested loops that minimize the computation time by making the best use of the CPU memory hierarchy. This is a subject under study.
We have also shown how the modular graph allows us to address the issue of the automatic parallelization of the code generated by YAO. Indeed, a YAO modular graph is generated by a reduced graph, which is similar to the Reduced Dependence Graph (RDG) used in the automatic parallelization of nested loops. This similarity allows the adaptation to YAO of the algorithms that were developed in this research field. We have thus presented here how the Allen-Kennedy [14] and Kennedy-McKinley [13] algorithms can be integrated and adapted in
order to enable the automatic parallelization, via multiple threads on parallel shared memory architectures, of the application code generated by YAO. In the backward procedure the modular graph is furthermore used to decompose each thread domain into three subdomains, whose appropriate sizes enable us to completely avoid the race conditions occurring in this backward procedure. We have also presented performance results of the generated parallel OpenMP code on a multicore CPU for both simple (Shallow-water, Marine acoustics) and complex (NEMO) real applications. We automatically obtain good speedups for these applications, with up to around 80% parallel efficiency on 8 or 12 CPU cores, within the limits of the parallelism available in each application.
More advanced transformations (unimodular transformation, loop inversion, SIMD vectorization, tiling, . . . ) have already been developed in the context of automatic loop parallelization, especially via the polyhedral model [12, 17]. We are currently studying if and how this polyhedral model can be integrated in the YAO framework. In the future, we also plan to investigate the automatic generation of MPI code from OpenMP code in the YAO context in order to automatically scale data assimilation applications on distributed memory architectures. It can be noticed that the subdomain decomposition between border and inner subdomains, presented here to avoid race conditions, may help overlap MPI communications with computation in order to obtain the best speedups in a distributed memory context: here again, the modular graph of YAO may be very useful to automatically determine this subdomain decomposition for any variational data assimilation application. Finally, these automatically inserted OpenMP directives could also be rewritten as OpenACC¹⁰ directives in order to automatically generate parallel code for GPUs (Graphics Processing Units).
¹⁰Open industry standard of compiler directives for accelerators, see: http://www.openacc-standard.org/
References
Primers or Reminders?
The Effects of Existing Review Comments on Code Review
Davide Spadini
d.spadini@sig.eu
Software Improvement Group &
Delft University of Technology
Amsterdam & Delft, The Netherlands
Gül Çalikli
gul.calkili@gu.se
Chalmers & University of Gothenburg
Gothenburg, Sweden
Alberto Bacchelli
bacchelli@ifi.uzh.ch
University of Zurich
Zurich, Switzerland
1 INTRODUCTION
Peer code review is a well-established practice that aims at maintaining and promoting source code quality, as well as sustaining development teams by means of improved knowledge transfer, awareness, and solutions to problems [3, 5, 27, 41].
In the code review type that is most common nowadays [7], the author of a code change sends the change for review to peer developers (also known as reviewers), before the change can be integrated in production. Previous research on popular open-source software projects has found that three to five reviewers are involved in each review [44]. Using a software review tool, the reviewers and the author conduct an asynchronous online discussion to collectively judge whether the proposed code change is of sufficiently high quality and adheres to the guidelines of the project. In widespread code review tools, reviewers’ comments are immediately visible as they are written by their authors; could this visibility bias the other reviewers’ judgment?
If we consider the peer review setting for scientific articles, reviewers normally judge (at least initially) the merit of the submitted work independently from each other. The rationale behind such preference is to mitigate group members’ influences on each other that might lead to errors in the individual judgments [34]. It is reasonable to think that also in code review, the visibility of existing review comments made by other developers may affect one’s individual judgment, leading to an erroneous judgment.
An existing comment may prime new reviewers on a specific type of bug, due to availability bias [30]. Availability bias is the tendency to be influenced by information that can be easily retrieved from memory (i.e., easy to recall) [21]. This bias is one of the many cognitive biases identified in psychology, sociology, and management research [30]. Cognitive biases are systematic deviations from optimal reasoning [30, 47, 48]. In the cognitive psychology literature, Kahneman and Tversky showed that humans are prone to availability bias [51]. For example, one may avoid traveling by plane after having seen recent plane accidents on the news, or may see conspiracies everywhere as a result of watching too many spy movies [21]. Therefore, it seems fitting to imagine that a reviewer may be biased toward a certain bug type, by readily seeing another reviewer’s comment on such a bug type. This bias would likely result in a distorted code review outcome.
In this paper, we present a controlled experiment we devised and conducted to test the current code review setup and reviewers’ proneness to availability bias. More specifically, we examine whether priming a reviewer on a bug type (achieved by showing an existing review comment) biases the outcome of code review.
Our experiment was completed by 85 developers, 73% of whom reported at least three years of professional development experience. We required each developer to conduct a code review in which an existing comment was either shown (treatment group) or not (control group).
Based on the availability bias literature, we expected the primed participants (treatment group) to be more likely to find the bug of the same type (as it is already available in memory), but less likely to find the other bug type (since distracted by the comment). Surprisingly, instead, our results show that—for three out of four bugs—the code review outcome does not change between the treatment and control groups. After testing our results for robustness, we could find no evidence indicating that, for these three bugs, the outcome of the review is biased in the presence of an existing review comment priming them on a bug type. Only for one bug type, though, we have strong evidence that the behavior of the reviewers changed: When the previous review comment was about a type of bug that is normally not considered during developers’ coding/review practices (i.e., checking for NullPointerException on a method’s parameters), the reviewers were more likely to find the same type of bug with a strong effect.
Overall, we interpret the results of our experiment as an indication that existing review comments do not act as negative primers, rather as positive reminders. As such, our experiment provides evidence that the current collaborative code review practice, adopted by most software projects, could be more beneficial than separate individual reviews, not only in terms of efficiency and social advantages, but also in terms of its effectiveness in finding bugs.
2 BACKGROUND AND RELATED WORK
In this section, we review the literature on human aspects in contemporary code review practices, as well as studies on scientific peer review. Subsequently, we provide background on cognitive biases in general and present relevant studies in Software Engineering (SE). We also provide a separate subsection on availability bias, which consists of some theoretical background and existing research on availability bias in SE.
2.1 Human aspects in modern code review
Past research has provided evidence that human factors determine code review performance to a significant degree and that code review is a collaborative process [3]. Empirical studies conducted at companies such as Google [41] and Microsoft [3] revealed that, besides finding defects and ensuring maintainability, motivations for reviewing code are knowledge transfer (e.g., education of junior developers) and improving shared code ownership, which is closely related to team awareness and transparency.
Besides being a collaborative activity, code review is also demanding from a cognitive point of view for the individual reviewer. A large amount of research is focused on improving code review tools and processes based on the assumption that reducing reviewers’ cognitive load improves their code review performance [7, 50]. For instance, Baum et al. [9] argue that the reviewer and review tool can be regarded as a joint cognitive system, also emphasizing the importance of off-loading cognitive processes from the reviewer to the tool. Ebert et al. [16] conducted a study to understand the factors that confuse code reviewers through a manual analysis of 800 comments from code reviews of the Android project, and later built a series of automatic classifiers (e.g., Multinomial Naive Bayes, OneR) for the identification of confusion in review comments. Baum et al. [8] conducted experiments to examine the association of working memory capacity and cognitive load with code review performance. They found that working memory capacity is associated with the effectiveness of finding de-localized defects. However, the authors could not find substantial evidence of an influence of change part ordering on mental load or review performance. Spadini et al. [46] designed and conducted a controlled experiment to investigate whether examining the changed test code before the changed production code (also known as Test Driven Code Review or TDR) affects code review effectiveness. According to the findings of Spadini et al., developers adopting TDR find the same number of defects in production code, but more defects in test code and fewer maintainability issues in the production code.
Significantly related to the work we present in this paper is the recent empirical observational study by Thongtanunam and Hassan [49]. They investigated the relationship between the evaluation decision of a reviewer and the visible information about a patch under review (e.g., comments and votes by prior co-reviewers) [49]. With an observational study on tens of thousands of patches from two popular open-source software systems, Thongtanunam and Hassan found that (1) the amount of feedback and co-working frequency between reviewer and patch author are highly associated with the likelihood of the reviewer providing a positive vote and that (2) the proportion of reviewers who provided a vote consistent with prior reviewers is significantly associated with the defect-proneness of a patch (even though other factors are stronger). These results corroborate the hypothesis that there is some sort of influence generated by the visible information about the change under review on the behavior of the reviewers [49]. In the work we present in this paper, we setup a controlled setting to investigate an angle of this influence further, hoping to shed more light on the causal connection between comments’ visibility and reviewers’ effectiveness.
2.2 Scientific peer review
Peer review is the main form of group decision making used to allocate scientific research grants and select manuscripts for publication. Many studies have demonstrated that individual psychological processes are subject to social influences [15]. This finding also points to issues that might arise during group decision making. Experimental results obtained by Deutsch and Gerard [15] show that when a group situation is created, normative social influences grossly increase, leading to errors in individual judgment. Based on the findings of this study, it is emphasized that group consensus succeeds only if groups encourage their members to express their own, independent judgments. Therefore, one of the procedures for peer review of scientific research grant applications is ‘written individual review’ [34]. With this review procedure, reviewers judge the merit of a grant application in written form, independently of one another, before the final decision maker approves or rejects an application. Written individual review can mitigate the influence of reviewers on one another on the way to reaching a collective judgment. It is also used in scientific venues to eliminate biases. There is also another form of review procedure, namely panel peer review, where a common
judgment is reached through mutual social exchange [34]. In panel peer review, a group of reviewers convene to jointly deliberate and judge the merit of an application before the funding decision is made. However, as also emphasized by Deutsch and Gerard [15], it is crucial to encourage individual members to express their own judgment without feeling under the pressure of normative social influences for proper functioning of group decision making.
2.3 Cognitive biases in software engineering
Cognitive biases are defined as systematic deviations from optimal reasoning [30, 47, 48]. In the past six decades, hundreds of empirical studies have been conducted showing the existence of various cognitive biases in humans’ thought processes [21, 48]. Although many theories explain why cognitive biases exist, Baron [6] stated that there is no evidence so far about the existence of a single reason or generative mechanism that can explain the existence of all cognitive bias types. Some theories see cognitive bias as the by-product of cognitive heuristics that humans developed due to their cognitive limitations (e.g., information processing power) and time pressure, whereas some relate them to emotions.
Human cognition is a crucial part of software engineering research since software is developed by people for people. In their systematic mapping study [30], Mohanani et al. report 37 different cognitive biases that have been investigated by software engineering studies so far. According to the results of this systematic mapping study, the cognitive biases that are most common in software engineering studies are anchoring bias, confirmation bias, and overconfidence bias. Anchoring bias results from forming initial estimates about a problem under uncertainty and focusing on these initial estimates without making sufficient modifications in the light of more recently acquired information [21, 47]. Anchoring bias has so far been studied in software engineering research within the scope of requirements elicitation [37], pair programming [19], software reuse [35], software project management [2], and effort estimation [25]. Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that affirms one’s prior beliefs or hypotheses [38]. The manifestations of confirmation bias during unit testing and how it affects software defect density have been widely studied in software engineering literature [11, 12, 24].
No positive effect of experience on the mitigation of confirmation bias has been discovered so far [10]. However, in some studies, participants who had been trained in logical reasoning and hypothesis testing skills manifested less tendency towards confirmatory behavior during software testing [10]. Ko and Myers identify confirmation bias among the cognitive biases that cause errors in programming systems [23]. Van Vliet and Tang indicate that during software architecture design, some organizations assign a devil’s advocate so that a proposal is not followed without any questioning [52]. Overconfidence bias manifests when a person’s subjective confidence in their judgement is reliably greater than the objective accuracy of that judgement [31]. This bias type has been studied within the context of pair programming [19], requirements elicitation [13], and project cost estimation [26].
Availability bias. Availability bias is the tendency to be influenced by information that can be easily retrieved from memory (i.e., easy to recall) [21]. The definition of availability bias was first formulated by Tversky and Kahneman [51], who conducted a series of experiments to explore this judgmental bias. However, including these original experiments, many psychology experiments do not go beyond comparing two groups (i.e., a control and a test group) that differ in availability. To the best of our knowledge, in the cognitive psychology literature, the only experiment providing evidence for the mediating process that manifests availability bias was devised by Gabrielcik and Fazio, who employed (memory) priming as the mediating process [18].
Availability bias has also been studied in SE research. De Graaf et al. [14] examined software professionals’ strategies for searching documentation by using think-aloud protocols. The authors claim that using an incorrect or incomplete set of keywords, or ignoring certain locations while looking for documents due to availability bias, might lead to huge losses. Mohan and Jain [29] claim that while performing changes in design artifacts, developers might, due to availability bias, focus on their past experiences, since such information can be easily retrieved from memory. However, such information might be inconsistent with the current state of the software system. Mohan and Jain [29] propose traceability among design artifacts as a solution to mitigate the negative effects of availability bias and other cognitive biases (i.e., anchoring and confirmation bias). Robbins and Redmiles [39] propose a software architecture design environment that supports designers by addressing their cognitive challenges, including availability bias. Jørgensen and Sjøberg [20] argue that while learning from software development experience, learning from the right experiences might be hindered by availability bias. The authors suggest conducting post-mortem project reviews to mitigate the negative effects of availability bias.
Overall, existing literature points to the potential risks associated with availability bias in SE. As our community has provided evidence that code review is a collaborative and cognitively demanding process and that the collaborative nature of code review also has the potential to affect individual reviewers’ cognition, availability bias could manifest itself during the code review process. This bias could hamper code review effectiveness. In our study, we aim to explore how existing review comments bias the code review outcome.
3 EXPERIMENTAL DESIGN
In this section, we explain the design of our experiment.
3.1 Research Questions and Hypotheses
The paper is structured along two research questions. By answering these research questions, we aim to understand to what extent contemporary code review is robust to reviewers’ availability bias, depending on the nature of the bug for which a previous comment exists on the code change. Our first research question and the corresponding hypotheses follow.
RQ1. What is the effect of priming the reviewer with a bug type that is not normally considered?
We hypothesize that an existing review comment about a bug type that reviewers do not usually consider (such as a null value passed as an argument [4, 7, 40, 42]) might prime the reviewers towards this bug type, so they find more of these bugs. Also, we hypothesize that—due to such priming—reviewers overlook bugs on which they were not primed. Hence, our formal hypotheses are:
**H010:** Priming subjects with bugs they usually do **not** consider does not affect their performance in finding bugs of the same type.
**H011:** Priming subjects with bugs they usually do **not** consider does not affect their performance in finding bugs they usually look for.
We also explore how priming on a bug that is usually considered during code reviews affects review performance. Therefore, our second research question is:
**RQ2:** What is the effect of priming the reviewer with a bug type that is normally looked for?
We hypothesize that an existing review comment about a bug type that reviewers usually consider also primes the reviewers towards this bug type, so that they find more of these bugs. Also, we expect primed reviewers to look only for the type of bugs on which they are primed, overlooking others. Hence, our formal hypotheses are:
**H020:** Priming subjects with bugs they usually consider does not affect their performance in finding bugs of the same type.
**H021:** Priming subjects with bugs they usually consider does not affect their performance in finding bugs they usually do **not** look for.
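Each of these hypotheses compares bug-finding proportions between the primed and not-primed groups, i.e., a 2×2 contingency table per bug (found vs. missed, Pr vs. NPr). Purely as an illustration of how such tables can be tested — the paper's actual analysis method is not specified in this section — here is a self-contained two-sided Fisher's exact test (our own implementation; names are ours):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]], e.g. (found, missed) counts of a given bug in the
    primed vs. not-primed group.  Sums the probabilities of all tables with
    the same margins that are at least as unlikely as the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(x):  # hypergeometric probability of x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)
```

On Fisher's classic "lady tasting tea" table [[3, 1], [1, 3]], this yields the familiar two-sided p-value of about 0.486.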
### 3.2 Experiment Design and Structure
To conduct the code review experiment and to assess participants’ proneness to availability bias, we extend the browser-based tool CRExperiment [43]. The tool allows us to (i) visualize and perform a code review, (ii) collect subjects’ demographic information through questions as well as data on participants’ interactions with the tool, and (iii) collect data to measure subjects’ proneness to availability bias, by using a memory priming set-up to trigger subjects’ use of the availability heuristic, followed by a survey. Both the priming set-up and the survey are inherited from a classic experiment in the cognitive psychology literature designed by Gabrielcik and Fazio [18].
#### Code Review Experiment Overview
For the code review experiment, we follow independent measures design [22] augmented with some additional phases. The following stages in the browser-based tool correspond to the code review experiment:
1. **Welcome Page:** The welcome page provides participants with information about the experiment. This page also aims to avoid demand characteristics [33], which are cues and hints that can make the participants aware of the goals of this research study leading to change in their behaviour during the experiment. For this purpose, we do not inform the participants about the full purpose of the experiment, rather they are only told that the experiment aims to compare code review performance under different circumstances. Before starting the experiment, the subjects are also asked for their informed consent.
2. **Participants’ Demographics:** On the next page, subjects are asked questions to collect demographic information as well as confounding factors, such as: (i) gender, (ii) age, (iii) proficiency in the English language, (iv) highest obtained education degree, (v) main role, (vi) years of experience in software development, (vii) current frequency of software development, (viii) years of experience in Java programming, (ix) years of experience in doing code reviews, (x) current frequency of doing code reviews, and (xi) the number of hours subjects worked that day. Subjects must answer these questions before proceeding to the next page, where they receive more information about the code review experiment they are about to take part in. We ask these questions to measure subjects’ real, relevant, and recent experience. Collecting such data helps us to identify which portion of the developer population is represented by the subjects who take part in our experiment [17].
3. **Actual Experiment:** Each participant is then asked to perform a code review and is randomly assigned to one of the following two treatments:
- **Pr (primed)** – The subject is given a code change to review where there exists a review comment (made by a previous reviewer) about a bug in the code. The test group of our experiment comprises the subjects who are assigned to this treatment.
- **NPr (not–primed)** – The subject is given a code change to review. In the code change, there are no comments made by any other reviewers. The control group of our experiment comprises the subjects who are assigned to this treatment.
More specifically, the patch to review contains three bugs: two of the same type (i.e., BugA) and one of a different type (i.e., BugB). In the **Pr** group, the review starts with a comment made by another reviewer showing that one instance of BugA is present. The participant is then asked to continue the review. In the **NPr** group, the review starts without comments. The comments shown to the participants in the **Pr** group were written by the authors, and the wording was refined with feedback from the pilots (Section 3.5). Each participant is asked to take the task very seriously. More specifically, we ask them to find as many defects as possible and, like in real life, to spend as little time as possible on the review. However, unlike in real life, we ask them not to pay attention to maintainability or design issues, but only to correctness issues (“bugs”). For example, we discard comments regarding variable naming or small refactorings.
4. **Interruptions during the Experiment:** Immediately after completing the code review, the participants are asked whether they were interrupted during the task and for how long.
5. **Follow-up Questions:** In the last page of the code review experiment, the participants are shown the code change they just reviewed together with the bugs disclosed: For each bug, we show it and explain why it is a defect and in what cases
> **Instructions**
>
> We are now going to show you the code changes to review. The old version of the code is on the left, the new version is on the right.
>
> For the scientific validity of this experiment, it is vital that the review task is taken very seriously.
>
> - Like in real life, you should find as many defects as possible and you should spend as little time as possible on the review.
> - Unlike in real life, we are not interested in maintainability or design issues, but only in correctness issues ("bugs").
>
> For example, a remark like the following is beyond the goal of the review: "Create a new class which is implemented by runnable interface that we can access multiple times." Instead, what we are interested in are the defects that make the code not work as intended under all circumstances.
>
> Please assure that the code compiles and that the tests pass.
>
> You will see that a previous reviewer already put a comment in line 33. You are now asked to continue with your review.
>
> To add a review, click on the corresponding line number. To delete a review mark, click on it again and delete the remark's text.

Figure 1: Example of a code review using the tool.
it might fail. Then, for each bug, we ask the participants to indicate whether they captured it in the review:
- If the participants found the bug and they belonged to the Pr group, we ask them to what extent the comment of the previous reviewer influenced the discovery of the bug (using a 5-point Likert scale).
- If the participants did not find the bug (independently whether they were in the Pr or NPr group), we ask them to elaborate on why they think they missed the bug.
Assessment of Proneness to Availability Bias. The code review experiment is followed by a set-up that primes participants’ memory to trigger availability bias. This set-up serves as a mediating process to manipulate availability bias so that we can measure the extent to which each subject is prone to this type of cognitive bias. To measure this phenomenon, we inherited the test part of the controlled experiment of Gabrielcik and Fazio [18]. In the original experiment, the difference in the results of the control and test groups showed that (memory) priming triggered the participants’ availability bias. There are three reasons why we selected this experiment for assessing the proneness to availability bias: (i) to the best of our knowledge, it is the only experiment where the underlying cognition mechanism (i.e., memory priming) that triggers availability bias is explicitly devised; (ii) the memory priming mechanism is also employed in the code review experiment to trigger participants’ availability bias; and (iii) the survey in the original experiment makes it possible to quantitatively assess participants’ proneness to availability bias. Therefore, the remaining stages in the browser-based tool comprise the following:
(1) Welcome Page: We provide a second welcome page in which, to avoid demand characteristics [33], the participants are told that they are about to participate in an experiment that aims to explore software engineers’ attention by testing a set of visual stimuli, instead of the actual goal.
(2) Warm-up Session: We proceed with a warm-up session in which participants are asked to focus on a series of 20 words flashing once each on the screen. The words are randomly selected from the English dictionary, and none of them contain the letter ‘T’. Each word flashes for 300 ms. At the end of the warm-up, we ask the participants to write three words they have seen and recall, and to make a guess if they do not remember them.
(3) Actual Psychology Experiment: After the warm-up, we proceed with the actual psychology experiment: this time, we show two series of 20 words, all of them including the letter ‘T’.
The objects of the study are represented by the code changes to review. The NPE bug concerns a method parameter that is assumed not to be null: hence, we use it as the not normally considered bug that we investigate in RQ1. Instead, BugA in the second change (RQ2) does not regard a parameter, to make sure that it is a bug type that developers normally look for in a review.
3.4 Variables and Measurement Details
We aim to investigate whether participants that are primed on a specific type of bug are more likely to capture only that type of bug. To understand whether the subjects did find the bug (i.e., the value for our dependent variables), we proceed with the following steps: (1) the first author of this paper manually analyzes all the remarks added by the participants (each remark is classified as identifying a bug or being outside of the study’s scope), then (2) the authors cross-validate the results with the answer given by the participants (as explained in Section 3.2, after the experiment the participants had to indicate whether they captured the bugs).
In Table 1, we present all the variables of our model. The main independent variable of our experiment is the treatment (Pr or NPr). We consider the other variables as control variables, which include the time spent on the review, the participant’s role, years of experience in Java and code review, and tiredness. Finally, we run a logistic regression model similar to the one used by McIntosh et al. [28] and Spadini et al. [46]. To ensure that the selected logistic regression model is appropriate for the available data, we first (1) compute the Variance Inflation Factors (VIF) as a standard test for multicollinearity, finding all the values to be below 3 (values should be below 10), thus indicating little or no multicollinearity among the independent variables, (2) run a multilevel regression model to check whether there is a significant variance among reviewers, but we found little to none, thus indicating that a single-level regression model is appropriate, and, finally, (3) when building the model we added the independent variables step-by-step and found that the coefficients remained stable, thus further indicating little to no interference among the variables. For convenience, we include the script in our publicly available replication package [45].
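As an illustration of the multicollinearity check, the VIF of a predictor is 1/(1 − R²), where R² comes from regressing that predictor on all the other predictors. A minimal NumPy sketch on synthetic data (not the study’s dataset; all names are ours):

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of X (n_samples x n_predictors)."""
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        # auxiliary regression of column j on the remaining columns (plus intercept)
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
        vifs.append(1.0 / (1.0 - r2))
    return vifs

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))       # independent predictors -> VIFs close to 1
print([round(v, 2) for v in vif(X)])
# adding a near-duplicate of the first column inflates its VIF far above 10
X_col = np.column_stack([X, X[:, 0] + 0.01 * rng.normal(size=200)])
print(round(vif(X_col)[0], 1))
```

With all VIFs below 3, as reported above, the predictors can safely enter a single regression model.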
Availability bias score. We calculate availability bias scores as in the original experiment by Gabrielcik and Fazio [18]. The frequency comparisons on the 9-point scale were scored by assignments of a value between +4 and −4. Positive numbers were assigned for ratings indicating that letter ‘T’ was contained in more words than the other letter, while negative numbers were assigned in favour of the other letter. We calculated the availability bias score for each participant as the average (and also median) of values for the 5 relevant questions.
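The scoring can be sketched as follows; the encoding of the 9-point scale as 1–9 (with 5 as the midpoint) is our assumption for illustration, chosen so that each rating maps onto the +4…−4 range described above:

```python
def bias_score(ratings):
    """
    ratings: answers on the 9-point frequency-comparison scale, encoded 1..9,
    where 9 means the letter 'T' was judged far more frequent, 1 means the
    other letter was judged far more frequent, and 5 means equally frequent.
    Each rating maps to a value in [-4, +4]; the score is the mean over the
    5 relevant questions.
    """
    values = [r - 5 for r in ratings]   # 1..9 -> -4..+4
    return sum(values) / len(values)

# A participant who consistently overestimates 'T' words gets a positive score:
print(bias_score([7, 8, 6, 9, 7]))  # -> 2.4
```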
3.5 Pilot Runs
As the first version of the experiment was ready, we started conducting pilot runs to (1) verify the absence of technical errors in the online platform, (2) check the ratio with which participants were able to find the injected bugs (regardless of their treatment group), (3) tune the experiment on the proneness to availability bias (in terms of flashing speed and number of words to ask), (4) verify the understandability of the instructions as well as the user interface, and (5) gather qualitative feedback from the participants. We conducted three different pilot runs, for a total of 20 developers. The participants were recruited through the professional network of the study authors to ensure that they would take the task seriously. After the third run, the required changes were minimal, and we considered the experiment ready for its main run. To provide a small incentive to participate, we introduced a donation-based incentive of five USD to a charity.
Regarding defects and code changes, the first author prepared the code changes and corresponding test code, as well as injecting the defects into these code changes. These were later checked by the other authors. The code change and its corresponding test code were on the same page, and subjects had to scroll down to proceed to the next page of the online experiment. In this way, we aimed to ensure that subjects saw the test code. Test code was added to make the experiment closer to a real-world scenario.
### 4 THREATS TO VALIDITY

### Construct Validity. A major threat is that the artificial experiment we created could differ from a real-world scenario. We mitigated this issue by (1) re-creating as closely as possible a real code change (for example, submitting test code and documentation together with the production code), and (2) using an interface that is identical to the common code review tool Gerrit [1] (both our tool and Gerrit use Mergely [36] to show the diff, also using the same color scheme).
### Internal Validity. Threats to internal validity concern factors that might affect the cause-and-effect relationship investigated through the experiment. Due to the online nature of the experiment, we cannot ensure that our subjects conducted it in the same set-up (e.g., noise level and web searches); however, we argue that developers in real-world settings also work with a variety of tools and environments. Moreover, to mitigate the possible threat posed by the missing control over subjects, we included some questions to characterize our sample (e.g., experience, role, and education).
To prevent duplicate participation, we adjusted the settings of the online experiment platform so that each subject can take the experiment only once. To exclude participants who did not take the experiment seriously, we screened each review and we did not consider experiments without any comments in the review, that took less than five minutes to be completed, or that were not completed at all.
Furthermore, several background factors (e.g., age, gender, experience, education) may have impact on the results. Hence, we collected all such information and investigated how these factors affect the results by conducting statistical tests.
### External Validity. Threats to external validity concern the generalizability of the results. To have a diverse sample of subjects (representative of the overall population of software developers who
**Table 1: Variables used in the statistical model.**

<table>
<thead>
<tr>
<th>Metric</th>
</tr>
</thead>
<tbody>
<tr><td><strong>Dependent Variables</strong></td></tr>
<tr><td>FoundPrimed</td></tr>
<tr><td>FoundNotPrimed</td></tr>
<tr><td><strong>Independent Variable</strong></td></tr>
<tr><td>Treatment</td></tr>
<tr><td>Gender</td></tr>
<tr><td>Age</td></tr>
<tr><td>LevelOfEducation</td></tr>
<tr><td>Role</td></tr>
<tr><td>ProfDevExp</td></tr>
<tr><td>JavaExp</td></tr>
<tr><td>ProgramPractice</td></tr>
<tr><td>ReviewPractice</td></tr>
<tr><td>ReviewExp</td></tr>
<tr><td>WorkedHours</td></tr>
<tr><td>Tired</td></tr>
<tr><td>Stressed</td></tr>
<tr><td>Interruptions</td></tr>
<tr><td>TotalDuration</td></tr>
<tr><td>PsychoExpIsPrimed</td></tr>
</tbody>
</table>

(†) see Figure 2 for the scale
employ contemporary code review), we invited developers from several countries, organizations, education levels, and background.

### 5 RESULTS
In this section, we report the results of our investigation on whether and how having a comment from a previous reviewer influences the outcome of code review.
#### 5.1 Validating The Participants
A total of 243 people accessed our experiment environment following the provided link. From these participants, we exclude all the instances in which the code change is skipped or skimmed, by demanding either at least one entered remark or more than five minutes spent on the review. After applying the exclusion criteria, a total of 85 participants are selected for the subsequent analyses.
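The screening rule described above can be sketched as a simple predicate (a minimal illustration; field names and the example sessions are ours):

```python
def keep_review(n_remarks, minutes_spent, completed):
    """Screening rule sketched from the exclusion criteria: keep a session
    only if it was completed and shows real engagement, i.e. at least one
    entered remark or more than five minutes spent on the review."""
    return completed and (n_remarks >= 1 or minutes_spent > 5)

sessions = [
    {"n_remarks": 3, "minutes_spent": 12, "completed": True},   # kept
    {"n_remarks": 0, "minutes_spent": 2,  "completed": True},   # skimmed -> dropped
    {"n_remarks": 2, "minutes_spent": 8,  "completed": False},  # abandoned -> dropped
]
kept = [s for s in sessions if keep_review(**s)]
print(len(kept))  # -> 1
```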
Figure 2 presents the descriptive statistics on what the participants reported in terms of their role, experience, and practice. The majority of the participants are programmers (67%) and reported having many years of experience in professional software development (73% more than 3 years, 47% more than 6); most program daily (69%) and review code at least weekly (63%).
Table 2 shows how the participants are distributed across the considered treatments and code changes. The automated assignment algorithm allowed us to obtain a rather balanced number of reviews per treatment and code change.
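The paper does not detail the assignment algorithm; one plausible way to obtain such balance is minimization, i.e., assigning each new participant to the currently least-filled (treatment, code change) cell, breaking ties at random. A sketch under that assumption:

```python
import random

def assign(counts, rng):
    """Assign the next participant to the least-filled (treatment, change) cell,
    breaking ties at random (a hypothetical reconstruction, not the study's code)."""
    least = min(counts.values())
    candidates = [cell for cell, c in counts.items() if c == least]
    cell = rng.choice(candidates)
    counts[cell] += 1
    return cell

rng = random.Random(1)
counts = {(t, c): 0 for t in ("Pr", "NPr") for c in ("Change1", "Change2")}
for _ in range(85):          # 85 valid participants, as in Table 2
    assign(counts, rng)
print(sorted(counts.values()))  # cell sizes differ by at most one
```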
#### 5.2 RQ1. Priming a not commonly reviewed bug
To investigate our first research question, the participants in our test group (Pr) are primed on a NullPointerException (NPE) bug in a method’s parameter. We expect this type of bug to be missed by most non-primed reviewers, because reviewers would normally assume that parameters are checked by the calling function [4, 40, 42].
Table 3 reports the results of the experiment by treatment group. From the first part of the table (primed bug), we can notice that participants in the Pr group found the other NPE bug 62% of the times, while participants in the NPr group only 11%. Expressed in odds, this result means that the NPE defect is 12 times more likely to be found by a participant in the Pr group. The main reasons reported by the participants in the NPr group for missing this bug are that (1) they were too focused on the logic and not thorough enough when it came to corner cases, (2) they did not pay attention to the fact that the Integer could be null, and (3) they generally do not check for NPE, but assume they will not receive a wrong object as input.
As expected, even though NullPointerException has been reported to be the most common bug in Java programs [53], developers stated they rarely sanity-check objects. However, as shown in Table 3, the result drastically changes when a previous reviewer points out that an NPE could be raised: in this case, many of the participants in the Pr group looked for other NPE bugs in the code.
When we look at whether the Pr group was primed by the previous reviewer’s comment (hence whether they were able to capture the bug because they had been primed), 40% indicated they were ‘Extremely influenced’, 40% ‘Very influenced’, and 20% ‘Somewhat influenced’. Hence, the reviewers perceived that they had been influenced by the existing comment.
We find a statistically significant relationship ($p < 0.001$, assessed using $\chi^2$) of strong positive strength ($\phi = 0.5$) between the treatment and the discovery of the primed bug.
---
**Table 2: Distribution of participants ($N = 85$) across the various treatment groups.**
<table>
<thead>
<tr>
<th></th>
<th>Primed (Pr)</th>
<th>Not Primed (NPr)</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>CodeChange1</td>
<td>21</td>
<td>17</td>
<td>38</td>
</tr>
<tr>
<td>CodeChange2</td>
<td>22</td>
<td>25</td>
<td>47</td>
</tr>
<tr>
<td>Total</td>
<td>43</td>
<td>42</td>
<td>85</td>
</tr>
</tbody>
</table>
**Table 3: Odds ratio for capturing the primed and not primed bug in the test (Pr) and control (NPr) group.**
<table>
<thead>
<tr>
<th>Bug Type</th>
<th>Primed (Pr)</th>
<th>Not Primed (NPr)</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>NPE bug found</td>
<td>13</td>
<td>2</td>
<td>15</td>
</tr>
<tr>
<td>NPE bug not found</td>
<td>8</td>
<td>15</td>
<td>23</td>
</tr>
</tbody>
</table>
Odds Ratio: 12.19 (2.19, 67.94)
$p < 0.001$
<table>
<thead>
<tr>
<th>Bug Type</th>
<th>Primed (Pr)</th>
<th>Not Primed (NPr)</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>Not primed bug found</td>
<td>14</td>
<td>14</td>
<td>28</td>
</tr>
<tr>
<td>Not primed bug not found</td>
<td>7</td>
<td>3</td>
<td>10</td>
</tr>
</tbody>
</table>
Odds Ratio: 0.43 (0.09, 2.00)
$p = 0.275$
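The statistics reported with Table 3 can be reproduced directly from the 2×2 counts; a short sketch using the standard Woolf (logit) confidence interval and the φ effect size (our reconstruction of a textbook computation, not the study’s script):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/logit method) for the 2x2 table
    [[a, b], [c, d]] = [[found&Pr, found&NPr], [missed&Pr, missed&NPr]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

def phi(a, b, c, d):
    """Phi coefficient (effect size) of the same 2x2 table."""
    return (a*d - b*c) / math.sqrt((a+b) * (c+d) * (a+c) * (b+d))

# RQ1, primed NPE bug: found 13 (Pr) vs 2 (NPr); missed 8 vs 15
print(odds_ratio_ci(13, 2, 8, 15))  # matches the reported 12.19 (2.19, 67.94)
print(round(phi(13, 2, 8, 15), 2))  # -> 0.51, the reported phi of about 0.5
```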
In Table 4 we show the result of our statistical model, taking into account the characteristics of the participants and reviews. The model confirms the result shown in Table 3: even taking into account all the variables, the \textit{isPrimed} variable is statistically significant exclusively for the primed bug. The other variable statistically significant in the model is ‘Interruptions’, that is the number of times the participant has been interrupted during the experiment: the estimate has a negative value, which means the higher the number of ‘Interruptions’, the lower the number of bugs captured, as one can expect.
For the not primed bug, instead, none of the variables is statistically significant (‘TotalDuration’ and ‘ReviewExp’ are marginally significant, with \( p < 0.1 \)).
\begin{table}[h]
\centering
\caption{Regressions for primed and not primed bugs.}
\begin{tabular}{|l|ccc|ccc|}
\hline
& \multicolumn{3}{c|}{Primed bug} & \multicolumn{3}{c|}{Not primed bug} \\
& Estimate & S.E. & Sig. & Estimate & S.E. & Sig. \\
\hline
Intercept & 0.704 & 4.734 & -0.893 & 4.093 & -0.696 & 4.093 \\
IsPrimed & 3.627 & 1.320 & \* & -1.199 & 1.073 & . \\
TotalDuration & 0.001 & 0.002 & . & 0.003 & 0.001 & . \\
ProfDevExp & 0.813 & 0.557 & . & -0.503 & 0.554 & . \\
ProgramPractice & -0.096 & 0.828 & . & -0.243 & 0.736 & . \\
ReviewExp & -0.070 & 0.630 & . & -0.813 & 0.651 & . \\
ReviewPractice & -1.152 & 0.758 & . & 1.243 & 0.643 & . \\
Tired & -0.834 & 0.832 & . & 0.517 & 0.651 & . \\
WorkedHours & -0.069 & 0.196 & . & 0.305 & 0.207 & . \\
Interruptions & -1.752 & 0.758 & * & 0.715 & 0.444 & . \\
\hline
\end{tabular}
\end{table}
\textbf{Finding 1.} Reviewers primed on a bug type that is not commonly considered are more likely to find other occurrences of this type of bug. However, this does not prevent them from finding other types of bugs as well.
\subsection*{5.3 RQ2. Priming on an algorithmic bug}
To investigate our second research question, the participants in our test group (Pr) are primed on an algorithmic bug, more specifically a corner case (CC) bug. The results of this experiment are shown in Table 5. Participants in both groups found the primed bug about 50% of the time; indeed, the difference is not statistically significant (\( p = 0.344 \)). If we consider whether the test group was primed by the previous reviewer’s comment, 50% of the participants reported that they were 'Extremely influenced', 10% were 'Somewhat influenced', and 40% were slightly or not influenced; this suggests that the reviewers perceived a lower influence from this comment, even though it referred to the same type of bug as one of the other two bugs in the same code change.
Among the main reasons for missing the bug, participants mainly stated that (1) the tests led them to not consider that corner case, and (2) they focused more on the first one. Hence, given this result, we can conclude that the participants who saw the review comment did not find the similar bug more often than the participants who did not see it.
In the second part of Table 5, we indicate whether the participants were able to find the not primed bug. The test and control groups are very similar in this case, too: in both groups the bug is found around 50% of the time and the difference is not statistically significant. When looking at the participants’ comments on why they missed this bug, the main reasons are (1) that they forgot to try the specific corner case, and (2) that they assumed the tests covered all the corner cases. The reasons for not capturing the defects were similar in both groups. Given this result, we cannot reject \( H_{011} \): priming the participants on a specific type of bug did not prevent them from capturing the other type of bug.
\textbf{Finding 2.} Reviewers primed on an algorithmic bug perceive an influence, but are as likely as the others to find algorithmic bugs. Furthermore, primed participants did not capture fewer bugs of the other type.
Table 6: Regressions for primed and not primed bugs.
<table>
<thead>
<tr>
<th rowspan="2"></th>
<th colspan="3">Primed bug</th>
<th colspan="3">Not primed bug</th>
</tr>
<tr>
<th>Estimate</th>
<th>S.E.</th>
<th>Sig.</th>
<th>Estimate</th>
<th>S.E.</th>
<th>Sig.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intercept</td>
<td>-1.0510119</td>
<td>2.2409628</td>
<td></td>
<td>-1.037e-01</td>
<td>2.560e+00</td>
<td></td>
</tr>
<tr>
<td>IsPrimed</td>
<td>0.9260383</td>
<td>0.7223408</td>
<td></td>
<td>-1.670e-01</td>
<td>7.740e-01</td>
<td></td>
</tr>
<tr>
<td>TotalDuration</td>
<td>0.0018592</td>
<td>0.0008958</td>
<td>*</td>
<td>9.561e-01</td>
<td>9.979e-04</td>
<td></td>
</tr>
<tr>
<td>ProfDevExp</td>
<td>-0.6031309</td>
<td>0.3381302</td>
<td></td>
<td>-9.437e-02</td>
<td>3.721e-01</td>
<td></td>
</tr>
<tr>
<td>ProgramPractice</td>
<td>0.5319636</td>
<td>0.5905427</td>
<td></td>
<td>-1.061e-00</td>
<td>7.535e-01</td>
<td></td>
</tr>
<tr>
<td>ReviewExp</td>
<td>0.3411589</td>
<td>0.4548836</td>
<td></td>
<td>1.284e-01</td>
<td>4.660e-01</td>
<td></td>
</tr>
<tr>
<td>ReviewPractice</td>
<td>0.1531502</td>
<td>0.3784472</td>
<td></td>
<td>1.211e+00</td>
<td>4.683e-01</td>
<td>**</td>
</tr>
<tr>
<td>Tired</td>
<td>0.0835410</td>
<td>0.3706085</td>
<td></td>
<td>2.486e-01</td>
<td>4.598e-01</td>
<td></td>
</tr>
<tr>
<td>WorkedHours</td>
<td>-0.1619234</td>
<td>0.1184626</td>
<td></td>
<td>2.257e-01</td>
<td>1.542e-01</td>
<td></td>
</tr>
<tr>
<td>Interruptions</td>
<td>-0.1755183</td>
<td>0.3220796</td>
<td></td>
<td>-1.331e-01</td>
<td>3.630e-01</td>
<td></td>
</tr>
</tbody>
</table>
(significance codes: **** p < 0.0001, *** p < 0.001, ** p < 0.01, * p < 0.1)
(1) Role is not significant and is omitted for space reasons
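The significance legend used in the regression tables can be encoded as a small helper (a sketch; the use of ‘.’ for entries below none of the thresholds is our reading of the tables):

```python
def sig_code(p):
    """Map a p-value to the significance codes used in the regression tables:
    **** p < 0.0001, *** p < 0.001, ** p < 0.01, * p < 0.1."""
    if p < 0.0001:
        return "****"
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.1:
        return "*"
    return "."  # assumed marker for 'not significant', as in the table cells

print(sig_code(0.0018), sig_code(0.25))  # -> ** .
```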
5.4 Robustness Testing
In the previous sections, we presented the results of our study on whether and to what extent reviewers can be primed during code review by showing an existing code review comment. Surprisingly, the results showed that many of our hypotheses were not confirmed: in our experiment, only in one case did primed reviewers capture more bugs than the not primed group; in all the other cases, reviewers from both groups captured the same bugs.
To further challenge the validity of these findings, in this section, we employ robustness testing [32]. For this purpose, we test whether the results obtained by our baseline model hold when we systematically replace the baseline model specification with the following plausible alternatives.
**Bugs were too simple or too complicated to find.** Choosing the right defects to inject in the code change is fundamental to the validity of our results. If a defect is too easy to find, participants might find the bugs regardless of any other influencing factor, even without paying much attention to the review (on the other hand, if it is too complicated, reviewers might not find any bug and get discouraged from continuing). We measured that ~50% of the participants found the three types of defects that we expected them to find, thus ruling out the possibility that these bugs were either too trivial or too difficult to find.
**People were not primed.** The entire experiment is based on the premise that reviewers in the Pr group were correctly primed. Even though we cannot verify this premise (the experiment is online, hence there is no interaction between the researchers and the participants), after the code review experiment the participants had to indicate whether they were influenced by the comment of the previous reviewer in capturing the bug. As we stated in Section 5.2 and Section 5.3, 70% of the participants indicated they were extremely or very influenced, while only 18% indicated somewhat or slightly influenced (12% were neutral). This gives an indication that the participants felt they were indeed primed, but this did not influence their ability to find other bugs.
Nevertheless, the reported level of being influenced is subjective, so not fully reliable (participants could think they had been influenced, but were not). To triangulate this result, we test another possibility: one of the possible explanations of why participants may not have been primed is that our sample of participants was “immune” to priming or very difficult to prime. Indeed, there is no study confirming that developers are as affected by priming as the general population (on which the past experiment was conducted). To rule out this possibility, we devised the psychology experiment: we tested whether developers can be primed as expected using visual stimuli. Our results show that ~70% of the participants were primed as expected.
**Not enough participants.** Another possibility of why we do not find a difference is that we did not have enough participants. Even though 85 participants is quite large in comparison to many experiments in software engineering [8] and we tried to design an experiment that would create a strong signal, we cannot rule out that the significance was missing due to the number of participants. However, even if the results were statistically significant (assuming we had the same ratios, but an order of magnitude more of participants), the size of the effect (calculated using the $\phi$ coefficient) would be ‘none to very negligible’. This suggests that there was no emerging trend and that, even having more participants, we could have probably obtained a significant, yet trivial effect.
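The reasoning in this paragraph rests on the fact that, for a 2×2 table with fixed proportions, the φ effect size does not change with the sample size, while the χ² statistic (and hence the significance) grows linearly with N. A quick check with illustrative counts (taken from the not primed comparison in Table 3, scaled by 10):

```python
import math

def phi(a, b, c, d):
    """Phi coefficient of the 2x2 table [[a, b], [c, d]]."""
    return (a*d - b*c) / math.sqrt((a+b) * (c+d) * (a+c) * (b+d))

def chi2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table, via chi2 = N * phi^2."""
    n = a + b + c + d
    return n * phi(a, b, c, d) ** 2

print(phi(14, 14, 7, 3), phi(140, 140, 70, 30))    # identical effect size
print(chi2(14, 14, 7, 3), chi2(140, 140, 70, 30))  # statistic grows 10x
```

So with ten times as many participants and the same ratios, the result could become significant while the effect size stayed equally negligible.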
**Some participants did not perform the task seriously.** Finally, one of the reasons why we did not confirm most of our hypotheses could be that some participants did not take the task seriously, hence they might have performed poorly and have altered the results. Having used a random assignment and having a reasonably large number of participants, we have no reason to think that one group had more ‘lazy’ participants than the others. Moreover, as we discussed in Section 3, to exclude participants who did not take the experiment seriously, we filtered out experiments without any comments in the review (even if there were comments, the first author manually validated them to check whether they were appropriate and they were/not capturing a bug); we also did not consider reviews that took less than five minutes to be completed, or that were not completed at all (maybe because the participant left after few minutes).
Alternatively, it would be possible that participants who were more serious focused more and found more bugs (regardless of the priming), while less serious ones would just find one and leave the experiment. To test also this possibility, we compared the likelihood of a participant in finding a second bug when a first one was found. Also in this case, we did not find any statistically significant effect, thus ruling out this hypothesis as well.
6 DISCUSSION
We discuss the main implications and results of our study.
**Robustness of code review against availability bias.** The current code review practice expects reviewers to review and comment on the code change asynchronously, and reviewers’ comments are immediately visible to both the author and other reviewers.
One of the main hypotheses we stated in our study is that the code review outcome is biased because reviewers are primed by the visibility of existing comments on a bug. Indeed, if reviewers get primed by previously made comments about some bug(s), they could find more bugs of that specific type while overlooking other types of bugs. This would, in turn, undermine the effectiveness of the code review process, creating a demand for a different approach.
To create a different approach, one might consider adopting a review method similar to that of scientific venues, where reviewers do not see the comments of the other reviewers until they submit their own review. Even though this strategy would reduce the transparency of the code review process, undermining knowledge transfer, team awareness, and shared code ownership, and would probably lead to a loss in review efficiency due to duplicate bug detection, it would be necessary if the biasing effect of other reviewers' comments were strong.
Our experiment results show that the participants in the test group were positively influenced by the existing comment on the code change, so that they captured more bugs of the same type. However, unexpectedly, they were still able to capture bugs of a different type, as the control group did. Like any human, reviewers are prone to availability bias [21] to various extents. However, our results did not find evidence of a strong negative effect of reviewers' availability bias. Therefore, our data does not provide any evidence that would justify a change in current code review practices.
**Existing comments on normally not considered bugs act as (positive) reminders rather than (negative) primers.** Surprisingly, participants in the test group who were primed with the algorithmic bug type (more specifically, a corner case bug) detected the same amount of corner case and NullPointerException (NPE) bugs as the participants in the control group. However, participants who were primed with a bug that is normally not considered in review (i.e., NPE) were 12 times more likely to capture this type of bug than the participants of the control group.
This result shows that existing reviewer comments on a code change seem to support recalling (i.e., act as a reminder) rather than distracting the reviewer. As previously mentioned in Section 5.2, participants in the test group indicated that they were focused on the corner cases in the code change and did not pay attention to the possibility that the Integer could be null. Such feedback is in line with the possible existence of anchoring bias [21, 47].
It is likely that the existence of a reviewer comment on an uncommon bug had a de-biasing effect on the participants in the test group (i.e., it mitigated the participants' bias). In the software engineering literature, there are empirical studies on practitioners' anchoring bias. For instance, Pitts and Brown [37] provide procedural prompts during requirements elicitation to help analysts avoid anchoring on currently available information. According to the findings of Jain et al. [19], pair programming novices tend to anchor to their initial solutions due to their inability to identify a wider range of solutions. However, to the best of our knowledge, there are no studies on anchoring bias within the context of code reviews. Therefore, further research is required to investigate the underlying cognitive mechanisms that can explain why existing reviewer comments on unexpected bugs act as reminders.
7 CONCLUSIONS
We investigated the robustness of peer code review against reviewers' proneness to availability bias. For this purpose, we conducted an online experiment with 85 participants. Although the majority of the participants (i.e., ~70%) were assessed to be prone to availability bias (median = 3.8, max = 4), we did not observe any priming effect of existing review comments on bugs. However, reviewers primed on bugs not normally considered in code review were more likely to find more bugs of this type. Hence, existing comments on this type of bug acted as reminders rather than primers.
ACKNOWLEDGMENTS
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 642954. A. Bacchelli gratefully acknowledges the support of the Swiss National Science Foundation through SNF Project No. PP00P2_170529.
A Scalable Multiagent Platform for Large Systems
Juan M. Alberola, Jose M. Such, Vicent Botti, Agustín Espinosa and Ana García-Fornes
Departament de Sistemes Informàtics i Computació
Universitat Politècnica de València Camí de Vera s/n. 46022, València (Spain)
{jalberola,jsuch,vbotti,aespinos,agarcia}@dsic.upv.es
Abstract. A new generation of open and dynamic systems requires execution frameworks that are capable of being efficient and scalable when large populations of agents are launched. These frameworks must provide efficient support for systems of this kind, by means of an efficient messaging service, agent group management, security issues, etc. To cope with these requirements, in this paper, we present a novel Multiagent Platform that has been developed at the Operating System level. This feature provides high efficiency rates and scalability compared to other high-performance middleware-based Multiagent Platforms.
Keywords: Multiagent Platforms, Multiagent Systems, Evaluation.
1. Introduction
In the last decade, due to the rapid growth of the Internet, the speed of change, and an ever greater amount of easily accessible information, the next generation of Multiagent Systems (MASs) and information technology will target open and large systems. In these dynamic and heterogeneous environments, it is essential that features such as security, high performance, scalability, and interoperability are provided by application development frameworks.
Even though current Multiagent Platforms (MAPs) support the development and execution of MASs, very few real applications have been developed that focus on open and dynamic systems. These applications change quickly and require features such as reliability, scalability, and performance, which not many MAPs are designed to offer. According to [25], agent researchers should design and implement large software systems consisting of hundreds of agents and not only systems composed of a few agents. In order to develop such systems, researchers require efficient and scalable MAPs.
Some current MAPs are not suitable for executing complex systems because their designs are not oriented to improving efficiency and scalability issues. Previous studies have demonstrated a degradation in the performance of current MAPs as the system grows [51, 22]; some MAPs even fail [49]. Our main objective for this paper is to propose a MAP that is focused on being scalable and efficient. One of our main design decisions is to use the operating
system (OS) services to develop this MAP instead of using middlewares between the OS and the MAP. In [14] we proved that this can noticeably improve the performance and scalability of the system.
Functionality is another important issue when executing large systems. Works by other researchers such as [20] are helpful in determining the main requirements for designing a MAP. By using theoretical proposals and methodologies [27], a MAP that supports agent organizations helps to simplify, structure, coordinate, and easily develop large applications composed of thousands of agents. A standard communication language is another key requirement for allowing interaction between heterogeneous entities. Support for coordinating communication is a further requirement for these systems [42]. The definition of standard speech acts that agents can use, a common ontology to describe and access services, policies associated with agent conversations, and a standard communication language are some features that should be provided. Finally, security concerns become important in large systems and must be addressed if these systems are open, in order to protect the communications and the identities of each entity. As stated by other authors in [45], these features should be provided by agent execution frameworks.
Towards these goals, in this paper, we present a MAP that is oriented to fulfilling the requirements for this new kind of systems. This MAP is mainly focused on scalability and efficiency for executing large MASs. It provides mechanisms to support agent organizations, security concerns (authentication, authorization, and integrity), a standard language of communication for information representation, conversation-oriented interactions, and so on.
The rest of the article is organized as follows. Section 2 presents the motivation and the previous work that allowed us to design and develop an efficient and scalable MAP. Section 3 gives an in-depth description of the MAP architecture. Section 4 details the services offered by the MAP. Section 5 describes how agents in this MAP are represented. Section 6 describes a tourism service application that is built on this MAP. Section 7 presents a performance evaluation of the MAP. And finally, in Section 8, we present some concluding remarks.
2. Motivation and previous work
In the last few years, many researchers have focused on testing the performance of existing MAPs. One of the main properties tested in these works is the performance of the MAPs for sending messages. Vrba [51] presents an evaluation of the messaging service performance of four MAPs. From the tests presented in that paper, the author concludes that Jade [19] provides the most efficient messaging service compared to FIPA-OS [1], Jack [3], and ZEUS [12]. However, the design features that produce this performance are not given and the implementations of the messaging service for each MAP are not detailed. Therefore, these conclusions can only be valid to choose the MAP that performs better than the other three MAPs tested. Burbeck et al. [22] tested the messaging service performance of three MAPs. They claim that Jade performs better
than Tryllian [11] and SAP [9] because it is built on Java RMI\(^1\), but they give no proof to confirm this claim. As these works state, Jade is more scalable than other MAPs and can be considered a stable MAP for large systems [40]. However, these conclusions do not provide any clue to MAP developers about how to improve MAP designs, since these experiments only scale up to 100 pairs of agents and a few hosts. A more thorough study is required to assess MAP performance and to determine to what extent design decisions influence it.
Some other works have tested the performance of other services, but only for a single MAP. Most of these works test Jade, which seems to be the most widely used MAP. In [25], the authors tested the Jade messaging, agent creation, and migration services. The tests that they performed on the messaging service only scale up to eight agent pairs. In [17], an evaluation of a MAS for adapting an application's behaviour was carried out on the Jade MAP. The work presented in [26] tested the scalability and performance of the Jade messaging service. Similar to the works cited above, their conclusions do not provide any design decision. Even though these conclusions can allow MAS developers to check whether or not Jade fulfills their requirements when designing a MAS, they do not suggest any design decision for MAP developers.
There are also other works that focus on testing the performance of a specific MAS that is running on top of a MAP. In [23] the performance of MAPs is measured when a MAS composed of several web agents is launched. This MAS provides documents requested by a user agent. The authors measured the number of documents requested per unit of time. Therefore, their conclusions are only valid for this MAS. Lee et al. [37] present a MAS in which agents coordinate with each other to carry out tasks. They evaluate how the topological relations between agents affect the number of CPU cycles needed to accomplish these tasks. In [28], the authors compare the response time and the CPU cycles of SACI [13] and Jade.
Finally, other studies focus on detailing the functional properties of MAPs. In [20], four MAPs are compared according to several criteria: implementation languages, tools provided, agent deliberation capabilities, etc. Shakshuki [46] presents a methodology to evaluate MAPs based on several criteria such as availability, environment, development, etc. Similar work is carried out by Nguyen [33], and Omicini [43] gives a brief overview of the evolution of MAPs. In other works such as [34, 44], different MAPs that are intended to be scalable are proposed; however, no empirical evaluation is carried out. These works provide ratings of the properties offered by MAPs in order to help users choose a MAP according to their needs. Our work goes a step further since it is intended to be useful not only for MAP users but also for MAP developers.
A general conclusion of works that focus on MAP evaluation is that MAP performance decreases as the system grows. Furthermore, as we showed in a previous work [49], when large-scale MASs are taken into account, the performance of many MAPs is considerably degraded as the size of the executed system increases, causing some MAPs to even fail. Therefore, current MAPs are not suitable for executing systems with large populations because their designs are not aimed at improving efficiency and scalability.

\(^1\) [http://java.sun.com/docs/books/tutorial/rmi/index.html](http://java.sun.com/docs/books/tutorial/rmi/index.html)
In order to develop a design in accordance with our goals, we detail other previous works that we carried out that focus on finding design decisions that influence MAP performance. In [41], we presented experiments to link performance with internal MAP designs, that is, to identify the key design decisions that lead to better performance. We extracted some conclusions from these experiments, such as the fact that centralizing services in a single host of the MAP degrades the performance causing this host to become a bottleneck in the case of very popular services. It is more suitable to design a distributed approach with efficient information replication mechanisms. In [16], we tested several issues of the MAPs, such as the performance of the directory service proposed by FIPA [2], the memory consumed by the agents and the MAP, the network occupancy rate, the CPU cycles, etc. According to these studies, the most influential point in the MAP performance that could become a bottleneck is the messaging service. This service is crucial in the performance of the MAP since agents need to exchange messages with other agents and access MAP services. Furthermore, some MAPs (such as Jade) base other MAP services (such as the Agent Directory or Service Directory proposed by FIPA) on the messaging service. In [14], we specifically analyzed technologies for implementing the Message Transport System (MTS), which is the component of the MAP that manages the message exchanges among the agents running on the MAP. This work showed that in order to design a messaging service that can handle large agent populations, the design that performs better should be based on direct communication between each pair of agents so that the messaging service scales better and performs more efficiently, especially in these sorts of scenarios.
In the following sections, we present in more detail a MAP focused on being scalable and efficient. It has been developed using the services offered by the OS to support MASs efficiently. By bringing the MAP design closer to the OS level, we can define a long-term objective: to incorporate the agent concept into the OS itself in order to offer a greater abstraction level than current approaches.
3. Magentix Multiagent Platform architecture
The Magentix\(^2\) MAP aims to be scalable and efficient, mainly when executing large-scale MASs. To achieve a response time close to the achievable lower bound, this MAP has been developed using the services provided by the OS. Thus, one of the design decisions is that this MAP is written in C over the Linux OS. Current approaches for developing MAPs are based on interpreted languages like Java or Python. These MAP designs are built over middlewares like the Java Virtual Machine (JVM) [21]. Although these middlewares offer some advantages like portability and easy development, MAPs developed over them do not perform as well as one might expect, especially when they are running large systems. In [14] we presented a performance evaluation related to this issue. We proved there that using the Operating System (OS) services to develop a MAP, instead of using middlewares between the OS and the MAP, noticeably improves the performance and scalability of the MAP. Thus, we can see the MAP functionality as an extension of the functionality offered by the OS.

\(^2\) Magentix can be downloaded from http://gti-ia.dsic.upv.es/sma/tools/Magentix/index.php
The Magentix communication service has been developed to offer high performance. This service is crucial to the performance of the MAP, as we stated in Section 2, and some other services may be implemented on top of it. Magentix also provides advanced communication mechanisms such as agent groups, a manager to execute interaction protocols, and a security mechanism providing authentication, integrity, confidentiality, and access control. This design has been developed to provide the functionality required by MASs while performing efficiently.
Magentix is a distributed MAP composed of a set of computers executing the Linux OS (figure 1). Magentix uses replicated information on each MAP host to achieve better efficiency. Each of these computers presents a process tree structure; the initial design of this structure is presented in [15]. The process tree management offered by Linux, together with services like signals, shared memory, execution threads, and sockets, provides a suitable scenario for developing a robust, efficient, and scalable MAP.
The structure of each Magentix host is a three-level process tree. At the top level is the main process. This process is the first one launched on any host when that host is added to the MAP. Below it is the services level. Magentix provides several services to support agent execution: the Agent Management System (AMS), the Directory Facilitator (DF), and the Organizational Unit Manager (OUM). Services are represented by means of service agents replicated on every MAP host. Agents representing the same service manage replicated information and communicate with each other in order to keep this information updated. Finally, on the third level, user agents are placed. Using this process tree structure, the main process fully manages the service agents, i.e., it can kill any service agent to achieve a controlled shutdown of the MAP, and it also detects at once when any service agent dies. In the same way, the ams agent has broad control over the user agents of its own host.
Each user agent is represented by a different Linux child process of the ams agent running on the same host. This design decision was taken after the efficiency tests mentioned in Section 2. Mapping agents one-to-one onto Linux processes provides us with complete execution control (as we will see in the next section) and a fast message-exchange mechanism. It could be argued that using a single virtual machine to execute agents represented as Java threads would be lighter. Nevertheless, such a virtual machine could become overloaded when running three or four thousand agents because of its inherent limitations. In our proposal, mapping agents onto Linux processes restricts us only to the limitations of the OS and allows us to run more than seven thousand agents on a single host. Developing a MAP using the OS services directly allows us to improve the efficiency of the system.

Fig. 1. Platform structure: Agent Management System (AMS), Directory Facilitator (DF), Organizational Unit Manager (OUM)

The current Magentix version offers support for different Linux distributions (such as Ubuntu, Fedora, CentOS, or OpenSUSE) as well as for Mac OS. Interoperability between heterogeneous agents is achieved by means of a standard communication language representation and ontologies for service interactions.
3.1. Communication and Message Transport System
Magentix provides a message-based communication mechanism to allow interactions between agents and services. This communication mechanism aims to obtain both good efficiency and MAP scalability. As the Magentix MAP is integrated into Linux, we have examined the different alternatives available for communicating processes in an OS context [14]. In that study we analyzed the inter-process communication services provided by POSIX [10] compliant OSs, in particular the Linux OS, in order to select which of these services allows robust, efficient, and scalable MAPs to be built. As a result of the evaluation, a lower bound on the time needed to communicate process pairs (located on the same or different hosts) was obtained. In these studies, we showed to what extent the performance of a Message Transport System (MTS) degrades when its services are based on middlewares between the OS and the MAP (like the JVM) rather than directly on the underlying OS. The Magentix MTS was thus designed to be as close as possible to this time lower bound.
As we pointed out in Section 2, the messaging service design that should perform best is one based on direct communication between each pair of agents. Therefore, the communication mechanism implemented for message exchanges is carried out by means of point-to-point connections, based on TCP sockets, between pairs of processes. This mechanism enables high scalability in agent communication. Each Magentix agent has a server socket for receiving connections from other agents by means of client sockets. To carry out a new connection, an agent creates a client socket that communicates with the remote agent's server socket. Thus, Magentix agents are clients and servers at the same time.
At a lower level, Java RMI technology (used for communication in most Java-based MAPs) itself uses TCP sockets. After evaluating different alternatives, we defined the communication mechanism implemented in Magentix as point-to-point connections, based on TCP sockets, between pairs of processes. The use of the C language to develop the MAP allows us to use this technology closer to the OS level and to avoid the overhead resulting from the use of Java RMI, since the agent abstraction provided by a MAP is independent of the implementation of the underlying communication mechanism.
In our previous studies, we also checked that opening a P2P connection between a pair of agents the first time they interact, and leaving this connection open for future interactions, is much more efficient than opening a new TCP connection each time they want to interact. Therefore, two agents can keep a connection open indefinitely for exchanging messages whenever they require it. Nevertheless, the number of simultaneously open connections is limited by the OS. Therefore, each agent and service stores its open connections in a connection table. The first time an agent contacts another one, a TCP connection is established and remains open to exchange messages in the future. These connections are automatically closed when the conversation is no longer active, that is, when some time has passed since the last message was sent, according to an LRU (Least Recently Used) policy (this mechanism is described in more depth in [50]). This connection table improves communication times since an agent does not need to create a new TCP connection each time it wants to communicate with another agent.
4. Services
In this section we describe the services that are implemented in Magentix oriented to agents, services, and group management: AMS service, DF service and OUM service.
4.1. Agent Management System
The Agent Management System (AMS) service is defined by FIPA [29] and offers the white pages functionality. This service stores the information regarding the agents that are running on the MAP. The AMS service is distributed among all MAP hosts; therefore, information regarding the agents of the MAP is replicated on each host. This service is represented by ams agents running on each host of the MAP.
As we stated in Section 3, all of the agents launched on a specific host are represented by means of child processes of the \textit{ams} agent. Just as the \textit{main} process does, the \textit{ams} agent has broad control over the agents of its corresponding host. The starting and finalizing of agents is managed automatically by means of signals.
The \textit{AMS} service stores the information regarding every agent running on the MAP. This service allows us to obtain an agent's physical address (IP address and port) given its name. Because the \textit{AMS} service is distributed among all MAP hosts, each \textit{ams} agent running on each host contains the information needed to contact every agent of the MAP. Hence, the operation of searching for agent addresses is not a bottleneck, as each agent looks this information up on its own host, without needing to make any request to centralizing components. Every time an agent is started or finalized on a host, this update is replicated on each host of the MAP. Nevertheless, there is other information regarding agents that does not need to be replicated when it is updated. For this reason, the \textit{ams} agents manage two tables of information: the Global Agent Table (GAT) and the Local Agent Table (LAT).
- **GAT**: Stored in this table is the name of each agent in the MAP and its physical address, that is, its IP address and its associated port.
- **LAT**: In this table, additional information is stored, such as the agent's owner, the PID of the process that represents the agent, and its life cycle state.
The GAT is mapped onto shared memory. Every agent has read-only access to the information stored in the GAT of its own host. Each time an agent needs to obtain the address of another agent in order to communicate, it accesses the GAT without making any request to the \textit{ams} agent. Thus, we avoid the bottleneck of requesting centralizing components each time one agent wants to communicate with another. The information contained in the GAT needs to be replicated on each host to achieve better performance. Although replication mechanisms imply an overhead in the system, this overhead is small since only updated information is replicated, and these updates occur only when agents are started or die in the MAP, operations that generally occur at low frequency. Thus, the overhead resulting from replication is worthwhile in order to distribute the information and make it available on each host of the MAP. Moreover, the spatial (memory) overhead of having the same information replicated on each host is also low, because only the physical addresses of the agents are distributed (a few bytes of memory).
The information in the LAT is not replicated. Some information stored in the LAT regarding a specific agent is only needed by the \textit{ams} agent of the same host (for instance, the process PID); therefore, this information does not need to be replicated. Other information could be useful for the agents but is not usually requested (such as the life cycle state). In order to reduce the overhead resulting from replication, we divide the information regarding agents into two tables. Each \textit{ams} stores in its LAT the information regarding the agents under its management, that is, the agents that are running on the same host. If some information available to agents is needed (such as the life cycle state), the agent
has to make a request to the AMS service using the AMS service ontology. In a transparent way, these requests addressed to the AMS service are delivered to the specific *ams* running on the same host as the agent requested.
### 4.2. Directory Facilitator
The Directory Facilitator (*DF*) service offers the yellow pages functionality defined by FIPA. This service stores the information regarding the services offered by agents. The *DF* service allows agents to register the services they provide, deregister these services, and look up a specifically required service. Much like the AMS service, the *DF* service is implemented in a distributed scenario by means of agents running on each MAP host, called *df* agents. Information regarding services is also replicated in every host of the MAP.
Information that needs to be replicated is stored in a unique table called the GST (Global Service Table). This table is a list of pairs: a service offered by an agent of the MAP and the agent that offers it. In contrast to the GAT, the GST is not implemented in shared memory; therefore, only the *df* agents can access this information directly.
Agents are able to register, deregister, and look up services offered by other agents. To do these tasks, agents need to communicate with the *DF* service using the *DF* service ontology. The current functionality of the *DF* service is the one proposed by FIPA. Nevertheless, we are considering improving this service with new functionalities, such as the management of semantic information, service composition, and services offered by agent organizations, as well as extending the operations proposed by FIPA for registering, deregistering, and searching for services.
### 4.3. Organizational Unit Manager
The Organizational Units Manager (*OUM*) service provides support oriented to agent-group communication as a pre-support for agent organizations. Several research groups define theoretical proposals and methodologies to design MASs oriented to organizational aspects of the agent society [27]. In order to develop applications which use these organization-oriented methodologies, we require MAPs that support them. However, there are few MAPs which offer any kind of support related to agent organizations; among them are Jack [3], MadKit [4], and Zeus.
An agent group in Magentix is called an organizational *unit* (from now on, *unit*) and can be seen as a black box from the point of view of external agents. Units can also be composed of nested units. Agents can interact with a unit in a transparent way, i.e., from the point of view of an agent outside the unit, there is no difference between interacting with a unit or with an individual agent. Interaction between an agent and a unit is carried out by the MAP through properties specified by the user. Each unit has some properties associated to it. Just as each agent of the MAP has a unique name, each unit is identified in the MAP by its *name*. In order to make a unit reachable, the user must specify one or more agents to receive the messages addressed to the unit: these agents are called *contact agents*. The user can also specify the way in which these messages have to be delivered to the contact agents. This property is called the *routing type*, and messages addressed to the unit will be delivered to the contact agents according to one of these *routing types*:
- **Unicast**: Messages addressed to the unit are delivered to a single agent which is responsible for receiving messages. This type is useful when we want a single message entrance to the group, for example, if the group has a hierarchical structure where the supervisor receives every message and distributes them to its subordinates.
- **Multicast**: Several agents can be appointed to receive messages. When a message is addressed to the unit, it is delivered to every contact agent in the unit. This could be useful to represent an anarchic scenario, where every message needs to be known by every agent without any kind of filter.
- **Round Robin**: There can be several agents appointed to receive messages, but each message addressed to the unit is delivered to a different contact agent, selected according to a circular policy. This routing type is useful when several agents offer the same service and we want to distribute the incoming requests to avoid bottlenecks.
- **Random**: Several agents can be defined as contact agents, but each message is delivered to a single one, selected according to a random policy. As with the previous type, this is useful for distributing incoming requests, but no particular order for serving these requests is guaranteed.
- **Sourcehash**: Several agents can be contact agents, but any given message is delivered to one of them according to the host where the message sender is situated. This is a load-balancing technique.
Units have a defined set of agents which make up the unit, called *members*. These agents can interact and coordinate with each other and each one plays a certain *role*. Finally, each unit has a *manager* associated to it. This agent is responsible for adding, deleting or modifying the members and contact agents. By default it is the agent which creates the unit and is the only one allowed to delete it.
All of this information regarding units in Magentix is managed by the *OUM* service, which stores it in the *GUT* (Global Unit Table). As with the previous services, the *OUM* is a distributed service composed of *oum* agents running on each MAP host. The *GUT* table is replicated and synchronized on each host of the MAP every time an update is made. Interaction between agents and the *OUM* service is carried out by sending messages using the *OUM* service ontology.
### 4.4. RDF as a Framework for Representing Information
To develop large systems, a standard communication language is a key requirement for allowing interaction between heterogeneous entities. FIPA proposes several Agent Communication specifications regarding the language used for message exchange in a MAP [30]. These standardize the structure of an Agent Communication Language (ACL) message to ensure interoperability, as well as Content Language (CL) specifications for representing the content of ACL messages. The use of standard specifications is vital in order to allow interoperability between the heterogeneous agents which could compose an open system, as well as to define standard ontologies for accessing the MAP services.
Resource Description Framework (RDF) is a language for representing information about resources on the World Wide Web. By generalizing the concept of a “Web resource”, RDF can also be used to represent information about things even when they cannot be directly retrieved from the Web [6]. RDF is based on the idea of identifying things using Web identifiers (called Uniform Resource Identifiers, or URIs), and describing resources in terms of simple properties and property values. The underlying structure of any expression in RDF is a collection of triples, each consisting of a subject, a predicate and an object. The subject can be any resource, the predicate is a named property of the subject and the object denotes the value of this property. A set of such triples is called an RDF graph. RDF also provides an XML-based syntax (called RDF/XML [7]) for recording and exchanging these graphs. RDF is intended for situations in which this information needs to be processed by applications, rather than only being displayed to people. RDF provides a common framework for expressing this information so it can be exchanged between applications without loss of meaning.
Due to the features of RDF and its widespread use in MAS [18, 24, 35, 36], an RDF-based framework for managing information has been designed for Magentix and has been integrated into it. It allows a Magentix agent to manage all of its information as RDF models (RDF graphs). Moreover, Magentix itself uses the framework for the messages exchanged, for representing the information that Magentix services manage, for interacting with the Magentix services and for storing MAP events.
The framework is based on offering an API for RDF management. Of course, we did not implement RDF support from scratch; the framework is designed as a wrapper for existing RDF management libraries and is aimed at simplifying the use of RDF inside a Magentix agent. There are several libraries that deal with RDF models. However, because of the Magentix features, i.e., the fact that it is implemented in the C language and is focused on achieving high levels of efficiency, we have chosen the Redland libraries [8].
Redland is a set of free software C libraries that provide support for RDF. The authors of Redland claim that it is portable, fast, and has no known memory leaks. It allows the manipulation of RDF graphs, triples, URIs, and literals. It is implemented efficiently in C, providing in-memory storage as well as persistent storage with many databases (Berkeley DB, MySQL, etc.). We use the RDF/XML syntax to serialize the RDF graphs, but Redland also supports other syntaxes, such as N-Triples or Turtle (Terse RDF Triple Language). Queries can be carried out with SPARQL or RDQL.
One of the Magentix functionalities where RDF is used is message representation. Agents and services use message sending to communicate with each other, as we said in section 3.1. FIPA defines the structure of an Agent Communication Language (ACL) message and also defines the use of RDF to represent the message content [31]. The message header and message content in Magentix are represented as RDF models serialized as XML. Some MAPs (such as Jade) use this kind of serialization to represent the message content only, just as FIPA proposes. Magentix uses RDF to represent the whole message. Therefore, only one parser is needed, which simplifies the parsing and serializing of a message.
In our view, representing FIPA-ACL using RDF should be standard, but currently it is not, so interoperability with other FIPA-compliant MAPs is compromised. A simple gateway that directly translates between both representations can be added to solve this problem. Figure 2 shows an example of a Magentix message. It is an RDF graph in which resources are drawn as ellipses and literals are drawn as squares. As can be observed, all of the FIPA-ACL fields are mapped as RDF properties describing a message resource. The content of the message can also be seen as an RDF sub-graph inside the main RDF graph representing the message. Therefore, any information that a Magentix agent holds as an RDF graph can be added to or retrieved directly from a message.
Regarding the representation of information about Magentix services, an ontology for interacting with them has been defined using the Web Ontology Language (OWL) [5]. The ontology mainly focuses on describing the resources that the services manage (hosts, agents, services, organizational units, etc.). All of the information is thus treated without taking implementation concerns into account, so that a change in the implementation does not affect the way the services treat the information. Moreover, the services can store all of their information directly and simply in a database.
In order to achieve rich and flexible interactions between agents and Magentix services, the ontology also includes actions that can be requested from a Magentix service (asking the AMS to create a new agent, registering a service with the DF, asking the OUM to create a new organizational unit, etc.). Therefore, any Magentix agent that knows the ontology can interact with Magentix services and also manage all of the related knowledge using the framework provided.
### 4.5. Security Model
The Magentix MAP has a security model [48, 47] based on both the Kerberos protocol and Linux OS access control. This model provides Magentix with authentication, integrity, and confidentiality. By means of this model, each agent has an identity which it can prove to the rest of the agents and services in a running Magentix MAP.
Magentix agents can have three identity types:
**Fig. 2.** Magentix Message represented in RDF
- **Agent** identity. Its identity as an agent. This identity is created by the AMS when the agent is created.
- **User** identity. The identity of its owner, i.e., the identity of the user that created the agent.
- **Unit** identity. The identity of each unit that the agent is in.
An agent always has at least its **Agent** identity and its owner's **User** identity. Since a Magentix agent thus holds more than one identity, the Magentix communication module needs a way of knowing which Kerberos credentials to use when sending a message. This is done with a new field in the message header. If this field is present in the header of a message to be sent, the communication module tries to use the chosen identity; otherwise, the corresponding agent identity is used. If the Kerberos credentials associated with the requested identity are not available, for instance because the agent is trying to use an identity that it does not own, the sending of the message fails.
Magentix services are based on information replication in each host. In order to check the integrity of this information and protect it from being accessible to non-authorized users, service communication needs to be secured. To do so, the administrator creates a **principal** (the unique name of a user or service allowed to authenticate using Kerberos) for each service, with a random key that is saved by default in `/etc/krb5.keytab`. This file is secured using Linux OS access control and can only be accessed by the root user, so Magentix services have to run with root privileges.
When a service requires communication with another service, a security context is established as a client with the principal of the MAP administrator and as a server with the principal of the destination service. Using this security context the information sent is encrypted and a message integrity code is calculated. Therefore, the client is sure that the destination service is the service expected. Moreover, the destination service knows that it is being contacted by a service with the administrator’s identity, so the destination service will serve all of the requests it receives. Thus, only MAP services can exchange information with each other.
Securing agent communication is similar to securing service communication, but each agent uses the identity that the *ams* agent created for it when establishing a security context, allowing secure interaction between agents.
In order to make efficient use of security contexts, a context cache has been added to each agent. This cache stores the security context associated with each destination agent. It is not related to the connections cache, so when a connection with an agent is closed, the associated security context is not lost.
## 5. User Agents
Agents in Magentix are represented as Linux processes. Internally, every agent is composed of Linux threads: a single thread for executing the agent tasks (main thread), a thread for sending messages (sender thread), and a thread for receiving messages (receiver thread). The *ams* agent manages the creation and deletion of the user agents launched on the same host. The GAT is shared between the *ams* agent and these user agents, so accessing the physical address of any agent of the MAP is fast and does not become a bottleneck. Agents have free read access to the GAT; thus, searching for the address of any agent registered in the MAP is efficient.
Magentix provides a template for developing agents written in C++. We provide different methods to manage the agent execution life cycle as well as message sending and reception. Furthermore, agent developers can extend this model to include other requirements. Interaction with services is easily carried out by means of a specific API. Interactions among agents are focused on conversations. An agent can be interacting with several agents or services at any time. Each interaction between two agents can be represented as a pattern of communication in which some messages are exchanged between the participants. These patterns can be predefined or not, but there is always an initial message and a final message. The entire set of messages exchanged between two participants represents a conversation. Magentix provides two functionalities for managing conversations: mailboxes and conversation managers.
### 5.1. Mailboxes
Mailboxes are used to improve the management of incoming messages. An agent is able to interact simultaneously with several agents; in these scenarios, the possibility of distributing the incoming messages into different message queues, depending on the conversation each message belongs to, becomes interesting. By default, every agent has a unique Mailbox called DEFAULT_MAILBOX, which receives every message addressed to the agent.
Magentix allows agent developers to create new Mailboxes and, later, associate a conversation identifier with them. Then, when a message with this conversation identifier (represented by the conversation_id field of the message) is received, the message is routed to the corresponding Mailbox. This functionality allows messages to be filtered and split according to this field, so that agent developers can easily distribute the different conversations an agent is involved in among different Mailboxes. A Mailbox is not restricted to receiving only the messages of a specific conversation identifier, since several identifiers can be associated with the same Mailbox. The basic functionality an agent developer needs to bear in mind when working with Mailboxes is creating new Mailboxes and then associating them with conversation identifiers. When an agent checks the incoming message queue, it specifies which Mailbox it wants to check. Figure 3 shows the internal structure of a Magentix agent.
**Fig. 3.** Magentix Agent
### 5.2. Conversation Manager
Interactions between Magentix agents are focused on conversations. Thus, it is important for us to support not only the searching and sending of messages to other agents but also the easy reproduction of typical conversation patterns that can appear in a wide variety of scenarios. An agent is able to communicate with several agents simultaneously. Every interaction between a pair of agents very often requires the exchange of more than one message. Moreover, message exchange patterns are usually repeated in several interactions between agents, e.g., to access some service, to request information, or to send proposals to different agents. Thus, defining communication patterns to specify which message exchanges are allowed in a specific interaction proves to be an interesting and useful feature for agent developers.
FIPA defines standard interaction protocol specifications that agents can use in their conversations with other agents [32]. These specifications deal with pre-agreed message exchange protocols for ACL messages. Magentix provides support for executing these protocols defined by FIPA; therefore, agent developers can easily reproduce these interaction scenarios without needing to consider the sequence of exchanged messages, the possible failures in the execution of the protocol, and so on. The agent developer only has to specify what to do when each event of the protocol takes place, and the protocol will automatically be checked and executed by Magentix.
Interaction protocols are defined by FIPA using UML diagrams. Figure 4 shows the FIPA-request protocol as an example. In these protocols there are two roles, initiator and participant, which exchange some possible message sequences. We translate this representation into Magentix as finite state machines: each interaction protocol has a finite state machine associated with each possible role of the protocol. Figure 5 shows the FIPA-request protocol for the initiator role. Each finite state machine has these properties:
- A *not-created* initial state. This is the first state of every protocol.
- Transitions, which allow the execution of the protocol to advance depending on the messages received (represented by performatives such as *refuse* or *agree*), or λ-transitions, which take the protocol execution forward to the next state.
- Intermediate states for representing the intermediate steps of the protocol execution.
- A *delete* state. This is the last state of every protocol.
In order to process these interaction protocols, we define a conversation manager. A conversation manager is an internal entity within a Magentix agent which has one or more interaction protocols associated with it. When an agent uses one of these protocols in its conversations with other agents, its conversation manager is in charge of automatically managing it and ensuring the correct execution of the protocol, executing each step and transition. Several conversation managers can be assigned to a single agent, each one in charge of managing different interaction protocols. This decision depends on the agent developer, who can run more conversation managers or stop them according to their needs. The conversation manager is an abstraction that hides the low-level details of conversations (making sure messages are exchanged, mailbox management, etc.) from the agent developer, who only has to specify what to do in each step of the protocol, easily allowing the concurrent execution and management of several conversations. We are now working on extending the conversation management functionalities. In particular, we want to facilitate the specification of any interaction protocol that agent developers could require, beyond those predefined by FIPA.
## 6. The Tourism Service Application
In this section, we present a real application developed in Magentix which uses some of the features provided. In order to test the performance of a MAP focused on large systems, we require examples that are aimed at large scale and are as realistic as possible. The Tourism Service application [39] is a MAS that allows users to find information about places of interest in a city according to their preferences (restaurants, movie theaters, museums, theaters, and other places of general interest such as monuments, churches, beaches, parks, etc.) by using their mobile phone or PDA. Once a specific place has been selected, the tourist can make a reservation at a restaurant, buy tickets for a film, etc. Our research group has been working with a partnership developing MAS-based recommender systems for tourists.
There are four different agent types in the application. A SightAgent manages all of the information related to the features and activities for a specific place of interest in the city. A UserAgent allows tourists to interact with the system by means of a GUI on their mobile devices. A BrokerAgent mediates between UserAgents and SightAgents. It also manages updated information about the SightAgents registered on it. Finally, a PlanAgent manages all of the planning processes in the system. The application offers search, reservation, planning, and registration services. The Search service is offered by the BrokerAgent and can be requested by a UserAgent. The result of the invocation of this service is a list of descriptions of places that match user preferences. The Reserve service is offered by a SightAgent and can be requested by a UserAgent. The result of this service is the confirmation of a successful reservation or an error message. The “Plan a Specific Day” service is provided by the PlanAgent and can be requested by a UserAgent. The result of this service is a plan consisting of a list of places or activities.
We have implemented this application using the Magentix MAP with RDF support. The implemented ontology is represented in RDF and gives detailed descriptions of tourist places, information about scheduling, etc. For example, the information about restaurants represents issues related to menus, cuisine, ingredients, and so on.
UserAgents can be implemented as Magentix agents in the MAP or by means of an interface that is implemented using the J2ME (Java 2 Micro Edition) specification. In the latter case, UserAgents have to make HTTP requests to a GatewayAgent, which acts as a gateway between UserAgents and the rest of the system. This GatewayAgent is implemented as a Magentix agent, which includes a micro-http server. This mechanism allows the interaction between Magentix agents and external agents.
## 7. Large Scale Evaluation of the Messaging Service
In this section, we present different experiments to evaluate the messaging service of Magentix, based on the application presented in Section 6. As we stated in Section 2, this service is crucial when developing systems with large agent populations and high message traffic. In [16], we presented a testbed for MAP performance evaluation. These tests focused on evaluating different parameters of the MAP on one and two hosts: the message traffic, the message size, the registered services, the searched services, the CPU consumption of the threads, the memory consumption, the network traffic, etc. According to these tests, the main bottleneck in MAP performance is the messaging service. These conclusions have also been confirmed by other authors, who claim that other parameters such as CPU cycles do not reach saturation in large-scale environments [28]. Based on these conclusions, in [49] we presented a set of large-scale benchmarks to test the messaging service. The experiments shown here are based on these benchmarks and are adapted to the Tourism Service Application presented in Section 6.
We compare Magentix against Jade, which is a well-known MAP and more scalable than other MAPs, as we stated in Section 2. Since the initial implementation of the Tourism Service Application was in Jade [38], we can determine the performance of the messaging service of both MAPs by simulating different scenarios in this domain. We used 20 PCs with an Intel(R) Core(TM) 2 Duo CPU @ 2.60GHz and 2GB RAM, running Ubuntu 10.10 with Linux kernel 2.6.35. The computers were connected to each other via a 100Mb Ethernet hub.
The first experiment is aimed at testing the MAPs' performance when both the number of agents and the message traffic increase. It measures the capability of the MAP when messages are sent to different agents. As an example, this situation can occur when a BrokerAgent requests different SightAgents. We simulate this scenario by launching several groups of BrokerAgents and SightAgents. The objective of each BrokerAgent is to send a message to the first SightAgent on its list, which sends back the same message. After that, each BrokerAgent sends a message to its corresponding SightAgent placed in the next host and waits for the response. The experiment measures the time elapsed between when the first message is sent by the first BrokerAgent and when the last message is received by the last BrokerAgent. The experiment started with 100 agents in the system, increasing to 1000. The number of messages sent by each BrokerAgent was set to 1000.

**Fig. 6.** Experiment 1: population and traffic increase
Figure 6 shows the time required for the two MAPs. The figure shows that there is a performance degradation as the number of agents and the message traffic increase. However, Magentix performance degrades less than Jade performance. As an example, it can be observed that the elapsed time in Magentix when the system is composed of 1000 agents is less than the elapsed time in Jade when the system is composed of 200 agents.
Another typical scenario is the massive amount of message-sending to a specific agent. The second experiment measures the ability of the MAPs when a lot of agents send messages to a single one. This specific agent could become a bottleneck in the system when multiple messages are addressed to it. This scenario appears, for example, when UserAgents are requesting the same BrokerAgent to retrieve information. The BrokerAgent has to serve every received request. As the number of incoming requests increases, the time for processing these requests may also increase. In order to simulate this, a single BrokerAgent agent and several UserAgents were launched. The goal of each UserAgent was to send messages to the BrokerAgent. The elapsed time between when the BrokerAgent received the first message and when it answered all the messages is shown in Figure 7. In this experiment, we increased the number of UserAgents up to 100, distributed among all the hosts. Each UserAgent sent 10000 messages.

**Fig. 7.** Experiment 2: massive sending to an agent
It can be observed that the elapsed time increases in both MAPs as the number of requests increases. However, as in the first experiment, the performance degradation is less in Magentix. The time difference between the two
MAPs gradually increases as the number of agents increases. Therefore, Magentix is also more scalable and efficient than Jade in this scenario. Note that in this scenario the receiver agent is not changed during the entire experiment.
The third experiment complements the second one. The distribution of agents in this experiment was similar. However, there were the same number of BrokerAgents as UserAgents. In this experiment, several BrokerAgents were placed in the same host and each UserAgent communicated with its corresponding BrokerAgent. The results obtained are shown in Figure 8. It can be observed that the results for Jade are similar to the results for the second experiment. This is due to the way that Jade implements communication among all the MAP hosts. Therefore, the bottleneck is caused by the message transport system and not by the way the message queue is managed by the agent itself. In contrast, the performance in Magentix in the third experiment is slightly better than in the second one.
**Fig. 8.** Experiment 3: host massive sending
The fourth experiment checks the limits of the MAPs. This experiment provides a different perspective from the previous experiments in which the receiver agents are predefined. This may give rise to different bottlenecks, showing another typical scenario in real systems, in which some agents may be more requested than others. In order to simulate this, several BrokerAgents were placed in 10 hosts of the MAP and several UserAgents were placed in the other 10 hosts. Each UserAgent had to send 1000 messages to a non-predefined BrokerAgent. Thus, the specific BrokerAgent was randomly selected before sending each message. This caused some BrokerAgents to be more
overloaded than others. Furthermore, in this experiment, the number of agents was increased to 2000, in order to overload the MAPs.

**Fig. 9.** Experiment 4: random requests
It can be observed in Figure 9 that Magentix offers better performance than Jade, and the difference increases as the traffic increases. The figure also shows that both MAPs present higher response times than in the first experiment, in which the traffic was equally distributed among all the BrokerAgents. This is due to the fact that, in this fourth experiment, the message load is not spread evenly over all of the receiver agents launched. Since the BrokerAgent for each message is selected randomly, some BrokerAgents may have to serve a lot of messages while others are idle. Therefore, as the second experiment indicates, Jade performs quite badly when a single agent receives a lot of messages. As a result, the performance differences with respect to the first experiment are much higher in Jade than in Magentix.
From the results of these tests, we can conclude that Magentix improves the efficiency and scalability of the messaging service provided by Jade, the most commonly used MAP, and that Magentix is more scalable than other MAPs. In these tests, we have simulated four typical scenarios in order to determine the efficiency and scalability of the Magentix and Jade MAPs. These tests represent critical situations, so that the degree of performance improvement achieved can be seen more clearly. Although we scale up to 20 hosts in these tests, the conclusions obtained can be extended to at least 100 hosts according to the results shown in [49].
8. Conclusions
The next generation of technologies aims to provide features such as distribution, interoperability, scalability, support for organizations, service orientation, openness, and geographic dispersion. MASs can contribute to these environments through new applications that are more autonomous and social from the point of view of the MAS field.
MAPs have traditionally been used as support frameworks to facilitate the development of these kinds of systems. A lot of MAPs have been developed in the last few years; unfortunately, however, very few real MAS-based applications have appeared, probably because the support frameworks did not fulfill all of the requirements. In order to support the new generation of systems (in line with the latest trends in rapidly expanding technologies), new MAP designs should focus on interoperability, scalability, and support for large-scale systems as some of their key features.
In this paper, we have presented the Magentix MAP. Since its design is closer to the OS level, the MAP is efficient, especially when running large systems. Basic services such as an agent directory service, a service directory service, and a messaging service are provided by Magentix. We have implemented and tested the performance of this MAP. Magentix also provides a group-oriented communication mechanism. This mechanism allows communication between individual agents as well as interaction among groups of agents. When considering large systems, security concerns become an important issue and a necessary feature when these systems become open. Magentix has a security model that is based on the Kerberos protocol and Linux OS access control, which provides authentication, integrity, and confidentiality. In order to achieve interoperable systems, we represented the information using RDF. This framework has been widely used in MAS for different purposes. Magentix represents messages to be exchanged in RDF so that agents can easily manage the information that is sent and received. Ontologies defined in OWL have also been used to interact with services.
Using a tourism service application, we have shown how Magentix can be used as a support framework to develop MAS-based applications. The messaging service evaluation shown in this paper demonstrates that a MAP design that uses the OS services provides greater efficiency and scalability than other high-performance middleware-based MAPs such as Jade.
With the features provided by Magentix we can establish the next objective of the project: to provide Magentix with support for open MAS. We are working on the development of an http-based gateway at MAP level, in order to allow the interaction between Magentix agents and agents developed in other MAPs. Virtual organizations where agents dynamically enter and exit the system and form groups could also be created in Magentix.
Acknowledgments. This work has been partially supported by CONSOLIDER-INGENIO 2010 under grant CSD2007-00022, and projects TIN2011-27652-C03-01 and TIN2008-04446. Juan M. Alberola has received a grant from Ministerio de Ciencia e Innovación de España (AP2007-00289).
References
5. OWL Web Ontology Language Overview. http://www.w3.org/TR/owl-features/
6. RDF. http://www.w3.org/TR/rdf-primer/
10. Standard for information technology - portable operating system interface (POSIX)
A Scalable Multiagent Platform for Large Systems
Juan M. Alberola is a PhD student at the Departament de Sistemes Informàtics i Computació of the Universitat Politècnica de València. His interest areas include agent organizations, adaptation, multiagent platforms, case-based-reasoning and electronic markets.
Jose M. Such is Lecturer in the School of Computing and Communications at Lancaster University (UK). He was previously research fellow at Universitat Politècnica de València (Spain), by which he was awarded a PhD in Computer Science in 2011. He is mostly interested in the following research topics: Privacy, Security, Trust, Reputation, Multi-agent Systems, and Artificial Intelligence.
Vicent Botti is Full Professor at the Universitat Politècnica de València (Spain) and head of the GTI-IA research group of the Departament de Sistemes Informàtics i Computació. He received his Ph.D. in Computer Science from the same university in 1990. His research interests are multi-agent systems, agreement technologies, and artificial intelligence, where he has more than 200 refereed publications in international journals and conferences. Currently he is Vice-rector of the Universitat Politècnica de València.
Agustín Espinosa is Lecturer at the Departament de Sistemes Informàtics i Computació at the Universitat Politècnica de València and a researcher at the GTI-IA Research Group of the Universitat Politècnica de València. His research interests include multiagent systems, agent architectures, agent platforms, agent frameworks, and real-time agents. He received his Ph.D. in Computer Science from the Universitat Politècnica de València, Spain in 2003.
Ana García-Fornes is a Professor at the Departament de Sistemes Informàtics i Computació of the Universitat Politècnica de València. Her interest areas include: real-time artificial intelligence, real-time systems, development of multi-agent infrastructures, tracing systems, operating systems based on agents, agent organizations, and negotiation strategies.
Received: October 29, 2011; Accepted: October 8, 2012.
Automatic Fault Location for Data Structures
Vineet Singh
University of California, Riverside, USA
vsing004@cs.ucr.edu
Rajiv Gupta
University of California, Riverside, USA
gupta@cs.ucr.edu
Iulian Neamtiu
New Jersey Institute of Technology, USA
ineamtiu@njit.edu
Abstract
Specification-based data structure verification is a powerful debugging technique. In this work we combine specification-based data structure verification with automatic detection of faulty program statements that corrupt data structures. The user specifies the consistency constraints for dynamic data structures as relationships among the nodes of a memory graph. Our system detects constraint violations to identify corrupted data structures during program execution and then automatically locates the faulty code responsible for the data structure corruption. Our approach offers two main advantages: (1) a highly precise automatic fault location method, and (2) a simple specification language. We employ incremental constraint checking for time-efficient constraint matching and fault location. On average, while the Tarantula [18] statistical debugging technique narrows the fault to 10 statements, our technique narrows it to \( \approx 4 \) statements.
Categories and Subject Descriptors D.2.4 [Software Engineering]: Software/Program Verification—Reliability, Validation; D.2.5 [Software Engineering]: Testing and Debugging—Tracing; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs—Specification techniques
General Terms Languages, Debugging, Verification
Keywords data structure error, memory graph, constraint checks, fault location
1. Introduction
Faulty code often leads to data structure corruption. Heap-allocated data structures can be easily modeled using memory graphs [30], in which heap allocations and the pointers between heap elements correspond to nodes and edges, respectively. The structural definition of a data structure can be expressed via consistency constraints that describe the allowed relationships among the nodes of the memory graph. The data structure errors that violate these constraints can be detected and located by evaluating the constraints.
As an example, consider the widely-used memory allocator in the GNU C library (glibc) that maintains a doubly-linked list to track free memory chunks. The structural consistency constraints of this list are often violated by bugs in client programs (e.g., heap overflow bugs) leading to a program crash. Examples of bugs in popular software that do exactly the above include: Kate (KDE bug #124496), Kdelibs (KDE#111176), Kooka (KDE#111169), Open office (Open office#77015), GStreamer (GNOME#343652), Doxygen (GNOME#625051), Rhythmbox (GNOME#636322), Evolution (GNOME#338994, GNOME#579145) [1–3].
Motivated by the above observations, we have built a system that (1) allows the user to specify the consistency constraints for dynamic data structures as relationship rules among the nodes of a memory graph, (2) automatically detects any violation of these rules at runtime, and (3) locates the faulty code. Designing such a system is challenging because data structure invariants are routinely broken, albeit temporarily, during operations on the data structure. For example, the structural invariant for a doubly-linked list, if element \( e \) points to element \( e' \) then \( e' \) should point back to \( e \), is violated temporarily during the insertion of a new node in the list. The presence of temporary constraint violations makes it crucial for the fault location system to be able to differentiate between a constraint violation caused by a fault and a temporary legitimate violation. Moreover, the fault location system must deliver ease of use and runtime efficiency.
Our system provides support for specification-based fault location via two main constructs. First, a simple yet effective specification language for writing consistency constraints for data structures. Second, a directive for specifying **C-points**, i.e., program points where our system will check at runtime whether the constraints are satisfied; at such points, the data structure is supposed to be in a consistent state with respect to the provided specification. **C-points** are akin to transactions and hence emerge naturally, e.g., at the beginning and end of functions that modify the data structure. **C-points** allow us to detect data structure corruption early, before it gets a chance to turn into further state corruption or a crash. In addition, if the program crashes then the crash point is used as a **C-point**. Our technique can work even with a single **C-point**, while more **C-points** imply greater precision.
As the program executes, our system traces the evolution history of the data structures. Once a constraint violation is detected, it identifies the corrupted data structures and the set of inconsistencies. It traces back through the evolution history, searching for program points where inconsistencies were introduced and collecting a list of faulty program statements. We use optimizations based on temporal and spatial knowledge of the data structures to narrow down the search space, making the trace back efficient. We also use a compact memory graph representation and employ an incremental constraint matching algorithm to make the technique space and time efficient. Although specification-driven detection of data structure errors has been proposed before [12], our approach offers two main advantages:
1. **Highly Precise Fault Location**: our system offers more than just error detection, as it locates the execution point and program statement that caused a rule violation. Prior work in handling data structure errors has been limited to detecting the erroneous program state; after detection, the user must find the fault manually. For example, Archie [14] localizes the error to the region of the execution between the first call to the constraint checker that detects an inconsistency and the immediately preceding call. This compels the user to insert frequent time-consuming consistency checks and locate the error manually. When compared to dynamic slicing and the Tarantula statistical debugging technique, our approach is much more effective.
2. Simple Yet Expressive Specification: the design of our specification language makes it easier for the user to specify the rules. Prior works have proposed specification languages that are rich but complex. For example, in the approach by Demsky and Richard [12], specifying a doubly-linked list takes 14 lines of specification code whereas in our approach it takes just 4 lines. Moreover, our language is powerful so it allows specifying data structures such as AVL trees, red-black trees, that other languages do not.
In addition, we employ a space efficient unified memory graph representation [34] from which the memory graph at any prior execution point can be extracted. It also allows constraint checks to be performed incrementally.
Our implementation uses the Pin dynamic instrumentation framework [25] for instrumenting Linux executables for the IA-32 architecture. We have evaluated the efficiency and effectiveness of our techniques; we now highlight the results. Our specification language allows constraints for widely-used data structures to be defined using just 2–9 lines of specification code. Experiments show that our approach narrows down the fault to 1–10 statements, which is much smaller than the results of dynamic slicing and the Tarantula statistical technique. Fault location is also performed efficiently: following a substantial execution, constraints are checked typically in a second or less, and faults are localized in less than 2 minutes. Finally, our incremental checking optimizations substantially reduce the time and memory cost associated with checking. This allows users to perform more frequent and finer-grained checks, accelerating fault detection.
Our contributions include:
• An error detection and fault location system that, given a data structure consistency specification, automatically detects data structure errors, and upon detection, traces errors back to their source.
• A new data structure consistency constraint specification language that allows the user to easily and concisely express structural consistency properties.
• Data structure-specific trace back. Our fault location system identifies corrupt data structures out of multiple program data structures and uses this information for efficiently locating faulty code.
• An incremental method of checking consistency constraints that uses information from the previously performed checks to reduce time and memory overhead.
2. Overview of Our Approach
In this section we provide an overview of our approach and highlight its key features using an example.
The example is centered around quad trees—a widely used data structure for spatial indexing. A quad tree is a tree with internal nodes having four children and the data stored at the leaf nodes. Thus one of the key structural consistency constraints for this data structure is: for any internal element e, the number of children is four. In Figure 1 we show example code that creates and manipulates quad trees, and contains a bug which leads to a violation of the consistency constraint. The quad tree definition (lines 2–7) contains nine fields: the first five fields store data about the node, while the next four fields, child[4], point to children in the quad tree. The main function reads coordinates (x,y) from a file and populates the tree
```
1 struct pt { int x, y; };
2 struct qdtree {
3   int posX, posY;
4   int width, height;
5   struct pt * point;
6   struct qdtree * child[4];
7 };
8 struct qdtree * root;
9 int main()
10 {
11   while (fscanf(file, "%d%d", &x, &y) != EOF) {
12     insert(x, y, root);
13   }
14 }
15 void insert(int x, int y, struct qdtree * root)
16 {
17   ##C-POINT(qdtree)
18   if (root == NULL) {
19     root = (struct qdtree *)malloc(sizeof(qdtree));
20     temp = create_point(x, y);
21     root->point = temp;
22   } else {
23     n = search(x, y, root);
24     if (n->point != NULL) {
25       split(x, y, n);
26     }
27   }
28 }
29
30 void split(int x, int y, struct qdtree * node)
31 {
32   struct qdtree * temp;
33   struct qdtree * parent_node;
34   for (i = 0; i < 4; i++) {
35     temp = (struct qdtree *)malloc(sizeof(qdtree));
36     set_node_fields(temp, i, node);
37     node->child[i] = temp;
38     if (x == node->child[i]->posX && y == node->child[i]->posY) {
39       ...
40       parent_node->child[i] = NULL;
41     } else {
42       move_value(node, node->child[i]);
43       assign_value(x, y, node->child[i]);
44     }
45   }
46 }
```
Figure 1. Faulty Quad Tree implementation.
```
qdtree FIELD 9 EDGE 5;
/* # total fields and # pointer fields */
qdtree X;                      /* c1 */
X.ISROOT == FALSE ⇒ X.INDEGREE == 1;
qdtree X; qdtree Y;            /* c2 */
X → Y ⇒ Y ≠ X;
qdtree X; qdtree Y;            /* c3 */
X → Y ⇒ X.OUTDEGREE == 4;
```
Figure 2. Consistency constraints of a Quad Tree.
by calling the insert function. In insert, we first search for an existing node that is suitable for coordinates (x,y). If the resulting node n already has a point stored in it, then four children of n are created, one for each quadrant. The point at node n and the newly-read point are then inserted into the quad tree rooted at n. The statement at line 40 in function split which sets an edge to NULL is faulty.
(The memory-graph snapshots shown in the third column of Figure 3 are not reproduced here.)

| Execution trace | Inconsistencies |
| --- | --- |
| 1 temp = (struct qdtree *)malloc(sizeof(qdtree)); | |
| 2 set_node_fields(temp, 1, node); | |
| 3 node->child[1] = temp; | |
| 4 move_value(node, node->child[1]); | |
| 5 assign_value(x, y, node->child[1]); | |
| 6 ++i; | |
| \*\*\* Execution Point 1 | S₁ = { ⟨c₃, {n₅ → n₆}⟩, ⟨c₃, {n₅ → n₇}⟩ } |
| 7 temp = (struct qdtree *)malloc(sizeof(qdtree)); | |
| 8 set_node_fields(temp, 3, node); | |
| 9 node->child[3] = temp; | |
| \*\*\* Execution Point 2 | S₂ = { ⟨c₃, {n₅ → n₆}⟩, ⟨c₃, {n₅ → n₇}⟩, ⟨c₃, {n₅ → n₈}⟩ } |
| 10 parent_node->child[0] = NULL; | |
| 11 ++i; | |
| \*\*\* Execution Point 3 | S₃ = { ⟨c₃, {n₅ → n₆}⟩, ⟨c₃, {n₅ → n₇}⟩, ⟨c₃, {n₅ → n₈}⟩ } |
| 12 temp = (struct qdtree *)malloc(sizeof(qdtree)); | |
| 13 set_node_fields(temp, 3, node); | |
| 14 node->child[3] = temp; | |
| 15 move_value(node, node->child[3]); | |
| 16 assign_value(x, y, node->child[3]); | |
| \*\*\* Execution Point 4 (C-point) | S₄ = { ⟨c₃, {n₁ → n₃}⟩, ⟨c₃, {n₁ → n₄}⟩, ⟨c₃, {n₁ → n₅}⟩ } |
**Figure 3.** Memory Graph at different program points.
| Execution Point | Operation examined | Pending Inconsistencies | Relation |
| --- | --- | --- | --- |
| Execution Point 4 (C-point) | | S₄ = {e₁, e₂, e₃}, where e₁ = ⟨c₃, {n₁ → n₃}⟩, e₂ = ⟨c₃, {n₁ → n₄}⟩, e₃ = ⟨c₃, {n₁ → n₅}⟩; P = {e₁, e₂, e₃} | FS (FaultyStatements) = ∅ |
| Execution Point 3 | O₃: n₅ → child[3] = n₉; | P = {e₁, e₂, e₃} | O₃ → e₃ = FALSE |
| Execution Point 2 | O₂: n₁ → child[0] = NULL; | P = ∅; FS = {O₂} | O₂ → e₃ = TRUE |
**Figure 4.** Fault location on Figure 1.
**Specification of Consistency Constraints.** Our specification language enables the developer to express the data structure constraints directly in terms of the relationships among heap elements, which makes the specifications compact and their writing intuitive. In prior approaches [12, 14], the developer must first convert the data structure definition into a high-level model and then express the constraints in terms of that model. This makes writing data structure specifications complex and error-prone.
Figure 2 specifies the consistency constraints for a quad tree in our language. The user specifies the structure of each node in the memory graph by declaring a node type (*qdtree*) that contains 9 data fields and 5 pointer fields, (FIELD 9) and (EDGE 5); the constraints are specified next. The constraint specification involves declaring node variables and specifying the relationships among them. For the quad tree we specify three constraints in terms of variables X and Y of node type *qdtree*. Constraint $c_1$ states that every node besides the root has an indegree of 1. Constraint $c_2$ states that if there is a path from node X to node Y, then there cannot be a path from node Y to node X. Constraint $c_3$ states that if a node X points to another node Y (i.e., node X is internal), then the outdegree of X is 4. We have chosen simple constraints for ease of understanding; our language can handle a variety of complex constraints (as explained in Section 4).
**Tracing data structure evolution history.** We execute the program and trace the evolution history of the program data structures using binary-level dynamic instrumentation. The instrumentation is independent of the constraints specified. We only instrument the allocation/deallocation calls and memory writes to the allocated memory. Once the program execution completes, normally or because of a program crash, the traced information is used to construct the memory graph.
**Fault Location.** We match the specified constraints over the memory graph at the program points corresponding to C-points. The memory graph must be in a consistent state with respect to the specified constraints at these program points. Once we encounter a consistency constraint violation, fault location begins by tracing back through the program execution. We analyze the effect of each operation on the memory graph on the inconsistencies present in it. The statements corresponding to the operations contributing to the inconsistencies are added to the list of potentially faulty statements.
Our system performs consistency checks on the program memory graph at the beginning and at the end of function insert (lines 17 and 28), indicated by C-points. In Figure 3, the first column shows an execution trace of the code in Figure 1, while the second and third columns contain the set of constraints violated at selected program execution points and the corresponding quad tree. Note that execution point 4 is a C-point, and there are multiple violations of constraint c3 from Figure 2 at this point. The set of violations is represented by the set \( S_4 \), and each violation is described in the form of a tuple \( \langle C, G \rangle \), where G is the sub-graph over which constraint C is violated. In the Figure 3 example, the constraint violations include: \( \langle c_3, \{n1 \rightarrow n3\} \rangle \), \( \langle c_3, \{n1 \rightarrow n4\} \rangle \), and \( \langle c_3, \{n1 \rightarrow n5\} \rangle \). We also perform consistency checks at earlier execution points, computing the violations into sets \( S_i \), where i is the execution point. The fault location algorithm finds the operations contributing to the inconsistencies in \( S_4 \) by examining the \( S_i \)'s.
In our example, the fault location algorithm determines that, at program point 3, the operation of setting n1 \( \rightarrow \) child[0] to NULL introduces all the inconsistencies present in the set \( S_4 \). The statement corresponding to this operation is output as a faulty statement. Our algorithm also determines that none of the inconsistencies in \( S_4 \) are related to inconsistencies in the earlier sets, and therefore there is no need to search further for faulty statements. Note that there are other inconsistencies present at different program points that are not related to the inconsistencies at C-points. For example, at program point 3, the inconsistency \( \langle c_3, \{n5 \rightarrow n8\} \rangle \) is temporary and is ignored in the search for faulty statements.
**Optimizations.** We have optimized our fault location system in terms of both memory and time costs. For fault location, we need the memory graph at each execution point so that the constraint violations at each execution point can be detected. To avoid saving memory graphs at all execution points, we employ a unified memory graph representation [34] that combines the memory graphs at all execution points into one and distinguishes subgraphs via the association of timestamps with graph components (nodes and edges). Thus, portions of the graph that do not change across many execution points are stored only once. Given the unified memory graph representation at program point P, the memory graph for any earlier program point can be reconstructed using the process of rollback (explained in Section 5). We also employ an incremental algorithm which avoids redundant constraint evaluations over unchanged parts of the memory graph.
### 3. Fault Location
Our fault location algorithm is based on the observation that there are two kinds of inconsistencies in the memory graph (MG): temporary inconsistencies, which are removed prior to reaching C-points, and error inconsistencies, which are present at C-points and are caused by faulty statements. The algorithm identifies the statements responsible for the error inconsistencies. We first define the key concepts and then present the algorithm.
**Definition 1.** An inconsistency \( e \) at execution point i is a tuple \( \langle c_j, G_i \rangle \) where \( c_j \) is a constraint that is violated when evaluated over \( G_i \), a subgraph of the memory graph at execution point i.
**Example 1.** Consider \( \langle c_3, G_4 \rangle \) at execution point 4 in Figure 3 where \( c_3 \) is
```plaintext
quadtree X; quadtree Y;
X → Y => X.OUTDEGREE == 5;
```
and \( G_4 = \{n1 \rightarrow n3\} \). Then \( \langle c_3, G_4 \rangle \) is an inconsistency at execution point 4 because \( c_3 \) is violated when \( X = n1 \) and \( Y = n3 \).
Note that there can be multiple inconsistencies corresponding to the same constraint violation as the constraint check can fail over multiple sub-graphs.
Consider the execution of an operation O, and let \( before_O \) and \( after_O \) denote the execution points just before and just after the execution of O. Next we give the conditions under which inconsistencies at \( before_O \) and \( after_O \) are related (denoted by ‘\( \rightarrow \)’) to each other, and the conditions under which O is considered to be a potentially faulty operation.
**Definition 2.** An inconsistency \( \langle c_j, G_a \rangle \) at execution point \( after_O \) is said to be related to (denoted by ‘\( \rightarrow \)’) an inconsistency \( \langle c_j, G_b \rangle \) at execution point \( before_O \) if the operation O has modified the arguments to the constraint check of \( c_j \) over \( G_a \) as well as the arguments to the check of \( c_j \) over \( G_b \).
**Example 2.** In Figure 3, the inconsistency \( e_1 = \langle c_3, \{n5 \rightarrow n8\} \rangle \) at program point 2 (after) is related to \( e_2 = \langle c_3, \{n5 \rightarrow n6\} \rangle \) at program point 1 (before) because the operation of setting of edge for n5 to n8 modifies n5.OUTDEGREE which is an argument to check \( c_3 \) over the subgraph for both \( e_1 \) and \( e_2 \). In other words, \( e_1 \rightarrow e_2 \) is TRUE.
**Definition 3.** An operation O is said to have contributed to (denoted by ‘\( \leftarrow \)’) an inconsistency \( \langle c_j, G_a \rangle \) at program point \( after_O \) if the arguments to the constraint check of \( c_j \) over subgraph \( G_a \) at program point \( before_O \) are not equal to the arguments to the constraint check of \( c_j \) over subgraph \( G_a \) at program point \( after_O \).
**Example 3.** The operation \( n1 \rightarrow \)child[0] = NULL (statement 10) in Figure 3, contributes to the inconsistency \( e = \langle c_3, \{n1 \rightarrow n3\} \rangle \) present in row 3 because it has modified \( n1.OUTDEGREE \) which is an argument to check \( c_3 \).
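Definitions 2 and 3 can be sketched as set operations over the memory-graph arguments that an operation writes and that a constraint check reads. The attribute names and argument sets below are illustrative, not taken from the implementation.

```python
# Illustrative sketch of Definitions 2 and 3 as set intersections.
def contributed(op_writes, check_args):
    # Definition 3: O contributed to <c, G> if it wrote some argument
    # read by the check of c over G.
    return bool(op_writes & check_args)

def related(op_writes, args_after, args_before):
    # Definition 2: the inconsistency at after_O is related to the one
    # at before_O if O wrote arguments of both checks.
    return bool(op_writes & args_after) and bool(op_writes & args_before)

# Example 2 rephrased: creating edge n5 -> n8 writes n5.OUTDEGREE,
# which is read when checking c3 over {n5 -> n8} and over {n5 -> n6}.
writes = {"n5.OUTDEGREE"}
args_after = {"n5.OUTDEGREE"}
args_before = {"n5.OUTDEGREE"}
```

With these predicates, "O is faulty for e" reduces to `contributed`, and the backward propagation of pending inconsistencies reduces to `related`.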
Our fault location algorithm is presented in Algorithm 1. It begins by initializing the set of pending inconsistencies with the inconsistencies at the C-point (line 4). The system rolls back the memory graph one operation at a time (line 7) and checks whether the rolled-back operation contributed to any of the pending inconsistencies (line 10). The statement corresponding to a contributing operation is output as faulty (line 11). Inconsistencies in the new MG which are related to the pending inconsistencies are added to the pending set (line 14). Any pending inconsistencies no longer present in the MG are removed from the pending set (line 18). When the set of pending inconsistencies becomes empty, the algorithm stops. Note that our algorithm only considers operations contributing to inconsistencies as faulty, rather than marking every statement that modifies the inconsistency subgraph as faulty; this helps increase precision.
Figure 4 illustrates the fault localization algorithm for the fault in Figure 1. The set of pending inconsistencies \( P \) is initialized with the inconsistencies at execution point 4 (the C-point): \( P = \{e_1, e_2, e_3\} \). The operation \( O_3: n5 \rightarrow \)child[3] = n9 is not faulty because it does not contribute to any of the inconsistencies in \( P \), i.e., it does not modify the arguments of any of the inconsistencies present in \( P \). The
**Algorithm 1 Fault Location**

\( P \): set of pending inconsistencies;
\( S_i \): inconsistencies in \( MG_i \) at execution point \( i \);
\( O_i \): operation on \( MG \) performed at execution point \( i \);
\( \text{Check\_Constraints}(MG, C) \): checks the constraints in \( C \) over \( MG \) and returns the set of inconsistencies found;
\( \text{Roll\_Back}(MG_{i+1}) \): rolls back \( MG_{i+1} \) by one operation.

**INPUT:** memory graph \( MG_e \) at execution point \( e \) for a C-point, and constraint specification \( C = (c_1, \ldots, c_n) \).

3: Fault_Location():
4: \( i \leftarrow e \); \( P \leftarrow \text{Check\_Constraints}(MG_e, C) \)
5: do {
6: \( i \leftarrow i - 1 \)
7: \( MG_i \leftarrow \text{Roll\_Back}(MG_{i+1}) \)
8: \( S_i \leftarrow \text{Check\_Constraints}(MG_i, C) \)
9: for each \( e \in P \) do
10: if \( e \leftarrow O_i \) then
11: Output(\( O_i \))
12: for each \( e' \in S_i \) do
13: if \( e \rightarrow e' \) then
14: \( P \leftarrow P \cup \{e'\} \)
15: end if
16: end for
17: if \( e \notin S_i \) then
18: \( P \leftarrow P \setminus \{e\} \)
19: end if
20: end if
21: end for
22: } while \( P \neq \emptyset \)
next operation \( O_2 : n1 \rightarrow \text{child[0]} = \text{NULL} \) contributes to the inconsistencies \( \{e_1, e_2, e_3\} \) in \( P \) and hence corresponds to a faulty statement. Thus, \( O_2 \) is added to the \( FS \) set. None of the inconsistencies in \( S_2 \) are related to those in \( P \); hence no new inconsistencies are added to \( P \). The inconsistencies \( \{e_1, e_2, e_3\} \) are removed from \( P \), causing it to become empty, and the search for faulty statements terminates. In other words, \( O_2 \) is identified as faulty.
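The walk-through above can be captured in a compact executable sketch of Algorithm 1, under the simplifying assumption that the per-point inconsistency sets \( S_i \) and the contributed/related relations are given explicitly rather than computed by rollback and constraint matching:

```python
def fault_location(e, incons, ops, contrib, related):
    """Backward search from C-point e. incons[i] is the set S_i,
    ops[i] names the operation O_i, and contrib/related are explicit
    stand-ins for the '<-' and '->' relations of Definitions 2-3."""
    pending = set(incons[e])          # inconsistencies at the C-point
    faulty = []
    i = e
    while pending and i > 0:          # roll back one operation per step
        i -= 1
        for inc in list(pending):
            if (ops[i], inc) in contrib:        # O_i contributed to inc
                if ops[i] not in faulty:
                    faulty.append(ops[i])       # output faulty operation
                for inc2 in incons[i]:
                    if (inc, inc2) in related:  # pull in related ones
                        pending.add(inc2)
                if inc not in incons[i]:        # gone after rollback
                    pending.discard(inc)
    return faulty

# Figure 4 scenario: O3 touches nothing pending; O2 introduced e1-e3.
incons = {4: {"e1", "e2", "e3"}, 3: {"e1", "e2", "e3"}, 2: set(),
          1: set(), 0: set()}
ops = {3: "O3", 2: "O2", 1: "O1", 0: "O0"}
contrib = {("O2", "e1"), ("O2", "e2"), ("O2", "e3")}
result = fault_location(4, incons, ops, contrib, related=set())
```

The search stops as soon as the pending set empties, mirroring the termination condition on line 22 of Algorithm 1.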
**Identifying corrupted data structures.** Matching a constraint against the entire program memory graph would lead to false positives, because a data structure constraint may be violated when it is evaluated over other, unrelated data structures. To avoid this problem we track the identity of the data structure associated with each memory graph node during program execution. Using the data structure identity, we evaluate only the constraints relevant to the memory graph node involved, avoiding false positives. Furthermore, knowing the identity of the corrupted data structure helps us during trace back for faults, as we limit our search to the corrupted data structure instead of the whole program memory graph.
### 4. Constraint Specifications
Before introducing our constraint specification language, it is important to mention the following apparent alternatives and explain why we have not used them for our system:
Archie [14] uses constraint-based specification for data structure repair. The specification contains a model the data structure must satisfy. When the model is violated, Archie repairs the data structure to satisfy the model. In our system, the model of the constraints is already fixed in terms of the memory graph. Specifying the model again puts significant extra burden on the programmer.
Alloy [17] is a rich object modeling language for expressing high-level design properties. In comparison, our language is centered around logical, arithmetic, layout and graph constraints at the data structure level.
Programming languages can be used to specify constraints (e.g., repOK [24]). However, constraints written this way need to be executed along with the program and are not useful to us, since we need to match constraints during trace back. Writing constraints in programming languages is also verbose and error prone.
Our language is based on the same principles as the aforementioned ones but is designed specifically for the purpose of specifying data structure constraints for debugging. Focusing on this specific problem makes our language simpler. In our approach, the relevant program state at an execution point is captured by the memory graph, and consistency constraints for a data structure are specified in terms of relationships among nodes and edges of the memory graph. We provide the user with a C-like syntax, so the language is easy to use with minimal learning requirements. In this section we first define our constraint specification language and then demonstrate that it is both expressive and simple to use: we can handle a variety of data structures with equal or less burden on the programmer in comparison to other languages.
The memory graph, at each point in the execution, consists of nodes corresponding to allocated memory regions and the edges are formed by pointers between the allocated memory regions. Each node (representing an allocation) has fields corresponding to the fields of the data structure for which the memory was allocated. The structure of the memory graph corresponds to the shape of the data structure; hence violations in data structure constraints can be detected by evaluating those constraints for the memory graph.
Our specification language is designed to provide an easy way to express the structural form of the memory graph for a data structure. In other words, the programmer simply expresses how the data structure can be visualized in memory, which makes specification writing very intuitive. Specifying data structure constraints involves three steps. The first step is specifying the types of nodes in the memory graph. The second (optional) step is specifying any special node attributes which may be involved in the constraints. The third step is specifying the constraints using variables of the declared types. The grammar of our specification language consists of three corresponding components: structure, model, and constraints (Figure 5).
Structure specification. The nodes in a memory graph can correspond to an array or a structure. Structures are defined in terms of the number of fields and edges present in the memory graph nodes. The structure specification declares the types of the memory graph nodes present in the specifications. The specification of the quad tree in Figure 2 shows that each node has 9 fields and 5 edges.
Example 4. The structure specifications of B-tree and AVL-tree are:
- struct btree{ int count; int key[2]; struct btree * child[3];}
btree FIELD 6 EDGE 3;
- struct avltree{ int val; struct avltree * right, * left;}
avltree FIELD 3 EDGE 2;
Table 1. Standard node attributes.
<table>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Represents</th>
</tr>
</thead>
<tbody>
<tr>
<td>n.INDEGREE</td>
<td>INT</td>
<td>Indegree of the node n</td>
</tr>
<tr>
<td>n.OUTDEGREE</td>
<td>INT</td>
<td>Outdegree of the node n</td>
</tr>
<tr>
<td>n.EXTERNAL</td>
<td>BOOL</td>
<td>(n.OUTDEGREE == 0)</td>
</tr>
<tr>
<td>n.INTERNAL</td>
<td>BOOL</td>
<td>(n.OUTDEGREE != 0)</td>
</tr>
<tr>
<td>n.ISROOT</td>
<td>BOOL</td>
<td>(n.INDEGREE == 0)</td>
</tr>
<tr>
<td>n.ISLEAF</td>
<td>BOOL</td>
<td>(n.OUTDEGREE == 0)</td>
</tr>
</tbody>
</table>
Attributes and model specification. We provide several node attributes (shorthands) that simplify the task of writing the specifications and making them concise. Table 1 contains the list of provided node attributes along with their meaning. Standard attributes are valid for any type declared in the structure specification.
When the standard attributes are not adequate, user-defined node attributes are introduced via the model part of the specification language in Figure 5(b). User define node attributes are specific to a node type. Specifying a custom node attribute involves declaring the name (h) of the node attribute along with the node type it is associated with (f). The declaration is followed by the rules for assigning the attribute value for each node. Assignment rules (r) consist of guard (g), terminal assignment (a), and non-terminal assignment (a). A guard is a precondition that, when true, leads to terminal assignment otherwise non-terminal assignment is followed.
Assignment statement is assignment of an arithmetic expression to the node attribute. Note that the user-defined attributes can only be used for acyclic data structures. Therefore, when such attributes are used, our implementation performs an acyclicity check because bugs may lead to formation of cycles in data structures that are supposed to be acyclic. The model specification allows the user to create node attributes corresponding to real world node properties and constraints can be specified in terms of these node properties.
Example 5. The height of an AVL-tree node is specified as:
```plaintext
avltree.HEIGHT; avltree X;
```
Similarly, for a red black tree with structure
```plaintext
− struct rbtree { int color; struct rbtree * right, * left; }
```
the black height derived from the right child is specified as:
```plaintext
rbtree.BHEIGHT; rbtree X;
```
Constraint specification. Our language allows the user to write both inter-node and intra-node constraints. Inter-node constraints, defined via the grammar in Figure 5(c) are composed of declarations(d), guard (optional), and body(g). A guard is a precondition that must be true in order for the constraint to be applicable. The body is composed of one or more constraint statements joined by the boolean operator AND. Three types of constraint statements are allowed: boolean(be), arithmetic(ac), and connection(ec). Connection statements indicate the following: X → Y (edge allowed), X ↦ Y (edge not allowed), X → Y (path allowed), and X ↛ Y (path not allowed).
Let us consider the constraint specifications for the B-tree data structure, shown in the top part of Figure 6. The first two constraints ensure that the structure represents a tree and are thus the same for all trees (including the Quad-tree shown earlier). Constraint 1 uses a guard (X.ISROOT == FALSE) to identify non-root nodes and require that their indegree be 1. Constraint 2 uses a guard to state that if there is a path from X to Y then there is no path from Y to X. The additional constraints in the specification of the B-tree follow. Constraints 3 and 4 restrict the outdegrees of internal nodes in the B-tree, while constraint 5 ensures that the number of children is 1 + the number of stored keys (the value stored in the first field of the B-tree structure).
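Constraint 1's guarded form can be sketched as a plain graph check; the adjacency-dictionary encoding below is an illustration, not the system's internal representation.

```python
# Illustrative check of Constraint 1: every non-root node (indegree
# != 0) must have indegree exactly 1 in a tree-shaped memory graph.
def indegree_violations(graph):
    indeg = {n: 0 for n in graph}
    for dests in graph.values():
        for d in dests:
            indeg[d] += 1
    # guard: X.ISROOT == FALSE (indegree != 0); body: indegree == 1
    return {n for n, k in indeg.items() if k not in (0, 1)}

# n4 has two parents, so it violates the tree constraint.
g = {"n1": ["n3", "n4"], "n2": ["n4"], "n3": [], "n4": []}
```

Guards keep roots (indegree 0) out of the check, exactly as the X.ISROOT guard does in the specification.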
Example 6. The balanced height constraint for AVL-tree is expressed using the user-declared node attribute HEIGHT below.
```plaintext
Example 6. The balanced height constraint for AVL-tree is expressed using the user-declared node attribute HEIGHT below.
```plaintext
```
The above examples illustrate that our language is powerful enough to express the constraints embodied by commonly used data structures and at the same time is intuitive for the programmer to use. While we have shown only non-nested data structures, nested structures can be handled by flattening structure fields.
**Intra-node constraints.** Intra-node constraint specifications are aimed at handling array-based implementations of data structures. Our language supports expressing relationships between array elements. Intra-node constraints, defined via the grammar given in Figure 5(c) are composed of declarations(\(d\)), range(\(q\)), and body(\(b\)). A range gives the \(min\) to \(max\) values of node field index (I) on which the constraint will be applicable. The body of the constraint is a relational expression in terms of the value of the field in question.
**Example 7.** Consider the shard graph representation [22], implemented as an array storing 8 entries (each entry has a source node, source value, edge value, and destination node). The constraint that the source nodes should be ordered is represented as:

```plaintext
ARRAY shard;
shard X;
for I in 0 to 8, X[I*4 + 1] < X[(I+1)*4 + 1];
```
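Such a range constraint can be sketched directly; the entry layout and field offset below are illustrative and differ from the exact indexing in the specification above.

```python
# Illustrative intra-node check: within a flat array of fixed-size
# entries, the chosen field must be strictly increasing across entries.
def field_ordered(arr, entry_size=4, field=0):
    n = len(arr) // entry_size
    return all(arr[i * entry_size + field] < arr[(i + 1) * entry_size + field]
               for i in range(n - 1))

shard = [1, 0, 0, 0,  2, 0, 0, 0,  5, 0, 0, 0]   # source fields 1, 2, 5
```

This is the whole idea of intra-node constraints: the subgraph is a single array node, and the constraint ranges over its field indices.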
Note that we do not specify the types of structure fields in our specification language. The user can specify a constraint to check that the type of a field matches the implementation. Following is an example of type checking for a linked list.
**Example 8.** Consider a linked list implementation using the following structure
```c
struct node{ int value; struct node * next;}
```
Following is the constraint specification for checking the type-safety of the `next` field.
```c
node FIELD 2 EDGE 1;
node X; node Y;
X[2] != NULL => (X[2]) == Y;
```
The constraint statement states that if the `next` field of node X is not equal to NULL, it must point to another node Y.
**Comparison with Archie.** We compared specifications in our language with those written in Archie [14] for several data structures; Table 2 summarizes the results, i.e., the number of statements required to express various data structures. The table shows the compact and expressive nature of our specifications in comparison to Archie. The empty entries indicate that Archie is not able to express three of the data structures. One of the data structures that Archie cannot specify is the AVL tree, for which our specification was shown earlier. Archie cannot handle *global constraints*, e.g., that the difference in heights between the left and right subtrees of an AVL tree node should not be more than 1. Our language allows the user to specify global constraints via user-defined node attributes.

Table 2. Specification size comparison.
<table>
<thead>
<tr>
<th rowspan="2">Data Structure</th>
<th colspan="2">Number of Statements</th>
</tr>
<tr>
<th>Ours</th>
<th>Archie [14]</th>
</tr>
</thead>
<tbody>
<tr><td>Circular Linked List</td><td>3</td><td></td></tr>
<tr><td>Doubly-linked List</td><td>2</td><td></td></tr>
<tr><td>Binary Tree</td><td>3</td><td></td></tr>
<tr><td>Binary Heap</td><td>4</td><td></td></tr>
<tr><td>B-tree</td><td>6</td><td></td></tr>
<tr><td>Quad Tree</td><td>4</td><td></td></tr>
<tr><td>AVL Tree</td><td>6</td><td></td></tr>
<tr><td>Red-Black Tree</td><td>9</td><td></td></tr>
<tr><td>Leftist Heap</td><td>5</td><td></td></tr>
<tr><td>Full K-ary Tree</td><td>4</td><td></td></tr>
</tbody>
</table>
### 5. Optimizations

Our system is optimized to reduce memory and time overhead. First, we employ a compact memory graph representation: using this representation, one memory graph \(MG\) stores the program state at every program point from 0 to \(t\). Second, we use incremental constraint checking to reduce constraint matching overhead. Third, we employ prediction to reduce the time spent in fault location. Next we briefly describe these techniques.
**Memory Graph Representation.** Memory graphs [30] have been used in prior approaches to facilitate program understanding and detect memory bugs. We employ a unified memory graph representation [34] for capturing the evolution history of the memory graph and hence that of the data structure(s) it represents. From the memory graph at a given program execution point, we can derive its form at all earlier execution points. The graph provides mappings between changes in the memory graph and the source code statements that caused the changes, to assist with fault location. A memory graph \(MG = (V, E)\) is defined as follows:
- \(V\) is a set of nodes such that each node \(v \in V\) consists of \((T_v; S_v; H_v)\), where \(H_v\) is the set of heap addresses \(\{h_1^v, h_2^v, \ldots\}\), in ascending order, that the node represents. \(T_v\) is the timestamp at which the node was created, i.e., an integer which marks the order of events in the memory graph. The timestamp is initialized at the start of memory graph construction and is incremented with each change to the graph. \(S_v\) is the source code statement which led to the creation of the node \(v\); the statement is identified by its location in the source code, i.e., `<file name:line number>`. The memory graph also contains data nodes which hold scalar information and are used to show the value stored at a heap address. An edge from a heap address to a data node implies that the data value in the data node is stored at the heap address.
- \(E\) is a set of directed edges such that each edge \(e \in E\) is represented as \(h_i^a \rightarrow v\) and has a label \(\langle T_e; S_e \rangle\), where \(h_i^a\) is the \(i^{th}\) heap address inside node \(a\) and stores a pointer to a heap address of node \(v\); \(T_e\) is the timestamp at which the edge was created; and \(S_e\) is the source code statement that created the edge. An edge may also point to a data node that corresponds to non-heap data, or to NULL. A heap address may point to different nodes at different execution points. The memory graph captures all the corresponding edges, and the edge with the largest timestamp represents the current outgoing edge.
Figure 7(b) shows the unified MG that results from executing the Figure 3 program from point 1 to point 4. Each allocation site, in this case each tree element, corresponds to a node in the memory graph.
Each node consists of a list of heap addresses; we omit them in Figure 7 for simplicity. Each node and edge has a timestamp and a statement number associated with it. The top node \( n_1 \) in the quad tree is \( \{T=1; S=7; H=\{child[0], child[1], child[2], child[3], point, width, height, posX, posY\}\} \), which means the node was created at timestamp 1 by statement number 7 and has the fields child[0], child[1], child[2], child[3], point, width, height, posX, and posY. Note that in the actual memory graph these fields are heap addresses corresponding to the listed field labels. The edge from node \( n_5 \) to node \( n_6 \) has label \( \langle T=65; S=37 \rangle \), which means the edge was created at timestamp 65 by statement number 37. The timestamp information enables us to extract the memory graph of the program at any previous execution point (1, 2, and 3) from the unified MG at program point 4.
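The current-edge rule can be sketched as follows. The first edge's timestamp and statement number come from the Figure 7 discussion; the address name and the second edge are invented for illustration.

```python
# Illustrative sketch: each edge carries (timestamp, statement), and
# the current outgoing edge of a heap address at a point of interest
# is the surviving edge with the largest timestamp.
edges = [
    # (source heap address, destination node, timestamp, statement)
    ("n5.child[1]", "n6", 65, 37),   # T=65, S=37, as in Figure 7
    ("n5.child[1]", "n8", 80, 42),   # later re-assignment (invented)
]

def current_target(edges, src, at_ts):
    live = [e for e in edges if e[0] == src and e[2] <= at_ts]
    return max(live, key=lambda e: e[2])[1] if live else None
```

Keeping superseded edges around (rather than overwriting them) is what lets the unified representation answer queries about any earlier execution point.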
**Algorithm 2 Memory Graph Rollback**
1: \( V_r \leftarrow V_f \); \( E_r \leftarrow E_f \)
2: **INPUT:** memory graph \( MG_f = \langle V_f, E_f \rangle \) at timestamp \( ts_{final} \), target timestamp \( ts_t \leq ts_{final} \)
3: Graph_Reconstruct():
4: for all nodes \( v \in V_f \) and edges \( e \in E_f \) having timestamp \( > ts_t \) do
5: \( V_r \leftarrow V_r - v \)
6: \( E_r \leftarrow E_r - e \)
7: end for
8: for all heap addresses \( h \) such that an edge \( e: h \rightarrow v \) was deleted do
9: if \( h \) has an outgoing edge then
10: set the outgoing edge of \( h \) with the highest timestamp as the current edge
11: end if
12: end for
13: return \( MG_r = \langle V_r, E_r \rangle \)
Algorithm 2 gives a simplified version of the rollback process given in [34]. The procedure for rolling back the MG to a previous timestamp, i.e., obtaining the MG at timestamp \( t \) given a memory graph at timestamp \( t' \) such that \( t < t' \), has two steps. In the first step (lines 4-7), all the nodes and edges having a timestamp larger than the target timestamp are removed from the graph. In the second step (lines 8-12), for each deleted edge's source address, the remaining outgoing edge with the highest timestamp is set as the current edge.
**Incremental constraint checking.** Constraint checking is a critical operation, and containing the cost of checks on large data structures allows our approach to scale well. We reduce the cost of checking via the use of incremental on-demand checks: we keep a mapping between constraint atoms and dependent nodes (nodes involved in the constraint); when the MG is modified, we map modified nodes to affected constraint atoms and invalidate those atoms.
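The atom-to-node mapping described above can be sketched as a simple inverted index; the atom and node names are illustrative.

```python
# Illustrative sketch of incremental checking: map each node to the
# constraint atoms that read it, so a write invalidates (and forces
# re-evaluation of) only the affected atoms.
from collections import defaultdict

deps = defaultdict(set)              # node -> atoms depending on it

def register(atom, nodes):
    for n in nodes:
        deps[n].add(atom)

def invalidated(written_nodes):
    out = set()
    for n in written_nodes:
        out |= deps[n]
    return out

register("c3@n1", ["n1"])            # atom: c3 checked at n1
register("c3@n5", ["n5"])
```

Unchanged parts of the memory graph never appear in the invalidated set, so their constraint atoms are never re-evaluated.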
**Efficient traceback.** When tracing back for faults, performing rollback and constraint checking naively would significantly hurt scalability due to the high cost of constraint checking. We use two techniques to make trace back efficient. First, knowing which data structure is corrupted, we limit our constraint matching (during trace back) to the data structure in question. Second, we use modification prediction to check whether a rolled-back operation can contribute to the inconsistencies present in the pending inconsistency list. We perform the rollback and consistency check on the MG only when the prediction for the operation returns true. Modification prediction is based on spatial locality. We can predict that an operation \( O \) operating on a sub-graph \( G_o \) will not affect an inconsistency \( \langle c, G \rangle \) based on the properties of constraint \( c \) and the relationship between the sub-graphs \( G_o \) and \( G \). For example, an operation \( O \) that creates an edge \( n_1 \rightarrow n_2 \) will not affect an inconsistency \( \langle c, \{n_3\} \rangle \) where constraint \( c \) checks the value of a node field. We create a list of nodes (a dependency list) for each inconsistency \( \langle c, G \rangle \in P \) (the set \( P \) in Algorithm 1) based on constraint \( c \). Modification of a node present in the dependency list can affect the inconsistency. A check is performed if any node in the dependency list of a pending inconsistency (set \( P \)) is modified by operation \( O \). Algorithm 1 is modified to roll back across an operation only when modification prediction returns true for that operation.
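Modification prediction reduces to a set-intersection test; the dependency lists below are illustrative.

```python
# Illustrative sketch of modification prediction: an operation need
# only trigger a rollback-and-check when a node it writes appears in
# the dependency list of some pending inconsistency.
def must_check(op_nodes, pending_deps):
    # pending_deps: {inconsistency: set of nodes it depends on}
    return any(op_nodes & nodes for nodes in pending_deps.values())

pending = {("c3", "n1->n3"): {"n1", "n3"}}
```

Operations touching only unrelated nodes are skipped entirely, which is where most of the traceback savings come from.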
### 6. Evaluating Fault Location
Next we evaluate the precision and cost of our fault location technique. Our implementation consists of a binary instrumenter, a constraint matcher generator, a memory graph constructor, and a fault locator. The binary instrumentation is based on Pin-2.6 [25]; only allocation calls and memory writes are instrumented. To reduce the size of the execution trace, at runtime Pin keeps track of allocated heap addresses and outputs only the instructions which write to allocated heap addresses. The execution trace contains the timestamp information and the statement identifier for each allocation and memory write. We use the source code location of the memory allocation as a unique identifier of the data structure type for each memory graph node, as each allocation site belongs to a unique data structure. The execution trace drives the construction of the memory graph. The constraint matcher generator produces the constraint matcher based on the input constraint specifications. If an error is detected during constraint matching, the fault locator searches for the root cause and outputs a list of candidate faulty statements. Measurements were performed on an Intel Core 2 6700 @ 2.66GHz with 4 GB RAM, running Linux kernel version 2.6.32. All benchmarks were written in C.
**Precision of Fault Location.** The strength of our technique lies in its ability to carry out highly precise fault location using a single test case during which constraints are violated – note that program
Table 3. Precision and overhead of fault location.
<table>
<thead>
<tr>
<th rowspan="2">Data structure</th>
<th rowspan="2">Lines of code</th>
<th colspan="3">Statements examined</th>
<th colspan="3">Program execution time (ms)</th>
<th rowspan="2">Const. Match. time (ms)</th>
<th rowspan="2">Fault Loc. time (sec)</th>
</tr>
<tr>
<th>Ours</th>
<th>Dyn. Slice</th>
<th>Tarantula</th>
<th>Original</th>
<th>Null Pin</th>
<th>Instrumented</th>
</tr>
</thead>
<tbody>
<tr>
<td>Circular linked list</td>
<td>160</td>
<td>6</td>
<td>7</td>
<td>0.3</td>
<td>645</td>
</tr>
<tr>
<td>Ordered list</td>
<td>172</td>
<td>2</td>
<td>20</td>
<td>0.9</td>
<td>638</td>
</tr>
<tr>
<td>Doubly-linked list</td>
<td>203</td>
<td>5</td>
<td>8</td>
<td>1.6</td>
<td>672</td>
</tr>
<tr>
<td>Quad tree</td>
<td>294</td>
<td>1</td>
<td>58</td>
<td>2.6</td>
<td>716</td>
</tr>
<tr>
<td>AVL tree</td>
<td>243</td>
<td>4</td>
<td>9</td>
<td>1.6</td>
<td>739</td>
</tr>
<tr>
<td>B tree</td>
<td>405</td>
<td>6</td>
<td>8</td>
<td>1.5</td>
<td>722</td>
</tr>
<tr>
<td>Red-Black tree</td>
<td>395</td>
<td>10</td>
<td>24</td>
<td>0.4</td>
<td>661</td>
</tr>
<tr>
<td>Leftist heap</td>
<td>274</td>
<td>1</td>
<td>28</td>
<td>0.4</td>
<td>664</td>
</tr>
<tr>
<td>Bipartite graph</td>
<td>284</td>
<td>2</td>
<td>32</td>
<td>8.2</td>
<td>659</td>
</tr>
</tbody>
</table>
execution may or may not lead to a program crash. Table 3 presents the results of our fault location technique for implementations of several data structures, whose program sizes are given in the second column. In each case, the data structure was first initialized to a base size of 1,000 nodes. Next, 500 operations (inserts and deletes) were performed on the data structure, along with the random injection of 10 faults. In each case, the injected fault leads to a violation of the data structure constraints. The program crash point was used as the C-point in cases where the program crashed; the end of execution was used as the C-point in cases where the program terminated normally with wrong output. Column 3 shows the number of faulty statements our technique detected. The numbers of faulty statements range from 1 to 10, while program sizes range from 160 to 405 lines of code. This indicates that our technique narrows down the fault to a very small code region. In all cases the fault was captured by the identified faulty statements. We also computed the dynamic slices of the faulty statement instances. The fourth column shows the number of distinct program statements in the largest dynamic slice among the faulty instances' slices. These numbers show that without the knowledge of data structure constraints, the set of potentially faulty statements identified can be quite large (≥ 20 for 6 programs). The fifth column reports the number of statements that must be examined to find the faulty statement using the ranking produced by Tarantula [18]. Tarantula is a statistical technique that uses information from multiple runs on different inputs; we used 1 failing run and 9 successful runs in this experiment. The results show that our approach requires fewer statements to be examined even though it is based upon a single run, as opposed to Tarantula which used 10 runs.
**Overhead of Fault Location.** Columns 6-10 of Table 3 give the time and space overhead of our fault location implementation. The 6th column shows the execution time of the buggy program. The 7th and 8th columns show the execution times when the benchmarks run under Pin without instrumentation ("Null Pin" column) and with instrumentation ("Instrumented" column). Although the instrumentation overhead is significant, it is acceptable for debugging purposes. The 9th column shows that the time to perform constraint matching (after error detection) is typically a second or less. The 10th column shows the time for fault location, which ranged from 0.2 to 73 seconds in our tests.
### 7. Experience with Real Programs
For each application we defined consistency constraints for the main data structures. Next, we used fault injection to simulate common programming errors that lead to data structure corruption. Then, using our technique, we identified the buggy statements in the program. Table 4 shows our findings. The execution times are for the buggy versions of the programs. Column 4 gives the number of faulty statements our technique found. Our fault location technique captured the faulty statement precisely (fewer than 5 statements) in all cases. In all applications, the program crash point was used as the single, automatically-inserted C-point; hence programmer effort was limited to specifying constraints and indicating allocation sites for the data structures.
ls. GNU ls [5] lists information about files including directories. The program source code consists of 4,000 lines of C code. It uses a linked list internally to store information about the remaining directories when run in recursive mode. We inserted a bug in the code where the next pointer's value is a non-NULL non-heap address. Due to this bug the program crashes. The violated constraint here is that the next pointer needs to be either a heap address or NULL. As input, we used the GNU coreutils-8.0 source code directory. Our technique traced this fault to 4 statements.
403.gcc. A benchmark program from the SPEC CINT 2006 benchmark suite [6], 403.gcc is based on GCC version 3.2, configured to generate code for an AMD Opteron processor. The program uses splay trees, a form of binary search tree optimized for access to recently-accessed elements (splaying is the process of rotating the tree to bring a key to the root).
To inject a bug, we replaced the right rotate function of the tree with the left rotate function. We used the SPEC training input for this experiment, which leads to a program crash. The violated constraint here is the binary search tree invariant: Key(root) > Key(left child) and Key(root) < Key(right child). Our method traced the fault to a single statement.
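The violated invariant amounts to a recursive bounds check over the tree. The following is an illustrative sketch, not the paper's constraint language, with trees modeled as (key, left, right) tuples:

```python
# Illustrative check of the 403.gcc constraint: every key in the left
# subtree is smaller than the root key, every key in the right subtree
# larger, recursively. None stands for an empty subtree.

def bst_ok(t):
    def within(t, lo, hi):
        if t is None:
            return True
        key, left, right = t
        if lo is not None and key <= lo:
            return False
        if hi is not None and key >= hi:
            return False
        return within(left, lo, key) and within(right, key, hi)
    return within(t, None, None)

good = (5, (3, (2, None, None), (4, None, None)), (8, None, None))
assert bst_ok(good)
# A rotation applied in the wrong place can leave a key on the wrong side:
bad = (5, (3, None, (7, None, None)), (8, None, None))
assert not bst_ok(bad)
```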
464.h264ref. This benchmark program from the SPEC CINT 2006 benchmark suite [6] is a reference implementation of Advanced Video Coding, a video compression standard. The benchmark uses a pointer-based implementation of a multidimensional array, with each dimension being a level of a full and complete tree.
To inject a bug we set an internal pointer to NULL which caused a crash. The SPEC test input was used in this experiment. The violated constraint here, based on the OUTDEGREE attribute, is that the tree is full and complete. Our fault location method traced the fault to 1 statement.
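The full-and-complete property based on out-degree can be sketched as follows; this is an illustrative check (our own node encoding, not the paper's OUTDEGREE attribute syntax), where a violation is signaled either by a wrong out-degree (including a NULL child) or by leaves at differing depths:

```python
# Sketch of the 464.h264ref constraint: the tree is full (every internal
# node has exactly k non-null children) and complete (all leaves at the
# same depth). Nodes are dicts with a "children" list; illustrative only.

def full_and_complete(node, k):
    """Return the common leaf depth, or None if the property is violated."""
    if not node["children"]:                 # a leaf
        return 0
    if len(node["children"]) != k or any(c is None for c in node["children"]):
        return None                          # out-degree constraint violated
    depths = [full_and_complete(c, k) for c in node["children"]]
    if None in depths or len(set(depths)) != 1:
        return None                          # subtree broken or uneven leaves
    return depths[0] + 1

leaf = lambda: {"children": []}
good = {"children": [leaf(), leaf()]}
assert full_and_complete(good, 2) == 1
bad = {"children": [leaf(), None]}           # NULL internal pointer: the bug
assert full_and_complete(bad, 2) is None
```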
GNU Bison. Bison(3.0.4) [4] is a general-purpose parser generator that converts an annotated context-free grammar into a parser. The application uses a graph to store the symbol table where each node is a token or a grammar non-terminal and the node attributes are assigned accordingly.
Our bug injection simulates a programming error where wrong attributes are assigned to nodes, leading to a crash. We used the C language grammar file as input. The violated constraint here is the
---
1 The first row in Table 3 and Table 4 gives the column numbering.
Scalability of the technique. There are two sources of time overhead incurred by our technique: first, the time taken in collecting the execution trace, and second, the time taken during the trace back. We now explain why both these slowdowns are unavoidable and why we believe the overhead is acceptable. First, dynamic analysis (collecting the execution trace) is inherently slow. Table 3 column 7 and Table 4 column 6 list the cost of running the application under Pin without any instrumentation, which is itself a significant slowdown. We only instrument allocation/deallocation calls and references to allocated regions (section 6, paragraph 1). This limits instrumentation cost while still capturing data structure faults. Second, the alternative to automatically searching the program execution trace is manually narrowing down the fault by running the application multiple times. We have introduced a number of optimizations to speed up the trace back, as explained in section 5.
Programmers can use our tool for debugging larger programs by capturing the execution trace for just the relevant sections of program execution hence limiting the search space. As shown in the evaluation, our tool can handle large enough search spaces for practical debugging purposes.
Our technique is easily applicable to parallel programs, as extending it would only require modifications to the Pin-based tracing mechanism. While tracing a multi-threaded program can increase the overhead, this problem can be mitigated using selective record and replay, e.g., PinPlay [29].
8. Related Work
Our system uses a novel fault location technique and a new specification language. In this section we review previous work related to specification languages, automatic fault location, specification-based testing, and uses of memory graphs.
Fault location. Various general approaches for fault location that do not rely on data structure information have been developed (e.g., statistical techniques [7, 23, 31], dynamic slicing [21, 36], or combinations of the two [37]). Statistical techniques require running the program on a suite of test cases. Our approach is more comparable with dynamic slicing as they both perform debugging by analyzing a single program run during which a fault is encountered. Jose and Majumdar [19] use MAX-SAT solvers while Sahoo et al. [32] use dynamic backward slicing and filtering heuristics for software fault location. The focus of our work is specifically on data structure errors. Taylor and Black [35] examine a number of structural correction algorithms for list and tree data structures. Bond et al. [9] track back undefined values and NULL pointers to their origin. In contrast, our system concentrates on fault location for violation of high-level data structure properties. We consider the concept of temporary violation of high-level data structure constraints. This helps us locate errors early and trace them to faults precisely.
Specification-based error detection. A wide range of specification techniques have been used to specify correctness properties from which monitors for runtime verification of those properties are generated. For example, MOP [11] allows correctness properties to be specified in LTL, MTL, FSM, or CFG; though MOP is geared more towards verifying protocols or API sequences rather than data structures. Similarly, a specification technique for easily handling memory bugs has also been developed [36]. Our work focuses on data structure correctness and hence is closest to Archie [12] and Alloy [17]. Our data structure specification language differs from languages used by Archie and Alloy: their modeling languages let the developer specify high-level design properties in terms of a model while our language enables developers to express high-level data structure properties in terms of the memory graph of the program. This makes specification writing in our language easy and concise. Berdine et al. [8] and Chang and Rival [10] have introduced predicate-based specification languages for shape analysis. Customized versions of these languages can be used for our technique. Malik et al. [26], Juzi [15] and Demsky et al. [12, 13] use constraint-based error detection for data structure repair while Gopinath et al. [16] combine spectra-based localization with specification matching to iteratively localize faults. Malik et al. [27] proposed the idea of using data structure repair for repairing faulty code. Jung and Clark [20] applied invariant detection on memory graphs to identify the data structures used in the program. Our system uses a similar concept of matching constraints over program state to detect violations. Our approach goes one step further and maps the dynamic data structures constraint violations at runtime back to the source code. 
Also, we give a method for incremental matching based on the timing information which makes it feasible for the developers to perform constraint checks more frequently. This leads to early detection of errors. Zaeem et al. [28] use history of program execution (field reads and writes) for data structure repair but do not capture structural information.
DITTO [33] performs incremental structure invariant checks for Java. It incurs additional overhead for storing all the computations from the last check, as these computations are later used for the incremental checking. Our system reuses the time stamp information from the memory graph and stores only the timestamp of the previous check for incremental constraint matching.
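The timestamp-based scheme can be sketched as follows. This is an illustrative simplification, not the system's implementation: `modified` timestamps and the single retained `last_check` value stand in for the memory graph's timing information.

```python
# Sketch of incremental constraint matching: re-examine only nodes written
# since the previous check, rather than re-traversing the whole structure.
# Only one timestamp (`last_check`) is kept between checks.

def incremental_check(nodes, constraint, last_check, now):
    """Check `constraint` only on nodes modified in (last_check, now]."""
    violations = [n["id"] for n in nodes
                  if last_check < n["modified"] <= now
                  and not constraint(n)]
    return violations, now        # `now` becomes the next `last_check`

nodes = [{"id": 1, "modified": 5, "val": 3},
         {"id": 2, "modified": 12, "val": -1},   # written after last check
         {"id": 3, "modified": 4, "val": 7}]
nonneg = lambda n: n["val"] >= 0
viol, last = incremental_check(nodes, nonneg, last_check=10, now=20)
assert viol == [2] and last == 20
```

Node 3 also violates nothing, but more importantly node 1 is never re-examined at all: it was last written before the previous check, so its earlier verdict still stands.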
9. Conclusions
We have presented an approach for specifying and verifying consistency constraints on data structures. The constraints are verified during execution using an appropriately constructed memory graph, as well as a suite of optimized checks. If the specification is violated, a fault location component traces the inconsistency to faulty statements in the source code. Experiments with specification and fault location on several widely-used data structures indicate that the approach is practical, effective and efficient. Most importantly, our constraint specification language is both expressive and easy to use.
Acknowledgments
This work is supported by NSF Grants CCF-1318103, CCF-1524852, and CCF-1149632 to UC Riverside.
References
Combining the functional and the relational model
by
A.T.M. Aerts P.M.E. de Bra K.M. van Hee
90/9
October, 1990
This is a series of notes of the Computing Science Section of the Department of Mathematics and Computing Science Eindhoven University of Technology. Since many of these notes are preliminary versions or may be published elsewhere, they have a limited distribution only and are not for review. Copies of these notes are available from the author or the editor.
Combining the Functional and the Relational Model
A. T. M. Aerts, P. M. E. De Bra and K. M. van Hee
Eindhoven University of Technology
August 31, 1990
Abstract
This paper combines the expressive power of the Functional and the Relational Model. While the Functional Data Model is a powerful and intuitive tool for translating a real-world situation into a data model, the Relational Model has been studied much more extensively, resulting in a thorough knowledge of the properties of constraints, and the expressive power of query and update languages.
Given the existence of a Relational Representation Scheme for every Functional Structure Scheme we can combine a data language for the relational model with that of the Functional Model to obtain a richer data language for the combined Functional and Relational Data Model.
In this paper we present a first order language which operates on the combined Functional and Relational scheme. In this (formal) language one can easily formulate constraints, queries and updates, mixing functional and relational constructs.
1 Introduction
The Relational Database Model [3, 4, 6] provides a solid theoretical background for reasoning about databases, for formulating queries and updates, and for describing constraints. However, modeling a real-world situation using a relational database is non-trivial at best. One cannot easily and intuitively define relations and constraints that form a useful and efficient representation of the real-world situation. The Functional Data Model does provide such an intuitive tool, and has been proven successful in many database-design projects performed by students. A graphical representation of objects, functions, and several types of constraints enables the designer to generate a visualization of the database scheme that can easily be understood by non-experts.
In order to use the Functional Data Model in real applications, an algorithm has been developed to generate a relational database scheme from a functional scheme [2]. This algorithm is not entirely automatic: the user sometimes has to choose whether or not to generate separate relations, mostly when resolving so-called is_a relationships. In any case the algorithm produces a relational database scheme in Boyce-Codd Normal Form.
The approach until now has always been to model the real world using the functional model, then convert the functional scheme to a relational scheme, and then define constraints, queries and updates on the relational scheme. This is not a desirable approach, since the functional scheme already represents some types of constraints, which have to be reformulated using the relational scheme.
In this paper we describe a new (mathematical) data language, which operates on both the functional and the relational representation of a database. This not only enables the designer to select his preferred model for formulating a query, update or constraint, but also to define them using both representations in one and the same expression.
Because of the subject, and to limit the size of this paper, we assume that the reader is aware of the basic definitions of both the functional data model [2] and the relational model [4, 6].
2 Formalism
The structure of a functional data model for an object system — the part of the real world that is of interest to us — is specified by giving a structure scheme:
Definition 2.1 Structure Scheme
A structure scheme is a 4-tuple \( <O, P, C, W> \) where
\( O \): a finite set of names of object types.
\( P \): a triple specifying a finite set of property types; \( P = <F, D, R> \), where
\( F \) is a finite set of names of property types: \( O \cap F = \emptyset \).
\( D \) is a function which maps a property type to the object type which is called the domain type of the property type; so \( D \in F \rightarrow O \).
$R$ is a function which maps a property type to the object type which is called the range type of the property type; so $R \in F \rightarrow O$.
$C = <Q, U, X>$, a triple specifying standard constraints:
$Q$: a function which assigns to every property type a number of attributes:
\[ Q \in F \rightarrow V_{\text{is-a}} \cup \prod(\{<\text{total}, \{\top, \bot\}>, <\text{injective}, \{\top, \bot\}>, <\text{surjective}, \{\top, \bot\}>\})^1 \]
$U$: a function which assigns to an object type $o \in O$ the subsets of $D^{-1}(o)$ of names of property types, that are called the keys of this object type, so $U \in O \rightarrow \mathcal{P}(\mathcal{P}(F))$.
$X$: a function which assigns to an object type $o \in O$ the subsets of $D^{-1}(o)$ of names of property types, that have mutually exclusive domains, so $X \in O \rightarrow \mathcal{P}(\mathcal{P}(F))$.
$W$: a set valued function with $\text{dom}(W) = O$, called the object world function. If for $f \in F$ it holds that $Q(f) \in V_{\text{is-a}}$ then $W(D(f)) \subseteq W(R(f))$
The components $O, P$ and $C$ specify the object types and the property types, that are included in the data model, and a number of general constraints these types have to satisfy. $W$ is a function which associates a set of real world objects with (the name of) an object type. These sets do not have to be disjoint. On the contrary, if two object types are related to one another by an is-a relationship, the set of real world objects corresponding to the subtype is required to be a subset of the set of objects for the supertype.
The $O$ and $P$ components of the structure scheme specify a labeled, directed graph, in which the object types are nodes and the property types correspond to labeled, directed edges. Not every graph specified this way is acceptable as a structure scheme. When we select the subgraph $<N, E>$ based on the property types with an is-a-label, with edges $E = \{ f \in F \mid Q(f) \in V_{\text{is-a}} \}$ and nodes $N = \{ o \in O \mid \exists f \in E : o = D(f) \vee o = R(f) \}$, this subgraph has to be free from cycles. Furthermore, if there is no directed path from an object type $o_1 \in N$ to an object type $o_2 \in N$ then $o_1$ and $o_2$ correspond to disjoint sets of objects: $W(o_1) \cap W(o_2) = \emptyset$.
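The acyclicity requirement on the is-a subgraph amounts to ordinary directed-cycle detection on the edges $(D(f), R(f))$. A small illustrative sketch (our own encoding, not part of the formalism):

```python
# Sketch: checking that the is-a subgraph <N, E> is free from cycles.
# Edges run from subtype D(f) to supertype R(f); a structure scheme with
# a directed cycle among is-a properties is rejected. DFS 3-coloring.

def isa_subgraph_acyclic(edges):
    """edges: list of (subtype, supertype) pairs. True iff no directed cycle."""
    succ = {}
    for d, r in edges:
        succ.setdefault(d, []).append(r)
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}
    def dfs(n):
        color[n] = GREY
        for m in succ.get(n, []):
            c = color.get(m, WHITE)
            if c == GREY or (c == WHITE and not dfs(m)):
                return False          # back edge: directed cycle found
        color[n] = BLACK
        return True
    return all(dfs(n) for n in succ if color.get(n, WHITE) == WHITE)

# The Student-Professor example: both is-a edges point at person, no cycle.
assert isa_subgraph_acyclic([("professor", "person"), ("student", "person")])
assert not isa_subgraph_acyclic([("a", "b"), ("b", "a")])
```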
The is-a properties are required to be total and injective. Therefore we do not repeat them in $U$ like the other injective properties.
**Example 2.1 Student-Professor database**
Figure 1 shows a small database, concerning students and professors:
The components $<O, P, C, W>$ in this example are:
$O = \{\text{address, name, date, person, professor, student, amount}\}$
$P = <F, D, R>$, where
$F = \{\text{lives at, called, born on, p-isa, s-isa, loan, scholarship}\}$
\[ D = \{(\text{lives at}, \text{person}), (\text{called}, \text{person}), (\text{born on}, \text{person}), (\text{p-isa}, \text{professor}), (\text{s-isa}, \text{student}), (\text{loan}, \text{student}), (\text{scholarship}, \text{student})\} \]
---
1 This symbol denotes the Generalized Product: Let $P$ be a set valued function, then
\[ \prod(P) = \{ p \mid p \text{ is a function with domain } \text{dom}(P) \text{ and } \forall x \in \text{dom}(P) : p(x) \in P(x) \} \]
also, $\top = \text{true}, \bot = \text{false}; V_{\text{is-a}}$ is a set of (is-a-)labels, disjoint from $F$ and $O$.
Figure 1: Functional Student-Professor database scheme
\[ R = \{(lives at, address), (called, name), (born on, date), (p-isa, person), (s-isa, person), (loan, amount), (scholarship, amount)\} \]
\[ C = <Q, U, X>, \text{ where} \]
\[ Q = \{(lives at, (< \text{total}, \top>, < \text{injective}, \bot>, < \text{surjective}, \bot>)), (called, (< \text{total}, \top>, < \text{injective}, \bot>, < \text{surjective}, \bot>)), (born on, (< \text{total}, \top>, < \text{injective}, \bot>, < \text{surjective}, \bot>)), (\text{loan}, (< \text{total}, \bot>, < \text{injective}, \bot>, < \text{surjective}, \bot>)), (\text{scholarship}, (< \text{total}, \bot>, < \text{injective}, \bot>, < \text{surjective}, \bot>)), (p-isa, \text{is-a}, \text{professor}), (s-isa, \text{is-a}, \text{student})\} \]
\[ U = \{(\text{person, \{called, born on\}})\} \]
\[ X = \{(\text{student, \{loan, scholarship\}})\} \]
\[ W = \text{the function linking the object types in the database to the corresponding set of objects in the real world. This basically is the "meaning" of the database.} \]
From \( U \) we see (as shown in Figure 1) that the key properties for a person are his (her) name (property called) and date of birth (property born on).
\( X \) tells us that a student cannot have both a loan and a scholarship at the same time. In some sense, having such properties which exclude each other is an alternative to creating subtypes (using is_a properties) in some simple cases. We could have created subtypes student-with-loan and student-with-scholarship, but even that would not necessarily have meant that a student cannot have both a loan and a scholarship, just as a person can be both a professor and a student at the same time.
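The exclusivity constraint in $X$ is simply disjointness of property domains (compare condition 7 of Definition 2.2 below). An illustrative sketch, with properties modeled as dicts from objects to values:

```python
# Sketch of the X (mutual exclusion) constraint from the example: no
# student may be in the domain of both `loan` and `scholarship`.
# Property extensions are dicts; names are illustrative.

def exclusive(props, excl_group):
    """True iff the domains of the properties in excl_group are disjoint."""
    seen = set()
    for f in excl_group:
        dom = set(props[f])
        if dom & seen:
            return False          # some object carries two exclusive props
        seen |= dom
    return True

props = {"loan": {"ann": 300}, "scholarship": {"bob": 500}}
assert exclusive(props, ["loan", "scholarship"])
props["scholarship"]["ann"] = 200          # ann now has both: violation
assert not exclusive(props, ["loan", "scholarship"])
```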
Given a structure scheme, we can describe the states of the object system in terms of a graph called the state graph. The nodes in this graph correspond to real world objects, the edges to the properties that relate pairs of real world objects.
**Definition 2.2 State Graph**
For a structure scheme \(<O, P, C, W>\) a **state graph** is a function \(g\) such that
1. \( \text{dom}(g) = O \cup F. \)
2. \( \forall o \in O: g(o) \subseteq W(o) \) and \( g(o) \) is finite.
3. \( \forall f \in F: g(f) \in g(D(f)) \rightharpoonup g(R(f)) \).\(^2\)
4. \( \forall f \in F: (Q(f).\text{total} = \top \Rightarrow g(f) \text{ is total}) \;\wedge\; (Q(f).\text{injective} = \top \Rightarrow g(f) \text{ is injective}) \;\wedge\; (Q(f).\text{surjective} = \top \Rightarrow g(f) \text{ is surjective}) \;\wedge\; (Q(f) \in V_{\text{is-a}} \Rightarrow g(f) \text{ is total and injective}). \)
5. \( \forall f, h \in F: Q(f) \in V_{\text{is-a}} \wedge Q(h) \in V_{\text{is-a}} \wedge Q(f) = Q(h) \Rightarrow (R(f) = R(h) \wedge \text{rng}(g(f)) \cap \text{rng}(g(h)) = \emptyset). \)
6. \( \forall o \in O: \forall \text{Key} \in U(o): \forall x, y \in \bigcap_{h \in \text{Key}} \text{dom}(g(h)): (\forall f \in \text{Key}: g(f)(x) = g(f)(y)) \Rightarrow x = y. \)
7. \( \forall o \in O: \forall \text{Excl} \in X(o): \forall f_1, f_2 \in \text{Excl}: f_1 \neq f_2 \Rightarrow \text{dom}(g(f_1)) \cap \text{dom}(g(f_2)) = \emptyset. \)
Basically what this definition says is that a state graph (and consequently the database instance, see Definition 2.5) must satisfy the constraints of the structure scheme.
Using a structure scheme we can describe the structure of the object system. In the next step of designing a relational database we need to specify how we will represent the objects in terms of relations and attributes.
**Definition 2.3 Relational Representation Scheme**
A relational representation scheme for a structure scheme \( <O, P, C, W> \) is a 6-tuple \( <E, A, V, I, H, T> \) where:
\( A: \) a finite set of names of attributes
\( V: \) a function with \( \text{dom}(V) = A, \) which maps every attribute name \( a \in A \) to a set of values \( V(a), \) called attribute range, sometimes also called the domain of the attribute.
\( H: \) a function, which maps every object type \( o \in O \) to a set of attribute names \( H(o) \subseteq A, \) so \( H \in O \rightarrow \mathcal{P}(A). \)
\( I: \) a function, which maps every object type \( o \in O \) to a set of attribute names \( I(o), \) which is called the primary key of \( o, \) so \( I \in O \rightarrow \mathcal{P}(A) \) and \( \forall o \in O: I(o) \subseteq H(o). \)
\( E: \) a function, which maps every property type \( f \in F \) to an injective function of attribute names to attribute names, such that:
\(^2\)The symbol \( \rightharpoonup \) is used to denote a partial function.
dom(E) = F and
\forall f \in F : E(f) \in I(R(f)) \rightarrow H(D(f)) \text{ and }
\forall f \in F : \forall a \in I(R(f)) : V(a) = V(E(f)(a)) .
T : a function which maps each object type to a function which maps the identifying (primary key-) part of the representation of a real world object to the object itself, so dom(T) = O, and
\forall o \in O : T(o) \in \prod(V \upharpoonright I(o)) \rightarrow W(o). \quad \square
Contrary to what people usually believe, a (relational) representation scheme is not just an equivalent scheme in another formalism. It does not represent all information present in the structure scheme (in particular, it does not repeat the constraints of the structure scheme), but the structure scheme does not uniquely determine the relational representation scheme either. The (functional) structure scheme is created in an early design phase, whereas the (relational) representation scheme is developed at a later stage, when getting closer to an implementation, and when filling in more details, like adding additional attributes. So the representation scheme contains new information. To clarify this in our example, we present a relational representation scheme for the Student-Professor database:
Example 2.2 Relational Student-Professor database
The components < E, A, V, I, H, T > of a possible relational representation scheme are:
A = {street, state, zip, city, address, name, birthdate, SS#, empno, studno, loan, scholarship}
These are the names of the attributes we will use in our relations.
V = {(street, strings), (state, char[2]), (zip, numbers), (city, strings), (address, numbers),
(name, strings), (birthdate, dates), (SS#, numbers), (empno, numbers),
(studno, numbers), (loan, money), (scholarship, money)}
where strings, char[2], numbers, dates, money denote the sets of all possible strings,
strings of 2 characters, numbers, dates, and amounts of money.
H = {(address, {address, street, state, zip, city}), (person, {SS#, address, name, birthdate}), (professor, {empno, SS#}), (student, {studno, SS#, loan, scholarship})}
H defines the headings of the tables. In principle the definition states that a heading must be produced for every object type. However, when no information is lost (this is for the designer to decide) the headings for tables with only one attribute may be omitted. We have chosen here not to define relations for the object types name, date and amount, as the corresponding information can be found in the person or student relations. Note also that we have added attributes (empno and studno) which are not present in the structure scheme. It is common to omit attributes in a structure scheme to trim down the graphical representation. (An alternative is to cut the scheme into logically meaningful subschemes.)
I = {(address, {address}), (person, {SS#}), (professor, {empno}), (student, {studno})}
I defines the primary keys. We have chosen the internal representation of an address, the social security number of a person, the empno of a professor and the studno of a student.
(Again we have omitted the objects with only a single attribute, though the definition requires them.) Note that the objects may have more than one key. The relational representation scheme does not have an equivalent to $U$, the set of all keys for the objects, as present in the structure scheme.
$$E = \{(\text{lives at}, \{(\text{address, address})\}), (\text{p-isa}, \{(\text{SS#, SS#})\}), (\text{s-isa}, \{(\text{SS#, SS#})\})\}$$
We again omit the renaming functions for the relations with only a single attribute: name, date and amount. In our example the renaming does nothing. But it is possible to use different names for the "same" attribute in two relations. For instance, we could have renamed the SS# for a person to "tax-id" in the professor relation.
T = a complicated function which maps each object type to a function from the set of possible primary-key values for the representation of an object to the set of real-world objects. Basically, this is the equivalent of the object world function, but now for the relational representation. If the "person" object with name "John Doe", born on 1/1/1950, refers to a real person, then the tuple in the person table with name "John Doe" and date of birth 1/1/1950 must refer to the same real person.
Note that the relational representation may involve the use of null values. This may happen when some functions are not total and it is guaranteed to happen when there are properties with mutually exclusive domains. In our example when a student has a loan, the scholarship-attribute must be null (and vice versa). These are so-called non-existence nulls, i.e. the value does not exist. (This in contrast to the more common interpretation of "value exists but is unknown")
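As an illustration of these non-existence nulls (our own simplification, not part of the formal representation scheme), the student relation of the example could be materialized as follows, with Python's `None` playing the role of the null value:

```python
# Sketch: deriving the `student` table from functional properties.
# Exactly one of the mutually exclusive properties applies per student,
# so the other column carries a non-existence null (None). Illustrative.

def student_table(students, loan, scholarship):
    rows = []
    for studno, ss in students:              # (studno, SS#) pairs
        rows.append({"studno": studno, "SS#": ss,
                     "loan": loan.get(studno),            # None if absent
                     "scholarship": scholarship.get(studno)})
    return rows

rows = student_table([(1, 111), (2, 222)],
                     loan={1: 300}, scholarship={2: 500})
assert rows[0]["loan"] == 300 and rows[0]["scholarship"] is None
assert rows[1]["loan"] is None and rows[1]["scholarship"] == 500
```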
**Definition 2.4 Conceptual Model**
A conceptual model is a pair consisting of a structure scheme and a representation scheme.
**Definition 2.5 Database State**
For a conceptual model $< O, P, C, W >, < E, A, V, I, H, T >$ a database state is a function $s$, such that:
1. $\text{dom}(s) = O$.
2. $\forall o \in O : s(o) \subseteq \prod(V \upharpoonright H(o))$.
3. There is a state graph $g$ with $\text{dom}(g) = O \cup F$ and
$\forall o \in O : g(o) = T(o)(s(o) \uparrow I(o))$
$\forall f \in F : g(f) = \{<x, y> \in g(D(f)) \times W(R(f)) \mid \exists z \in s(D(f)) : T(D(f))(z \uparrow I(D(f))) = x \wedge T(R(f))(z \circ E(f)) = y\}$.
Basically, this definition says that a database state is a (function mapping the set of names of object types to a) state graph and a corresponding set of relation instances (i.e. tables). By tying a database state to a state graph the constraints, expressed in the structure diagram, are imposed on the relational instances.
At any one time there is a one to one relationship between the database state $s$ and the corresponding state graph $g$. We call $g$ the state graph of $s$.
**Definition 2.6 State Space**
Given a conceptual model \( \langle O, P, C, W \rangle, \langle E, A, V, I, H, T \rangle \), the state space \( S \) is the set of database states for that model.
We can now define constraints on a conceptual model, and query and update database states.
3 The Data Language
Using the functional and the relational representation of a database we propose a new (formal) language for defining constraints, queries and updates:
**Definition 3.1 First Order Language**
Let \( \langle O, P, C, W \rangle, \langle E, A, V, I, H, T \rangle \) be a conceptual model, with \( ID = \bigcup \{ V(a) \mid a \in A \} \) and \( T' = \bigcup \{ [V \uparrow (u)] \mid u \subseteq A \} \).
Its first order language (FOL) \( L_F \) then consists of the following elements:
i) Alphabet
The alphabet is the union of the following sets of symbols:
- constants: \( ID \cup T' \cup O \cup F \cup A \)
- variables: \( \{ X, Y, Z, X_1, Y_1, Z_1, \ldots \} \)
- function symbols: \( \{ \circ, \cdot, ^{-1}, \text{dom, rng} \} \)
- set symbols: \( \{ \cup, \cap, \setminus, \div, \infty \} \)
- atom comparison symbols: \( \{ \leq, =, \geq \} \)
- set comparison symbols: \( \{ \subseteq, =, \not\subseteq \} \)
- tuple symbols: \( \{ \cup, \circ, \cdot, \text{dom} \} \)
- projection symbols: \( \{ \uparrow, \prod \} \)
- atom-set symbols: \( \{ \in \} \)
- logical symbols: \( \{ \land, \lor, \neg, \Rightarrow, \Leftrightarrow \} \)
- quantors: \( \{ \forall, \exists, \$ \} \)
- punctuation symbols: \( \{ [, ], \{, \}, (, ), ;, :, \mid \} \)
ii) Terms
a-terms (defining a constant or the value of a tuple for an attribute)
- every \( a \in ID \) is an a-term
- if \( t \) is a t-term and \( a \) an at-term then \( t(a) \) is an a-term
at-terms (defining attributes)
- every \( a \in A \) is an at-term
- if \( f \) is an af-term and \( a \) is an at-term then \( f(a) \) is an at-term
t-terms (defining tuple constants and variables)
- every $t \in T'$ is a t-term
- every variable is a t-term
- if $a_1, \ldots, a_n \ (n \in \{1, 2, \ldots\})$ are at-terms and $b_1, \ldots, b_n$ are a-terms then
$\{(a_1; b_1), \ldots, (a_n; b_n)\}$ is an (enumerated) t-term
- if $f$ is an f-term and $t$ is a t-term then $f(t)$ is a t-term
- if $t_1$ and $t_2$ are t-terms then $t_1 \cup t_2$ is a t-term
- if $t$ is a t-term and $a$ an as-term then $t \uparrow a$ is a t-term
- if $f$ is an af-term and $t$ is a t-term then $t \circ f$ is a t-term
s-terms (defining sets)
- every $o \in O$ is an s-term
- if $t_1, \ldots, t_n$ are t-terms then $\{t_1, \ldots, t_n\}$ is an (enumerated) s-term
- if $f$ is an af-term and $s$ is an s-term then $s \circ f$ is an s-term (this means overloading
the $\circ$ operator so it applies to sets)
- if $f$ is an f-term and $t$ a t-term then $f^{-1}(t)$ is an s-term
- if $f$ is an f-term and $s$ is an s-term then $f(s)$ and $f^{-1}(s)$ are s-terms
- if $s_1$ and $s_2$ are s-terms and $\theta$ is a set symbol then $s_1 \theta s_2$ is an s-term
- if $s$ is an s-term and $a$ is an as-term then $\prod_s(a)$ is an s-term
- if $X$ is a variable and $s$ is an s-term and $q$ a predicate then $\{X : s \mid q\}$ is an s-term
- if $X$ is a variable, $s$ is an s-term, $q$ is a predicate and $t$ is a t-term with at most
$X$ as a free variable then $\{X : s \mid q \mid t\}$ is an s-term
- if $f$ is an f-term, then $\text{dom}(f)$ and $\text{rng}(f)$ are s-terms
f-terms (defining functions)
- every $f \in F$ is an f-term
- if $a_1, \ldots, a_n, b_1, \ldots, b_n$ are t-terms then $\{(a_1; b_1), \ldots, (a_n; b_n)\}$ is an
(enumerated) f-term
- if $f$ and $g$ are f-terms then $f \circ g$ is an f-term
- if $f$ is an f-term and $s$ an s-term then $f \uparrow s$ is an f-term
as-terms (defining sets of attributes)
- if $a_1, \ldots, a_n$ are at-terms then $\{a_1, \ldots, a_n\}$ is an (enumerated) as-term
- if $s_1$ and $s_2$ are as-terms and $\theta$ is a set symbol then $s_1 \theta s_2$ is an as-term
- if $f$ is an af-term then $\text{dom}(f)$ and $\text{rng}(f)$ are as-terms
- if $f$ is an af-term and $s$ an as-term then $f(s)$ and $f^{-1}(s)$ are as-terms
- if $t$ is a t-term then $\text{dom}(t)$ is an as-term
af-terms (defining attribute functions)
- if $a_1, \ldots, a_n, b_1, \ldots, b_n$ are at-terms then $\{(a_1; b_1), \ldots, (a_n; b_n)\}$
is an (enumerated) af-term
- if $f$ and $g$ are af-terms then $f \circ g$ is an af-term
- if $f$ is an af-term and $s$ an as-term then $f \uparrow s$ is an af-term
- if $f$ is an af-term then $f^{-1}$ is an af-term
terms
- a-, as-, at-, af-, f-, s- and t-terms are terms
- if t is a term then (t) is also a term
- there are no other terms
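Two of the most frequently used term constructors, projection \( t \uparrow a \) and renaming \( t \circ f \), can be mimicked on tuples-as-dictionaries. This is only a sketch of the intended meaning; the formal interpretation follows in Definition 3.2:

```python
def project(t, attrs):
    # t ↑ a : restrict the tuple t to the attributes in attrs
    return {k: v for k, v in t.items() if k in attrs}

def rename(t, f):
    # t ∘ f : build a tuple over dom(f) with (t ∘ f)(a) = t(f(a)),
    # i.e. f maps new attribute names to the old ones
    return {a: t[old] for a, old in f.items()}

t = {"studno": 1, "name": "John Doe", "loan": 10000}
```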
iii) Predicates
- if $a_1$ and $a_2$ are a-terms and $\theta$ is an atom comparison symbol then $a_1 \theta a_2$ is a predicate
- if $t$ is a t-term and $s$ an s-term then $t \in s$ is a predicate
- if $s_1$ and $s_2$ both are t-, f-, af-, as- or s-terms and $\theta$ a set comparison symbol, then $s_1 \theta s_2$ is a predicate
- if $q_1$ and $q_2$ are predicates and $\theta$ a logical symbol, different from $\neg$ then $(q_1 \theta q_2)$ is a predicate
- if $q$ is a predicate then $\neg q$ is a predicate
- if $X$ is a variable, $q$ a predicate with at most $X$ as a free variable
and $s$ is an s-term then $\forall [X : s | q]$ and $\exists [X : s | q]$ are predicates
- there are no other predicates
In order to use this language we have to define the meaning (or interpretation) of the different terms and predicates. We do this both formally and informally.
**Definition 3.2 Interpretation**
Let $\langle O, P, C, W \rangle, \langle E, A, V, I, H, T \rangle$ be a conceptual model with first order language $L_F$. Let $L_T$ be the set of terms without free variables and $L_C$ the set of closed predicates. Furthermore, let $S$ be the state space for this model. The interpretation function $\mathcal{I}_s$ satisfies$^3$:
- $\mathcal{I} \in S \times (L_T \cup L_C) \rightarrow ID \cup \Pi \cup \mathcal{P}(\Pi) \cup (\Pi \rightarrow \Pi) \cup A \cup \mathcal{P}(A) \cup (A \rightarrow A) \cup \{\top, *, \perp\}$, where $\Pi$ denotes the set of all tuples and $*$ is the truth-value for undefined.
This means that the interpretation of a term without free variables or a closed predicate for a given database state can be an element of the domain of an attribute or a tuple or a set of tuples or a function between tuples or an attribute or a set of attributes or a function between attributes or a (3-valued) truth-value. The interpretation of specific language constructs is described below.
- if $x \in ID \cup T' \cup A$ then $\mathcal{I}_s(x) = x$
The interpretation of a value, tuple or attribute is itself.
- if $t$ is a t-term and $a$ an at-term then $\mathcal{I}_s(t(a)) = \mathcal{I}_s(t)(\mathcal{I}_s(a))$
Given a tuple $t$ and an attribute $a$, the interpretation of $t(a)$ is the interpretation of $t$ applied to the interpretation of $a$.
- if $f$ is an af-term and $a$ an at-term then $\mathcal{I}_s(f(a)) = \mathcal{I}_s(f)(\mathcal{I}_s(a))$
The interpretation of the application of an attribute-function to an attribute is straightforward (and yields an attribute).
$^3$We have underlined the "terminal" symbols. Apart from the grammatical aspect of being terminal, this also means that these symbols take their normal mathematical meaning. Also, we sometimes use $s$ to indicate an s-term. This $s$ is not to be confused with the database state $s$, which occurs as suffix in $\mathcal{I}_s$.
- if \( a_1, \ldots, a_n \) are at-terms and \( b_1, \ldots, b_n \) are a-terms then
\[
\mathcal{I}_s\left(\{(a_1; b_1), \ldots, (a_n; b_n)\}\right) = \{ (\mathcal{I}_s(a_1); \mathcal{I}_s(b_1)), \ldots, (\mathcal{I}_s(a_n); \mathcal{I}_s(b_n)) \}
\]
Given a series of attributes and values, this is how we create a single tuple.
- if \( o \in O \) then \( \mathcal{I}_s(o) = s(o) \)
The interpretation of an object type is its "value" in the database state, i.e. the set of tuples (the table) corresponding to that object type.
- if \( f \in F \) then \( \mathcal{I}_s(f) = \{(x; y) \mid x \in s(D(f)) \wedge y \in s(R(f)) \wedge (T(D(f))(x \uparrow I(D(f))); T(R(f))(y \uparrow I(R(f)))) \in g(f)\} \), where \( g \) is the state graph of \( s \)
The interpretation of a function from the structure scheme (i.e. a function from object types to object types) is a function in the representation scheme (i.e. a function from tuple types to tuple types) such that when an object in the state graph \( g \) for \( s \) is mapped to another object, the tuple in the database state, corresponding to the first object, is mapped to the tuple corresponding to the second object. (see Definition 2.5, item 3)
- if \( f \) is an f-term and \( t \) is a t-term then \( I_s(f(t)) = I_s(f)(I_s(t)) \)
If \( f \) is a function between objects and \( t \) a tuple, then \( f(t) \) is the corresponding function \( I_s(f) \) on tuples, applied to \( t \).
- if \( t_1, t_2 \) are t-terms then \( \mathcal{I}_s(t_1 \cup t_2) = \mathcal{I}_s(t_1) \cup \mathcal{I}_s(t_2) \)
This defines the union of two tuples (viewed as sets of attribute-value pairs).
- if \( t \) is a t-term and \( a \) an as-term then \( I_s(t \uparrow a) = I_s(t) \uparrow I_s(a) \)
This defines the projection of a tuple \( t \) onto the set of attributes \( a \).
- if \( f \) is an af-term and \( t \) is a t-term then \( I_s(t \circ f) = T_s(t) \circ I_s(f) \)
This defines the renaming of attributes in a tuple.
- if \( t_1, \ldots, t_n \) are t-terms then \( \mathcal{I}_s(\{t_1, \ldots, t_n\}) = \{\mathcal{I}_s(t_1), \ldots, \mathcal{I}_s(t_n)\} \)
This defines (enumerated) sets of tuples.
- if \( f \) is an af-term and \( s \) is an s-term then \( I_s(s \circ f) = \{ t \circ I_s(f) \mid t \in I_s(s) \} \)
This generalizes functions on tuples to functions on sets of tuples.
- if \( f \) is an f-term and \( t \) a t-term then \( \mathcal{I}_s(f^{-1}(t)) = \mathcal{I}_s(f)^{-1}(\mathcal{I}_s(t)) \)
If we apply a function inversely to one tuple, we obtain a (possibly empty) set of tuples, i.e. an s-term. Note that this is a non-standard way of using \( -1 \) as we are using non-injective functions, so \( f^{-1} \) is not a function any more.
- if \( f \) is an f-term and \( s \) an s-term then \( I_s(f(s)) = \{I_s(f)(t) \mid t \in I_s(s)\} \)
The interpretation of a function applied to a set of tuples is the set of tuples which are the result of applying the interpretation of \( f \) to each of the tuples of \( s \).
- if \( f \) is an f-term and \( s \) an s-term then \( \mathcal{I}_s(f^{-1}(s)) = \bigcup \{ \mathcal{I}_s(f)^{-1}(t) \mid t \in \mathcal{I}_s(s) \} \)
If we apply a function inversely on a set of tuples (which always contains tuples of only one tuple type), then the image is the set of inverse images of the tuples, i.e. an s-term. This again is a non-standard use of the \( -1 \) operator.
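Since the functions need not be injective, applying \( ^{-1} \) yields sets rather than single tuples. The two inverse-application rules above can be sketched in Python (toy data, with tuples abbreviated to their key values):

```python
def preimage(f, y):
    # I_s(f)^{-1}(t): all x with f(x) = y -- a set, since f need not be injective
    return {x for x, fx in f.items() if fx == y}

def preimage_set(f, ys):
    # I_s(f^{-1}(s)): the union of the preimages of the tuples in s
    return {x for y in ys for x in preimage(f, y)}

# a non-injective toy function
f = {1: "a", 2: "a", 3: "b"}
```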
- if \( s_1 \) and \( s_2 \) are s-terms and \( \theta \) is a set symbol then \( I_s(s_1 \theta s_2) = I_s(s_1) \theta I_s(s_2) \)
This means that the set symbols, applied to sets of tuples, take their normal meaning (in
the relational algebra). Note that this meaning can be undefined (i.e. *) if the sets are not "compatible" for the operation \( \theta \).
- if \( s \) is an s-term and \( a \) is an as-term then \( \mathcal{I}_s(\prod_s(a)) = \{ t \uparrow \mathcal{I}_s(a) \mid t \in \mathcal{I}_s(s) \} \)
This defines the projection of a set of tuples onto a set of attributes.
- if \( X \) is a variable, \( s \) an s-term, \( q \) a predicate with at most \( X \) as a free variable then \( \mathcal{I}_s(\{ X : s \mid q \}) = \{ y \in \mathcal{I}_s(s) \mid \mathcal{I}_s(q_y^X) = \top \} \)
This defines the subset of those tuples of \( s \) that satisfy predicate \( q \).
- if \( f \) is an f-term and \( \theta \in \{ \text{dom}, \text{rng} \} \) then \( \mathcal{I}_s(\theta(f)) = \theta(\mathcal{I}_s(f)) \)
The domain and range of a function are sets of tuples.
- if \( a_1, \ldots, a_n, b_1, \ldots, b_n \) are t-terms then \( \mathcal{I}_s(\{(a_1; b_1), \ldots, (a_n; b_n)\}) = \{(\mathcal{I}_s(a_1); \mathcal{I}_s(b_1)), \ldots, (\mathcal{I}_s(a_n); \mathcal{I}_s(b_n))\} \)
This is an enumerated function between tuples.
- if \( f \) and \( g \) are f-terms then \( \mathcal{I}_s(f \circ g) = \mathcal{I}_s(f) \circ \mathcal{I}_s(g) \)
The \( \circ \) operator keeps its mathematical meaning for composing functions.
- if \( f \) is an f-term and \( s \) an s-term then \( \mathcal{I}_s(f \uparrow s) = \mathcal{I}_s(f) \uparrow \mathcal{I}_s(s) \)
The \( \uparrow \) operator keeps its mathematical meaning for restricting the domain of a function.
- if \( a_1, \ldots, a_n \) are at-terms then \( \mathcal{I}_s(\{a_1, \ldots, a_n\}) = \{\mathcal{I}_s(a_1), \ldots, \mathcal{I}_s(a_n)\} \)
This is how we define an enumerated set of attributes.
- if \( s_1 \) and \( s_2 \) are as-terms and \( \theta \) is a set symbol then \( \mathcal{I}_s(s_1 \theta s_2) = \mathcal{I}_s(s_1) \theta \mathcal{I}_s(s_2) \)
The set symbols keep their mathematical meaning for sets of attributes.
- if \( f \) is an af-term and \( \theta \in \{ \text{dom}, \text{rng} \} \) then \( \mathcal{I}_s(\theta(f)) = \theta(\mathcal{I}_s(f)) \)
The domain and range of an attribute-function are sets of attributes.
- if \( f \) is an af-term and \( s \) an as-term then \( \mathcal{I}_s(f(s)) = \{ \mathcal{I}_s(f)(a) \mid a \in \mathcal{I}_s(s) \} \)
and \( \mathcal{I}_s(f^{-1}(s)) = \{ a \mid \mathcal{I}_s(f)(a) \in \mathcal{I}_s(s) \} \)
This is the straightforward generalization of the renaming of attributes (and its inverse) to sets of attributes.
- if \( t \) is a t-term then \( \mathcal{I}_s(\text{dom}(t)) = \text{dom}(\mathcal{I}_s(t)) \)
The domain of a tuple is a set of attributes.
- if \( a_1, \ldots, a_n, b_1, \ldots, b_n \) are at-terms then \( \mathcal{I}_s(\{(a_1; b_1), \ldots, (a_n; b_n)\}) = \{(\mathcal{I}_s(a_1); \mathcal{I}_s(b_1)), \ldots, (\mathcal{I}_s(a_n); \mathcal{I}_s(b_n))\} \)
This defines an (enumerated) function on attributes.
- if \( f \) and \( g \) are af-terms then \( \mathcal{I}_s(f \circ g) = \mathcal{I}_s(f) \circ \mathcal{I}_s(g) \)
The composition of two renamings is still a renaming.
- if \( f \) is an af-term and \( s \) an as-term then \( \mathcal{I}_s(f \uparrow s) = \mathcal{I}_s(f) \uparrow \mathcal{I}_s(s) \)
The restriction of a renaming to a subset of the attributes is still a renaming.
- if \( f \) is an af-term then \( \mathcal{I}_s(f^{-1}) = \{(x; y) \mid (y; x) \in \mathcal{I}_s(f)\} \).
- if \( t \) is a term then \( \mathcal{I}_s((t)) = \mathcal{I}_s(t) \)
This says that parentheses have no meaning (other than to indicate grouping).
- if \( a_1 \) and \( a_2 \) are a-terms and \( \theta \) is an atom comparison symbol then \( \mathcal{I}_s(a_1 \theta a_2) = \mathcal{I}_s(a_1) \theta \mathcal{I}_s(a_2) \).
Depending on the “type” of the a-terms the operator \( \theta \) may or may not be defined.
- if \( t \) is a t-term and \( s \) an s-term then \( \mathcal{I}_s(t \in s) = \mathcal{I}_s(t) \in \mathcal{I}_s(s) \)
- if \( s_1 \) and \( s_2 \) both are t-, f-, af-, as- or s-terms and \( \theta \) is a set comparison symbol, then \( \mathcal{I}_s(s_1 \theta s_2) = \mathcal{I}_s(s_1) \theta \mathcal{I}_s(s_2) \)
- if \( q_1 \) and \( q_2 \) are predicates and \( \theta \) is a logical symbol, different from \( \neg \) then \( \mathcal{I}_s(q_1 \theta q_2) = \mathcal{I}_s(q_1) \theta \mathcal{I}_s(q_2) \)
- if \( q \) is a predicate then \( \mathcal{I}_s(\neg q) = \neg \mathcal{I}_s(q) \), with \( \neg \top = \bot \), \( \neg \bot = \top \) and \( \neg * = * \)
- if \( X \) is a variable, \( q \) a predicate and \( s \) an s-term then \( \mathcal{I}_s(\forall [X : s \mid q]) = \)
\( \top \) if for all \( y \in \mathcal{I}_s(s) \) we have \( \mathcal{I}_s(q_y^X) = \top \),
\( \bot \) if there exists a \( y \in \mathcal{I}_s(s) \) such that \( \mathcal{I}_s(q_y^X) = \bot \), and \( * \) otherwise;
and \( \mathcal{I}_s(\exists [X : s \mid q]) = \)
\( \top \) if there exists a \( y \in \mathcal{I}_s(s) \) such that \( \mathcal{I}_s(q_y^X) = \top \),
\( \bot \) if for all \( y \in \mathcal{I}_s(s) \) we have \( \mathcal{I}_s(q_y^X) = \bot \), and \( * \) otherwise.
From a first order language, defined according to Definition 3.1, we derive the following three classes of constructs:
**Definition 3.3 Data Language**
Let \( \langle O, P, C, W \rangle, \langle E, A, V, I, H, T \rangle \) be a conceptual model and let \( L_F \) be a first order language, where the alphabet is extended with the symbols \(?\), \( \uparrow \), \( \downarrow \) and \( ; \). The data language \( L_D \) then contains \( L_F \) and the following elements:
i) **constraints**
Every predicate in \( L_F \), without free variables is a constraint. \( L_C \) is the sublanguage of \( L_F \), containing only constraints.
ii) **queries**
Let \( X \) be a variable, let \( s \) be an s-term and let \( q \) be a predicate with at most \( X \) as free variable, then \( ?[X : s \mid q] \) is a query.
Let \( t \) be a t-term with at most \( X \) as free variable, then \( ?[X : s \mid q \mid t] \) is also a query.
\( ?[X : s \mid q] \) is just a shorthand for \( ?[X : s \mid q \mid X] \). \( L_Q \) is the sublanguage of \( L_D \), containing only queries.
Intuitively the construct \( ?[X : s \mid q \mid t] \) is analogous to the SQL construct `select t from s where q`.
The construct \( ?[X : s \mid q] \) is simply `select * from s where q`.
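The analogy with `select t from s where q` suggests a direct reading of \( ?[X : s \mid q \mid t] \) as a comprehension. A sketch with hypothetical data:

```python
def query(s, q, t=lambda X: X):
    # ?[X : s | q | t] -- select t from s where q;
    # the default t makes this ?[X : s | q], i.e. select *
    return [t(X) for X in s if q(X)]

students = [{"name": "John Doe", "loan": 12000},
            {"name": "Jane Roe", "loan": 5000}]
```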
iii) updates
Let \( o \in O, f \in F, s_1, s_2 \) be \( s \)-terms, \( f_1, f_2 \) be \( f \)-terms without free variables, such that \( s_1 \) and \( f_1 \) both are enumerated, then: \( o \uparrow s_1, o \downarrow s_2, f \uparrow f_1, f \downarrow f_2 \) are updates.
\( \uparrow \) indicates insertion, \( \downarrow \) indicates deletion. Since the insertion adds information, not yet present in the database, the added information must be enumerated.
If \( u_1 \) and \( u_2 \) are updates, then \( u_1; u_2 \) is also an update. \( L_U \) is the sublanguage of \( L_D \), containing only update-expressions of the form above. □
**Example 3.1** Recall the Student-Professor database given in Examples 2.1 and 2.2. Consider the following constraint: “A student who is also a professor cannot get a scholarship.” In our data language we can write this as:
\[
\forall[X: student \mid s-\text{isa}(X) \in \text{rng}(p-\text{isa}) \Rightarrow \neg(X \in \text{dom(scholarship)})]
\]
This constraint is described using only elements of the structure scheme. Now consider the following constraint: “If a person is both a professor and a student then his empno must be the same as his studno.”
\[
\forall[S: student \mid \forall[P: professor \mid s-\text{isa}(S) = p-\text{isa}(P) \Rightarrow P(\text{empno}) = S(\text{studno})]]
\]
Finally, consider the following query: “List the names of the professors, who are also a student and who (as a student) have a loan of at least 10.000, together with the amount of their loan.”
\[
?[X : \text{dom}(\text{loan}) \mid s\text{-isa}(X) \in \text{rng}(p\text{-isa}) \wedge X(\text{loan}) \geq 10.000 \mid X \uparrow \{\text{name}, \text{loan}\}]
\]
□
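The first constraint of Example 3.1 can be checked on concrete data as follows. This is only a sketch: the encoding of the isa-functions and the scholarship function as Python dictionaries is our own assumption, not part of the model:

```python
# s_isa / p_isa map student resp. professor keys to "person" identities;
# scholarship maps student keys to amounts.
def constraint_ok(students, s_isa, p_isa, scholarship):
    # a student who is also a professor cannot get a scholarship
    professors_as_persons = set(p_isa.values())
    return all(not (s_isa[x] in professors_as_persons and x in scholarship)
               for x in students)

students = {1, 2}
s_isa = {1: "p1", 2: "p2"}   # student 1 is the same person as professor 7
p_isa = {7: "p1"}
```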
**Definition 3.4** *Restricted State Space*
Let \( \langle O, P, C, W \rangle, \langle E, A, V, I, H, T \rangle \) be a conceptual model with first order language \( L_F \) and interpretation function \( \mathcal{I} \) as in Definition 3.2, and let \( SoC \subseteq L_C \) be a set of constraints. The *restricted state space* \( S_R \) then is:
\[
S_R = \{ s \in S \mid \forall [q : SoC \mid \mathcal{I}_s(q) = \top] \}
\]
□
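Definition 3.4 amounts to filtering the state space by the constraint set, which can be sketched as follows (the states, constraints and interpretation function below are placeholders):

```python
def restricted_state_space(S, SoC, interp):
    # S_R = { s in S | every constraint q in SoC evaluates to ⊤ in s }
    return [s for s in S if all(interp(s, q) == "⊤" for q in SoC)]
```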
4 Future Research
We have enriched the data language for functional data models using the relational representation which exists for every functional structure scheme.
We want to investigate the expressive power of the new data language more extensively. A comparison to more purely functional languages such as Daplex [5] and to relational languages such as the relational algebra, tuple calculus or SQL [4] will be carried out. Also, a comparison with the logic-based language COL should be performed, as one can also express queries on both functional and relational schemes in COL [1].
Apart from a theoretical comparison of the expressive power of these languages it is also important to verify whether using relational representations will enable us to find short and easy formulations of queries, updates or constraints that are very difficult to describe using only the functional or only the relational model.
References
[39254, 42474, null], [42474, 45980, null], [45980, 48697, null], [48697, 51171, null], [51171, 54268, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2709, true], [2709, 6307, null], [6307, 10782, null], [10782, 14789, null], [14789, 18440, null], [18440, 21759, null], [21759, 25147, null], [25147, 28543, null], [28543, 32176, null], [32176, 35505, null], [35505, 39254, null], [39254, 42474, null], [42474, 45980, null], [45980, 48697, null], [48697, 51171, null], [51171, 54268, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 54268, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 54268, null]], "pdf_page_numbers": [[0, 2709, 1], [2709, 6307, 2], [6307, 10782, 3], [10782, 14789, 4], [14789, 18440, 5], [18440, 21759, 6], [21759, 25147, 7], [25147, 28543, 8], [28543, 32176, 9], [32176, 35505, 10], [35505, 39254, 11], [39254, 42474, 12], [42474, 45980, 13], [45980, 48697, 14], [48697, 51171, 15], [51171, 54268, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 54268, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-06
|
2024-12-06
|
97809a80b8e40c1eeb8e305bb9c4cff5bbfb558d
|
Legate Sparse: Distributed Sparse Computing in Python
Rohan Yadav
rohany@cs.stanford.edu
Stanford University
USA
Wonchan Lee
wonchanl@nvidia.com
NVIDIA
USA
Melih Elibol
melibol@nvidia.com
NVIDIA
USA
Taylor Lee Patti
tpatti@nvidia.com
NVIDIA
USA
Manolis Papadakis
mpapadakis@nvidia.com
NVIDIA
USA
Michael Garland
mgarland@nvidia.com
NVIDIA
USA
Alex Aiken
aiken@cs.stanford.edu
Stanford University
USA
Fredrik Kjolstad
kjolstad@cs.stanford.edu
Stanford University
USA
Michael Bauer
mbauer@nvidia.com
NVIDIA
USA
ABSTRACT
The sparse module of the popular SciPy Python library is widely used across applications in scientific computing, data analysis and machine learning. The standard implementation of SciPy is restricted to a single CPU and cannot take advantage of modern distributed and accelerated computing resources. We introduce Legate Sparse, a system that transparently distributes and accelerates unmodified sparse matrix-based SciPy programs across clusters of CPUs and GPUs, and composes with cuNumeric, a distributed NumPy library. Legate Sparse uses a combination of static and dynamic techniques to efficiently compose independently written sparse and dense array programming libraries, providing a unified Python interface for distributed sparse and dense array computations. We show that Legate Sparse is competitive with single-GPU libraries like CuPy and achieves 65% of the performance of PETSc on up to 1280 CPU cores and 192 GPUs of the Summit supercomputer, while offering the productivity benefits of idiomatic SciPy and NumPy.
1 INTRODUCTION
Python is a widely used language for data science, machine learning, and scientific computing due to its ease of use and large ecosystem of numerical libraries. This ecosystem includes NumPy [13] for dense array-based computations and SciPy’s [35] Sparse module for sparse matrix-based computations, both of which serve as foundations for numerous applications and frameworks. Despite their widespread use, the canonical implementations of NumPy and SciPy target a single CPU node, with only select operations supporting multiple threads. As data set sizes and application computational demands continue to increase, there is a need to target resources more powerful than what a single CPU-only node can provide. Recent work has made great strides in this area for dense array programming systems [5, 22, 24, 29], but the automatic distribution and acceleration of SciPy-based sparse matrix programs has not yet been achieved.
SciPy or CuPy [22] (a single-GPU implementation of NumPy and SciPy) can be paired with a communication library like MPI or NCCL, or a task-based library like Dask Distributed [29] or Ray [21] to enable distributed execution. However, this composition requires the user to manually partition and communicate data, resulting in non-trivial code modification and necessitating distributed programming expertise. The industry-standard sparse linear algebra systems PETSc [3, 20] and Trilinos [34] expose Python wrappers around their low-level C/C++ APIs. While these APIs provide many high-level sparse matrix operations, they require programmers to reason about data distribution and data movement, a level of expertise many programmers do not have.
Our goal in this work is to develop a system that scales unmodified SciPy Sparse programs across distributed machines with good performance, and efficiently composes with cuNumeric [5], a distributed NumPy library. This system would provide the familiar dense and sparse array programming interfaces to allow users with and without expertise in distributed programming to rapidly prototype distributed applications and scale these applications to the size of machine needed to process their datasets. In this paper, we explore the large design space encompassed by these constraints, and demonstrate one design point that achieves our goals.
Achieving our goal of building a distributed and heterogeneous sparse array programming library that attains both high performance and composability with an external dense array programming library requires solving a global problem of performance composability at multiple layers of the software stack. First, unlike when developing a monolithic distributed library, operations launched by each separate library should compose with operations launched by other libraries at the distributed layer. This means that both libraries must share distributed data representations efficiently and perform only necessary synchronization and communication. Second, each library’s operations must agree on the processor varieties targeted in a heterogeneous system. For example, if even one operation issued by a user’s program does not have a GPU implementation, the data movement caused by falling back to a CPU implementation can significantly impact performance. Third, library operations must agree on the types of their data structures, especially when those data structures are sparse. When operations are not implemented for the program’s requested sparse data structures, expensive format conversions to supported data structures can dominate program execution time and increase memory usage.
To compose at these three layers of the software stack, the implementation of each library needs to be flexible at each layer. At the distributed layer, each library must be flexible in the partitioning schemes used for individual operations, so as to compose with how operations launched by the other library may partition data. At the layers of processor varieties and data structure types, the sparse library must support a myriad of variants for each such combination, which also need to be specialized to the chosen partitioning schemes.
We implement the flexibility and generality required of a distributed sparse array programming library that composes with cuNumeric through a careful separation of decisions made statically (during the implementation of Legate Sparse) and dynamically (during the execution of Legate Sparse programs). The combination of static and dynamic decisions is key to a successful implementation of Legate Sparse: we believe that a fully static approach is likely to sacrifice composability or implementation maintainability, while a fully dynamic approach is likely to sacrifice performance. Some decisions must be made statically to specialize kernels to processor kinds and sparse data structures, while other decisions must be postponed until runtime, where the specific interactions between different libraries are known. We divide the space of static and dynamic decisions in the implementation of Legate Sparse with the following key ideas:
- **Composable Distribution.** To compose distributed operations across libraries, we combine a constraint-based description of data distributions with a first-class representation of data partitions. This combination allows for the compact encoding of potential distribution strategies for each operation statically. Then, we defer the decision of what concrete data partitions to use for each operation until runtime. We dynamically select partitions that satisfy the distribution constraints and align with existing data distributions, allowing (to the extent possible) for data to be operated on where it exists in the machine.
- **Compiler-Aided Kernel Generation.** To implement the large number of variants required to run each operation with each sparse data structure on each heterogeneous processor kind, we leverage the DISTAL [36, 37] sparse tensor algebra compiler. We use DISTAL to generate, ahead of time, distributed kernels specialized to each data format, processor variety and partitioning scheme, enabling Legate Sparse to dynamically dispatch across a wide set of statically-generated specialized kernels.
- **Dynamic Dependence and Communication Analysis.** Dynamic dependence and communication analyses enable independent libraries to launch work while ensuring precise synchronization and communication between the libraries. We implement Legate Sparse through a translation to the programming model of the Legion [6] runtime system, and leverage it to overlap computation and perform precise, data-dependent communication across library boundaries.
We have developed a prototype implementation of Legate Sparse that composes with cuNumeric to distribute and accelerate unmodified Python programs that use NumPy and SciPy, such as the eigenvalue estimation computation in Figure 1. Legate Sparse achieves the transparent distribution of SciPy and NumPy programs in a maintainable way: the implementations of Legate Sparse and cuNumeric are each unaware of the other library’s implementation, and a large number of the kernels in Legate Sparse were automatically generated. Our prototype implements 35% of the SciPy Sparse API, which is sufficient to express complex computations from scientific computing and machine learning.
We evaluate the performance of Legate Sparse on the Summit supercomputer with a combination of NumPy and SciPy based workloads with varying complexity, ranging from 10 to 1000 lines of code. Our benchmark suite contains iterative linear solvers (conjugate gradient, geometric multi-grid), Runge-Kutta integration, and sparse matrix factorization. Our experiments show that, for a drop-in NumPy and SciPy replacement, Legate Sparse achieves good scalability to 1280 CPUs and 192 GPUs, achieving 65% of the performance of PETSc. We also show that Legate Sparse delivers comparable performance with CuPy on a single GPU and outperforms SciPy on a single CPU socket, while effectively scaling to larger numbers of processors.
```python
# Use Legate's libraries if available,
# and fall back to NumPy and SciPy if not present.
try:
    import cunumeric as np
    import legate.sparse as sp
except ImportError:
    import numpy as np
    import scipy.sparse as sp

# Generate a random sparse matrix.
A = sp.random(n, n, format='csr')
# Make a positive semi-definite matrix from A.
A = 0.5 * (A + A.T) + n * sp.eye(n)
# Estimate the maximum eigenvalue via the Rayleigh quotient.
x = np.random.rand(A.shape[0])
for _ in range(10):
    x = A @ x
    x /= np.linalg.norm(x)
result = np.dot(x.T, A @ x)
```
Figure 1: Legate Sparse and cuNumeric program that runs on a GPU cluster and falls back to SciPy and NumPy.
2 BACKGROUND
In this section, we provide background on the SciPy Sparse module, and discuss components of the library relevant to this work. We then provide background on the Legion [6] runtime system, which both Legate Sparse and cuNumeric are built upon. Finally, we discuss how cuNumeric maps onto Legion’s abstractions.
2.1 SciPy Sparse
SciPy Sparse [2, 35] is a sub-module of the SciPy Python library that provides a high-level API for linear algebra operations over different types of sparse matrices. SciPy Sparse supports several common sparse matrix formats, including the CSR (compressed sparse rows), CSC (compressed sparse columns), DIA (diagonal) and COO (coordinate) formats, and supports format conversions and data reorganization operations between these formats. On these sparse matrices, SciPy Sparse supports a variety of basic mathematical operations, such as matrix-vector products, matrix-matrix products and diagonal computations, as well as higher-level linear algebra operations like iterative solves and eigenvalue computations. SciPy Sparse is directly composable with NumPy, as many operations within the API natively accept and return NumPy arrays. The standard implementation of SciPy implements the API with a combination of calling out to C operations and utilizing existing NumPy routines. Finally, the SciPy Sparse API has no notions of execution or distribution strategies, meaning that all parallelism performed by Legate Sparse must be implicit.
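The formats, conversions, and NumPy interoperability described above can be seen in a few lines of standard SciPy (a minimal example; any small matrix works):

```python
import numpy as np
import scipy.sparse as sp

# Build a small matrix in COO format from coordinate triplets.
rows = np.array([0, 1, 2, 2])
cols = np.array([1, 2, 0, 2])
vals = np.array([10.0, 20.0, 30.0, 40.0])
A = sp.coo_matrix((vals, (rows, cols)), shape=(3, 3))

# Format conversions reorganize the underlying index arrays.
A_csr = A.tocsr()  # compressed sparse rows
A_csc = A.tocsc()  # compressed sparse columns

# SciPy Sparse composes with NumPy: SpMV returns a plain ndarray.
x = np.ones(3)
y = A_csr @ x
```

Converting to CSR reorganizes the COO triplets into the `indptr`/`indices`/`data` arrays discussed in Section 3.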
2.2 Legion
Legion [6] is a data-centric, task-based runtime system organized around region data structures that are partitioned into sub-regions and computed on by user-defined tasks. Legion performs dynamic analysis to extract parallelism from sequential user programs, represented as streams of tasks, by identifying tasks within the stream that operate on independent regions, and automatically inserting the necessary communication and synchronization to preserve the sequential semantics of the input program.
Data Model. All long-lived and distributed data is described through regions, which are multi-dimensional arrays. Regions are the underlying data structures that back both cuNumeric’s distributed arrays and Legate Sparse’s sparse matrices. Parallelism in Legion is expressed through the partitioning of regions into sub-regions. A partition \( P \) of a region \( R \) is a first-class object that represents the mapping from a set of colors to subsets of the indices of \( R \). Partitions need not be disjoint, nor do they need to cover the whole index space: the sets of indices in \( P \) can overlap or alias, and their union does not need to cover all indices in \( R \).
Legion supports dependent partitioning [33] operations for creating partitions from existing partitions. The most important dependent partitioning operation to this work is image, which operates on a source region that contains indices pointing into a destination region, and projects a partition of the source region onto the destination. Intuitively, given a partition of the source region, the image operation colors all indices in the destination region with the same color as the partition of each index in the source region. More precisely, consider a source region \( S \) and a destination region \( D \), where elements in \( S \) are sets of indices in \( D \). Given a partition \( P \) of \( S \), the image of \( S \) to \( D \) is a partition \( P' \) of \( D \) such that \( \forall c \in P, \forall i \in P[c], S[i] \subseteq P'[c] \).
The image operation is demonstrated in Figure 2, where Figure 2a shows an image from a source region that contains ranges of indices, and Figure 2b shows a source region that names individual indices. Note that the partition of \( D \) created by the image in Figure 2b is aliased, because the 1st and 3rd elements of \( D \) are included in two sub-regions each. Image is a powerful operator that allows us to express co-partitioning of the indexing arrays that are often used to represent sparse matrices, and to capture the data-dependent communication patterns that arise in sparse computations.
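To make the definition of image concrete, here is a toy Python model (not Legion’s API): a partition maps colors to sets of source indices, and each source element is a set of destination indices.

```python
def image(source, partition):
    """Toy model of Legion's image operation.

    source:    list where source[i] is the set of destination indices
               referenced by element i of the source region
    partition: dict mapping each color c to P[c], a set of source indices
    Returns P', mapping each color c to the union of source[i] for i in P[c].
    """
    return {
        color: set().union(*(source[i] for i in indices)) if indices else set()
        for color, indices in partition.items()
    }

# A source region in the spirit of Figure 2b: elements name destination indices.
src = [{0}, {1, 3}, {1}, {2, 3}]
P = {"red": {0, 1}, "blue": {2, 3}}
Pprime = image(src, P)
# Destination indices 1 and 3 land in both sub-regions, so P' is aliased.
```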
Tasks. Tasks are the atomic unit of computation in Legion. Tasks are arbitrary, user-defined computations that operate on regions, and declare how they will use each region (read, write or reduce). Legion extracts dependencies between tasks, and inserts communication operations for the regions on which a task operates.
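The dependence extraction sketched above can be modeled in a few lines (a toy model, not Legion’s implementation): a task conflicts with an earlier task if they share a region and at least one of the two accesses is not a read.

```python
def dependencies(stream):
    """Toy dependence extraction over a sequential stream of tasks.

    stream: list of (task_name, {region: 'read' | 'write' | 'reduce'}).
    A task depends on its nearest earlier task that uses a shared region
    where at least one of the two accesses is not a read.
    """
    edges = []
    for j, (tj, uses_j) in enumerate(stream):
        for i in range(j - 1, -1, -1):
            ti, uses_i = stream[i]
            shared = uses_i.keys() & uses_j.keys()
            if any(uses_i[r] != 'read' or uses_j[r] != 'read' for r in shared):
                edges.append((ti, tj))
                break  # nearest conflicting predecessor is enough for this toy
    return edges

# SpMV writes y; norm reads y and writes n; scale writes y and reads n.
stream = [
    ("spmv",  {"A": "read", "x": "read", "y": "write"}),
    ("norm",  {"y": "read", "n": "write"}),
    ("scale", {"y": "write", "n": "read"}),
]
edges = dependencies(stream)
```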
Mapping. Legion programs do not directly encode machine-specific decisions such as on what processors tasks should run and in what memories regions should be allocated. Instead, a separate mapper makes these decisions for the application at runtime, allowing the application to remain unchanged when porting to a new machine or during certain kinds of performance tuning.
2.3 cuNumeric
cuNumeric [5] is a distributed and accelerated drop-in replacement for NumPy. Like Legate Sparse, cuNumeric is implemented via a dynamic translation from the NumPy API to Legion. cuNumeric represents NumPy arrays as Legion’s regions, partitions the regions for parallel processing, and launches tasks corresponding to NumPy operations. The original version of cuNumeric as described by Bauer et al. [5] selects partitions of regions for individual operations by maintaining a key partition for each region that tracks the latest written partition of the region. NumPy operations between multiple arrays choose partitions of arrays that keep the key partition of the largest region involved in an operation in place. Additionally, the original version of cuNumeric employs a specialized mapper that encodes heuristics for mapping NumPy programs, such as choosing when to distribute tasks, selecting the tile size to partition data into, and the mapping of tasks and regions onto processor and memories. To enable composability with Legate Sparse, we modify the partitioning strategies within cuNumeric to use the constraint-based system discussed in Section 4.1, and introduce composition-aware mapping algorithms to cuNumeric’s mapper, as discussed in Section 4.2.
Figure 2: Visualization of the image partitioning operation.
3 SPARSE DATA REPRESENTATION

The standard single-node representations of common sparse matrix formats store metadata about the indices of non-zero matrix entries and their values in packs of arrays. For example, the COO (coordinate) format stores three arrays, where the first two arrays store the row coordinate and column coordinate of each non-zero entry of the matrix, and the last array stores the values. The CSR (compressed sparse rows) format further compresses the COO format by implicitly representing the rows that contain non-zero entries: it maintains an array (often called \( \text{pos} \) or \( \text{indptr} \)) where the column coordinates and values for row \( i \) are stored within range \( [\text{pos}[i], \text{pos}[i+1]) \) of an array called \( \text{crd} \). The CSC format is similar to CSR, but compresses the columns instead of the rows.

We use Legion’s regions to extend these single-node representations into distributed sparse matrix data structures by mapping each of the arrays used to represent sparse matrices to regions. For instance, the row, column and value arrays in the COO format are represented directly as regions in Legate Sparse. Formats such as CSR and CSC are represented in a similar manner, but store the range of coordinates for a row or column \( i \) in a tuple at \( \text{pos}[i] \), as depicted in Figure 3. This small variation from the standard representation allows us to directly employ Legion’s image partitioning operation to relate partitions of the \( \text{pos} \) and \( \text{crd} \) regions with one another. We also use images to relate partitions of the \( \text{crd} \) region with referenced indices in dense vectors and matrices. For example, consider a distributed SpMV (\( y = A \cdot x \)), where \( A \) is stored as CSR. Performing an SpMV requires accessing the locations in \( x \) corresponding to the non-zero coordinates stored in \( A \)’s \( \text{crd} \) region. We use an image from the partition of \( A \)’s \( \text{crd} \) region to compute the referenced locations of \( x \). An example of this operation is discussed in Figure 5 in Section 4.3. Images allow us to co-partition the regions used to define sparse data structures, and to implement MPI-like scatter/gather operations in a high-level manner.

Our decision to represent sparse matrices as a set of regions instead of a collection of local sparse matrices per rank (as used by PETSc and Trilinos) has both benefits and downsides. Using a set of regions aligns more closely with the Legion programming model that we target, and careful choices of partitioning enable the description of non-trivial communication patterns. Additionally, this choice allows for interoperation with cuNumeric: since sparse matrices are constructed from regions, users can directly construct sparse matrices out of cuNumeric arrays, or extract and operate on the arrays that back a sparse matrix. A downside of this decision is that the partitioned pieces of the global sparse matrix passed to individual tasks are not valid sparse matrices from the perspective of external libraries like cuSPARSE. As a result, we pay a small performance penalty when reshaping these local pieces into the formats accepted by these libraries. Our evaluation (Section 6) shows that our sparse matrix representation has low overhead while allowing for direct use of Legion’s API and close alignment with SciPy Sparse’s programming model.
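The relationship between the standard `indptr` encoding and the tuple-based `pos` region of Figure 3 can be illustrated in plain Python (illustrative only; SciPy is used just to produce a reference CSR matrix):

```python
import numpy as np
import scipy.sparse as sp

# A reference CSR matrix built with SciPy.
A = sp.csr_matrix(np.array([[0, 1, 0],
                            [2, 0, 3],
                            [0, 0, 4]], dtype=float))
indptr, crd, vals = A.indptr, A.indices, A.data

# Standard CSR: row i's columns and values live in crd[indptr[i]:indptr[i+1]].
# Legate Sparse instead stores a (begin, end) tuple per row in a pos region,
# so an image of a pos partition directly yields partitions of crd and vals.
pos = [(int(indptr[i]), int(indptr[i + 1])) for i in range(A.shape[0])]

# Reading a row through either encoding yields the same nonzero entries.
row1 = [(int(crd[j]), float(vals[j])) for j in range(*pos[1])]
```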
4 COMPOSABLE PARALLELIZATION

Our goal with Legate Sparse was to distribute and accelerate SciPy Sparse workloads while efficiently composing with cuNumeric. In this section, we describe the techniques employed to enable the performant composition of Legate Sparse with cuNumeric at the distributed layer of each library. We describe how Legate Sparse and cuNumeric partition the previously discussed sparse and dense data structures, and how each library launches parallel operations over the partitioned data using Legion. We show how abstractions built on top of Legion’s partitions enable concise and composable parallel implementations of distributed operations, and describe how to map these operations onto physical hardware in a composable manner.
4.1 Constraint-Based Parallelization
Legate Sparse and cuNumeric distribute SciPy and NumPy programs by translating each operation into a set of task launches over partitioned regions. The selection of which partitions to use for each task launch has a significant impact on performance. For example, suppose two tasks \( t_1 \) and \( t_2 \) operate sequentially on a dense matrix \( M \), where \( t_1 \) selects a row-wise partition of \( M \) and \( t_2 \) selects a column-wise partition. Then a distributed transpose must be performed after \( t_1 \) completes to put the data in the layout required by \( t_2 \). To be performance-composable across operations, we need to re-use existing partitions whenever possible. However, to keep the implementation maintainable, we do not want every operation to have to explicitly consider all possible partitions of every input.
We resolve this tension by leveraging recent work in constraint-based automatic parallelization, introduced by Lee et al. [17]. We add a layer of indirection to task definitions and launches where, instead of describing the exact partitions that tasks should operate on, tasks describe what regions they will operate on and constraints on how those regions should be partitioned. Constraints can be simple, such as declaring that two regions must have aligned partitions (for an element-wise operation), or complex constraints that invoke dependent partitioning operations (such as relating the \( \text{pos} \) and \( \text{crd} \) regions in a CSR matrix by an image). We use a constraint solver inspired by Lee et al. [17] to select concrete partitions of each region that satisfy all of the declared constraints. The constraints are designed such that there is always at least one solution; if more than one solution is possible, the solver picks the solution that re-partitions the least amount of data. We refer to Lee et al. for a formal discussion of the constraint language and solving process.
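As a toy illustration of what image-derived partitions provide (plain Python, no Legion; the split into two row colors is an arbitrary choice), consider a row-split SpMV where each color derives its `crd`/`vals` range from `pos` and gathers only the `x` entries it references:

```python
import numpy as np

# CSR pieces of a 4x4 sparse matrix (standard indptr encoding).
indptr = np.array([0, 2, 3, 5, 6])
crd    = np.array([0, 2, 1, 0, 3, 2])
vals   = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x      = np.array([1.0, 1.0, 1.0, 1.0])

# A row-wise partition of y (aligned with the partition of pos/indptr).
row_colors = [range(0, 2), range(2, 4)]

y = np.zeros(4)
for rows in row_colors:
    # image(pos -> crd/vals): the contiguous nonzero range this color owns.
    lo, hi = indptr[rows.start], indptr[rows.stop]
    # image(crd -> x): exactly the x entries this color must gather.
    gathered = {int(j): x[j] for j in set(crd[lo:hi])}
    for i in rows:
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += vals[k] * gathered[int(crd[k])]
```

The `gathered` dictionary models the data-dependent communication: each color touches only the destination indices in the image of its `crd` piece.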
We now discuss an example of the task launching process with constraints using the row-based distributed SpMV example in Figure 4. Upon execution of the task launching code in Figure 4, the task object contains the following partitioning constraints: \( \text{equals}(y, \text{pos}) \), \( \text{image}(\text{pos}, \text{crd}) \), \( \text{image}(\text{pos}, \text{vals}) \), and \( \text{image}(\text{crd}, x) \). Intuitively, these constraints mean the following: 1) the same partition must be selected for \( y \) and \( \text{pos} \), 2) the partitions of \( \text{crd} \) and \( \text{vals} \) must be the result of an image from the
Figure 3: Legate Sparse’s CSR sparse matrix encoding.
which processor each task should run on, and in which memory to store on their local node before making new allocations.
Because interactions between libraries are unknown until execution, allocations of distributed and partitioned data between libraries are not made in a coordinated manner between Legate Sparse and cuNumeric. Performance degradation can occur due to unnecessary data movement. While the library implementations of Legate Sparse and cuNumeric are independent, we introduce a point of coupling at the runtime layer between the libraries by sharing mapper infrastructure and mapping policies between the two.
Legate Sparse and cuNumeric use the same strategy for mapping tasks to processors to ensure a consistent assignment. A consistent processor mapping strategy ensures that data does not thrash between operations launched by different libraries. For example, if Legate Sparse and cuNumeric launch an element-wise operation in series, using the same processor mapping strategy ensures no data movement occurs between the operations.
The more difficult aspect of composing mapping decisions across libraries is the mapping of regions onto memories in the machine. The key challenge involved in mapping regions is how to share allocations of distributed and partitioned data between libraries. Because interactions between libraries are unknown until execution, the partitions of regions created by libraries and the aliasing of those partitions are also not known until program execution. To minimize data movement and memory usage, the mapping strategy used by Legate Sparse and cuNumeric must reuse and resize region allocations across individual operations and library boundaries. We facilitate the reuse of region allocations by having the mappers for Legate Sparse and cuNumeric record all region allocations made by Legate Sparse and cuNumeric. Figure 5 shows how the sparse matrix-vector multiplication (SpMV) launched by Legate Sparse and the norm and division operations launched by cuNumeric share partitions and physical resources. The top half of the figure shows
```python
1 def spmv(self, A, x):
2 # Compute y = A @ x and return y.
3 y = cunumeric.zeros(A.shape[0])
4 task = ctx.create_task(ROW_SPLIT_SPMV)
5 # Add all regions to the task.
6 task.add_output(y)
7 task.add_input(A.pos, A.crd, A.vals, x)
8 # Describe partitioning constraints.
9 task.add_alignment_constraint(y, A.pos)
10 task.add_image_constraint(A.pos, A.crd, A.vals)
11 task.add_image_constraint(A.crd, A.vals)
12 task.execute()
13 return y
```
Figure 4: Python implementation of a row-based distributed CSR SpMV (adapted from DISTAL generated code).
selected partition of pos, and 3) the partition of x must be the result of an image from the selected partition of crd. The constraint solver realizes that the choices of partitions for y and pos are independent, while the partitions for crd, vals and x are dependent on choices for partitions for other regions. Then, the solver examines the existing partitions for y and pos and selects the existing partitions if they are aligned. Otherwise, it selects an existing partition that keeps the sparse matrix in place. Once these initial partitions have been selected, the solver uses Legion’s image operation to construct partitions of crd, vals and x to satisfy the remaining constraints.
We developed Legate Sparse using the constraint system, and adapted the implementation of cuNumeric to use the same system. The constraint-based design is key to achieving performance composability at the distributed layer of our system for two reasons:
- **Partition reuse.** The constraint formulation enables reusing partitions across individual operations and libraries. Operations defined by Legate Sparse can consume partitions created by cuNumeric and vice-versa, avoiding unnecessary data movement when passing data between Legate Sparse and cuNumeric.
- **Localization of operation definitions.** Because each task only describes what partitions are possible to use, each task is defined independently. Existing operation implementations do not need to consider partitioning strategies defined in the future, and new operation implementations do not need to consider all possible existing partitioning strategies. The most important outcome of this design is that the cuNumeric and Legate Sparse implementations are completely unaware of the other. The lack of coupling streamlines development and is promising for the development of future libraries using the same strategy.
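For contrast with the SpMV launch in Figure 4, the sketch below shows how an element-wise operation might be launched under the same scheme (illustrative pseudocode in the style of Figure 4; the task ID and `ctx` calls are assumptions, not the actual cuNumeric code). The operation states only an alignment constraint and never mentions where its inputs' partitions came from:

```python
def elementwise_div(self, a, b):
    # Compute a / b element-wise; the launch logic is oblivious to
    # whether a and b were last partitioned by Legate Sparse or cuNumeric.
    out = cunumeric.zeros(a.shape)
    task = ctx.create_task(ELEMWISE_DIV)  # hypothetical task ID
    task.add_output(out)
    task.add_input(a, b)
    # The only requirement: all three regions share one tiling.
    task.add_alignment_constraint(out, a)
    task.add_alignment_constraint(a, b)
    task.execute()
    return out
```

If `a` already carries a tiling chosen by an earlier SpMV launch, the solver simply reuses it; otherwise it creates a fresh one.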
### 4.2 Composable Mapping
By representing sparse and dense arrays with regions and launching tasks using constraint-based parallelism, Legate Sparse and cuNumeric issue a stream of tasks to Legion. To execute this stream of tasks, Legate Sparse and cuNumeric must instruct Legion which processor each task should run on, and in which memory to allocate each region of each task. Legate Sparse and cuNumeric communicate these decisions through separate mapper objects, which Legion queries before executing tasks. Proper mapping decisions are key to achieving high performance: if mapping decisions are not made in a coordinated manner between Legate Sparse and cuNumeric, performance degradation can occur due to unnecessary data movement. While the library implementations of Legate Sparse and cuNumeric are independent, we introduce a point of coupling at the runtime layer by sharing mapper infrastructure and mapping policies between the two libraries.
Legate Sparse and cuNumeric use the same strategy for mapping tasks to processors to ensure a consistent assignment. A consistent processor mapping strategy ensures that data does not thrash between operations launched by different libraries. For example, if Legate Sparse and cuNumeric launch an element-wise operation in series, using the same processor mapping strategy ensures no data movement occurs between the operations.
The more difficult aspect of composing mapping decisions across libraries is the mapping of regions onto memories in the machine. The key challenge involved in mapping regions is how to share allocations of distributed and partitioned data between libraries. Because interactions between libraries are unknown until execution, the partitions of regions created by libraries and the aliasing of those partitions are also not known until program execution. To minimize data movement and memory usage, the mapping strategy used by Legate Sparse and cuNumeric must reuse and resize region allocations across individual operations and library boundaries. We facilitate the reuse of region allocations by having the mappers for Legate Sparse and cuNumeric record all region allocations made in a shared store on each node, and having the mappers query the store on their local node before making new allocations.
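A minimal sketch of such a shared per-node store (illustrative names and keys; the real mappers track Legion physical instances rather than strings):

```python
class AllocationStore:
    """Per-node record of region allocations, queried by the mappers of
    both libraries before any new allocation is made (illustrative)."""

    def __init__(self):
        self._allocs = {}   # (region, bounds) -> allocation id
        self.n_allocated = 0

    def get_or_allocate(self, region, bounds):
        key = (region, bounds)
        if key not in self._allocs:        # only allocate on first touch
            self.n_allocated += 1
            self._allocs[key] = f"RA{self.n_allocated}"
        return self._allocs[key]

store = AllocationStore()
a1 = store.get_or_allocate("x1", (0, 500))   # Legate Sparse's SpMV output
a2 = store.get_or_allocate("x1", (0, 500))   # cuNumeric's division input
```

Because both mappers consult the same store, the second library reuses the first library's allocation instead of copying.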
However, reusing allocations that exactly match the extents of a region is not sufficient to achieve good performance, as tasks from different libraries may use multiple views of the same underlying region. As an example, consider a stencil computation, where a task reads from multiple tiles offset around a center tile. An efficient mapping for this computation would coalesce all offset tiles with the center tile into a single, larger allocation, reducing the total amount of memory and increasing cache locality.
To efficiently map multiple sub-regions into a single allocation, our shared mapping strategy employs a coalescing step before performing allocations. When selecting an allocation for a region, mappers examine the existing allocations for other sub-regions of the same parent region. If another sub-region has an intersection with the region being allocated, then the mapper has an option of merging the two views into a new, larger allocation with enough space for both regions. Tasks using the larger allocation then operate on slices of the allocation corresponding to the desired sub-region. We use a heuristic to drive coalescing decisions, where sub-regions are coalesced if the size of their overlapping components is sufficiently larger than their non-overlapping components. The coalescing step is key to reducing overall memory usage and eliminating redundant data movement. A concrete example is discussed in Section 4.3.
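The coalescing decision can be sketched for 1-D sub-regions as follows (the threshold and exact scoring are assumptions; the heuristic is described above only qualitatively):

```python
def should_coalesce(a, b, threshold=1.0):
    """Heuristic: merge two 1-D sub-regions (lo, hi) of the same parent
    when their overlap is large relative to their non-overlapping parts.
    The threshold is an assumed knob, not a production value."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    overlap = max(0, hi - lo)
    non_overlap = (a[1] - a[0]) + (b[1] - b[0]) - 2 * overlap
    return overlap >= threshold * non_overlap

def coalesce(a, b):
    # The merged allocation covers both views; tasks then use slices of it.
    return (min(a[0], b[0]), max(a[1], b[1]))

# A stencil's center tile and a slightly offset halo view:
center, halo = (100, 200), (99, 201)
merged = coalesce(center, halo) if should_coalesce(center, halo) else None
```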
### 4.3 Execution Example
Figure 5 depicts the execution of the program displayed in Figure 1 with Legate Sparse and cuNumeric. It shows how the sparse matrix-vector multiplication (SpMV) launched by Legate Sparse and the norm and division operations launched by cuNumeric share partitions and physical resources.
Figure 5: Execution of the program in Figure 1, with control flowing between Legate Sparse and cuNumeric. The left part of the figure contains an excerpt of the program and an example matrix A and its data layout. The right part of the figure is the execution; the top half depicts partitioning and launching of tasks in Legate, while the bottom half shows the Legion-level execution on the physical machine. In the right part of the figure, each region entry is labeled with the coordinate of the entry.
The top half of the figure shows the execution of the Python Legate task-launching logic, and the bottom half shows the physical execution with Legion on a 2-GPU system. An efficient implementation performs only one-element halo exchanges of the x vector, and no other copies. We show how Legate Sparse and cuNumeric interoperate to achieve this strategy.
Throughout the figure, we refer to the versions of the vector x at each iteration i of the main loop with $x_i$. For example, the initial vector x is denoted $x_0$, and the resulting x after the first $x = A \odot x$ operation is denoted as $x_1$. Next, all region entries in the figure are labeled with the coordinate of that entry within the region.
We first discuss the top half of the figure, which shows how Legate Sparse maps arrays onto regions and partitions these regions for distributed execution. The matrix A is organized in CSR as three separate regions, pos, crd, and vals, as discussed in Section 3. The original vector $x_0$ is represented by a one-dimensional region. When the program launches the first SpMV, Legate Sparse creates a new region for the output vector $x_1$. Solving the constraints for SpMV described in Figure 4, Legate Sparse selects an aligned tiling of $x_1$ and pos. To satisfy the image constraints, Legate Sparse invokes Legion’s image operation to create partitions of crd, vals, and $x_0$ from the tiling. We use blue and red colors to show the resulting partitions of each region. Note how the image from crd into $x_0$ creates an aliased (colored blue and red) partition. Legate Sparse launches SpMV tasks over the partitions, dispatching the tasks to Legion. Next, control flows to cuNumeric for the norm and division operations, which we treat as a single operation for illustration purposes. These are element-wise operations without partitioning constraints, so cuNumeric selects the tiling of $x_1$ created by Legate Sparse. After cuNumeric launches the norm and division tasks, the loop repeats, and all partitions are reused by future iterations.
We now shift to the bottom half of the figure, which depicts the execution with Legion, and the mapping of logical operations onto physical resources. For all tasks launched, the Legate Sparse and cuNumeric mappers assign tasks and regions to each GPU and the corresponding framebuffer memory. The key to peak performance in this program is the mapper’s choice of allocations for each region.
In the first iteration, the Legate Sparse and cuNumeric mappers make region allocations that correspond to the bounds of each region. The choices made in the second iteration of the program stress the importance of the compositional awareness of the Legate Sparse and cuNumeric mapping strategies. When mapping the second SpMV operation, the mapper chooses new allocations (RA5 and RA6) for each piece of $x_1$, resizing the allocations RA1 and RA3 to account for the larger slice of $x_1$ required by each SpMV task. Resizing RA1 and RA3 requires a full copy of $x_1$, and a single element halo-copy between GPUs. Next, Legate Sparse sees that $x_0$ has gone out of scope, and chooses to reuse the allocations RA2 and RA4 by coalescing them into the requested sub-regions for $x_2$.
## 5 IMPLEMENTATION

We used the DISTAL compiler to generate implementations for the SciPy Sparse API. Our prototype supports the COO, CSR, CSC and DIA sparse matrix formats, and of the estimated 492 functions in SciPy Sparse, our prototype implements 176 (35%): 14 were implemented by using the DISTAL compiler, 156 were ported from existing SciPy or CuPy implementations, and 6 had to be handwritten. In this section, we discuss these three cases, as well as the portions of the API that we have not yet implemented.
### 5.1 Generating Kernels with DISTAL
We used the DISTAL [36, 37] compiler to generate implementations for components of the SciPy Sparse API that perform tensor algebraic computations. These functions are performance critical (such as SpMV or SpMM), and require custom code tailored to the specific operation, sparse matrix formats and target hardware. This custom code is tedious and difficult to write; despite DISTAL being used to generate implementations of only 14 functions in the SciPy Sparse API, the generated code accounts for 46% of the total C++ and CUDA in Legate Sparse (2854/6135 LOC) and 12% of the total Python in Legate Sparse (697/5748 LOC). By generating this performance sensitive code, we enhance the maintainability of Legate Sparse, and allow developer time to be spent elsewhere optimizing the library. We give an overview of DISTAL, and how it was used to generate code for Legate Sparse.
DISTAL compiles a tensor algebra domain specific language (DSL) into C++ code targeting the Legion runtime. DISTAL allows for the separate specification of 1) desired tensor computation, 2) sparse data format of each operand, 3) the distributed algorithm to use, and 4) the data distribution of the operands. This flexibility allows for the high level description of many kernels of interest within SciPy. The constraint solver discussed in Section 4.1 considers the existing data distributions of regions, so we only use the first 3 input languages of DISTAL. DISTAL generates code directly targeting the Legion API, so we perform slight manual modifications to the generated code to target our higher-level abstractions; these changes could be automated, but we have not found the manual work to be burdensome for developing our prototype.
DISTAL code to generate a distributed and multi-threaded CPU SpMV is found in Figure 6, the generated C++ task body is found in Figure 7, and the constraint-based task launching code in Figure 4 is the result of adapting DISTAL-generated C++ task launching code. The DISTAL C++ code declares some runtime parameters, initializes the tensor operands, describes the desired computation, and then schedules the computation for the target machine. The algorithm specified by the scheduling language distributes the rows of the matrix across all processors, and then parallelizes execution across the rows between CPU threads. To achieve peak performance on GPUs, we hand-modified the DISTAL-generated CUDA code to make calls into cuSPARSE when applicable. In our experience, this aspect was the most error-prone step in developing the sparse linear algebra kernel implementations and could be made easier in the future with better compiler support for external library interaction, such as in the Mosaic system [4].
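The shape of the row-split kernel that DISTAL generates can be illustrated in plain Python (an illustrative re-rendering of the loop structure; the real output is C++ with the outer row loop parallelized across CPU threads):

```python
def spmv_task(pos, crd, vals, x, y, row_lo, row_hi):
    # One task's body: the task owns the contiguous row slice
    # [row_lo, row_hi); DISTAL parallelizes this outer loop.
    for i in range(row_lo, row_hi):
        acc = 0.0
        for p in range(pos[i], pos[i + 1]):
            acc += vals[p] * x[crd[p]]
        y[i] = acc

# A 4x4 CSR matrix and x = all ones; two "tasks", one per row tile.
pos = [0, 2, 3, 5, 6]
crd = [0, 3, 1, 0, 2, 3]
vals = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x = [1.0, 1.0, 1.0, 1.0]
y = [0.0] * 4
spmv_task(pos, crd, vals, x, y, 0, 2)
spmv_task(pos, crd, vals, x, y, 2, 4)
```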
### 5.2 Porting SciPy and CuPy Implementations
The largest subset (156/176 functions) of our implementation of SciPy Sparse was done by porting existing implementations of the API in SciPy and CuPy. While developing Legate Sparse, we found that many functions in SciPy Sparse were implemented using parallel NumPy operations and previously defined SciPy Sparse kernels. By focusing our system design on composability with cuNumeric, we were able to bootstrap our library with itself and cuNumeric to obtain distributed and accelerated implementations of these functions without any distributed programming.
The classes of functions that we were able to directly port varied in complexity. The simplest of these functions were non-zero preserving, element-wise, unary operations on sparse matrices that are implemented by using the corresponding NumPy operation on the array storing the values of the sparse matrix. Some more complicated ported functions include computing sums across different axes of sparse matrices, and format conversions between sparse matrix formats. The most complex operations that we directly ported to Legate Sparse were higher-level operations such as solves and integrations. We ported several iterative linear solvers (CG, CGS, BiCG, BiCGSTAB, GMRES), Runge-Kutta integration and eigensolvers from SciPy and CuPy implementations to distributed implementations using Legate Sparse and cuNumeric.
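To illustrate why such ports require no distributed programming, a CG solve can be written entirely in terms of SpMV and element-wise vector operations; the sketch below uses plain Python lists standing in for cuNumeric arrays and the Legate Sparse SpMV:

```python
def spmv(pos, crd, vals, x):
    # Stand-in for the Legate Sparse CSR SpMV.
    return [sum(vals[p] * x[crd[p]] for p in range(pos[i], pos[i + 1]))
            for i in range(len(pos) - 1)]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def axpy(alpha, x, y):
    # alpha * x + y, the element-wise workhorse cuNumeric provides.
    return [alpha * u + v for u, v in zip(x, y)]

def cg(pos, crd, vals, b, iters=25, tol=1e-20):
    x = [0.0] * len(b)
    r = list(b)        # r = b - A @ x with x = 0
    p = list(b)
    rs = dot(r, r)
    for _ in range(iters):
        if rs < tol:
            break
        Ap = spmv(pos, crd, vals, p)
        alpha = rs / dot(p, Ap)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, Ap, r)
        rs_new = dot(r, r)
        p = axpy(rs_new / rs, p, r)   # p = r + beta * p
        rs = rs_new
    return x

# SPD matrix [[4, 1], [1, 3]] in CSR; exact solution is [1/11, 7/11].
pos, crd, vals = [0, 2, 4], [0, 1, 0, 1], [4.0, 1.0, 1.0, 3.0]
sol = cg(pos, crd, vals, [1.0, 2.0])
```

In the ported code, each helper is replaced by the corresponding distributed operation, and the solver itself is unchanged.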
### 5.3 Hand-Written Implementations
The final group of functions in Legate Sparse were those that required completely hand-written implementations. These functions include sorts and auxiliary operations that are implemented within SciPy with calls to C/C++ or Python loops that directly index into NumPy arrays. For these operations, we developed distributed and accelerated implementations using the constraint-based parallelism framework discussed in Section 4.1 paired with C/C++ and CUDA code for tasks adapted from the SciPy implementations.
### 5.4 Unimplemented Components
Having covered how we implemented components of SciPy Sparse, we now discuss the remaining portions of the API and the path forward to implementing them. Out of the 316 remaining functions in SciPy Sparse, 116 are defined on sequential matrix formats (list-of-lists and dictionary-of-keys) used for matrix assembly in shared memory, which we do not plan to support. 72 of the remaining 200 functions are defined on the BSR (block sparse rows) sparse matrix format, which we plan to support and for which we can use DISTAL to generate kernels. This leaves 128 functions in SciPy Sparse defined on sparse matrix formats that we support in Legate Sparse (CSR, CSC, DIA, COO). Of these functions, we believe there is a path forward to a nearly complete implementation: 8 are possible to generate with DISTAL, 44 are possible to port from SciPy, 60 require a combination of porting and hand-writing, and 14 are specific to SciPy's implementation. The functions that require hand-writing cover different components of the API, including sparse matrix reshaping operators, operators that slice and update pieces of sparse matrices, and functions that call to external libraries like SuperLU.
## 6 EVALUATION
**Experimental Setup.** We evaluated the performance of Legate Sparse on the Summit supercomputer. Each Summit node has a 40-core, dual-socket IBM Power9. Each socket has three NVIDIA Volta V100s connected by NVLink 2.0, for a total of six GPUs per node. Nodes are connected by an Infiniband EDR interconnect. We compile all code using GCC 9.3.0 and CUDA 11.0.2. Legion was configured with GASNet 2022.9.0 for inter-node communication.
**Overview.** We evaluate the performance of Legate Sparse by testing it on a set of SciPy programs from the scientific computing and machine learning domains. The benchmarks range from twenty-five to nearly a thousand lines of code, and from microbenchmarks to full applications, displaying the complexity of the programs Legate Sparse is able to execute. All of the benchmarks use cuNumeric, and stress the interaction with Legate Sparse.
We measure Legate Sparse’s performance running in both CPU-only and GPU-only settings, allowing us to compare against systems that only support CPUs or GPUs. On a single node, we compare against the standard implementations of SciPy and NumPy for CPUs, and CuPy for GPUs. CuPy provides a drop-in replacement for the SciPy and NumPy APIs, but can only utilize a single GPU. On multiple nodes, we compare (when a hand-tuned baseline exists) against the industry-standard PETSc sparse linear algebra library, which supports both CPUs and GPUs. PETSc provides a C API with high-level linear algebra operations similar to SciPy, but requires users to both specify low-level details about partitioning and distribution and hand-write many distributed NumPy-like array computations. For all experiments, we collect 12 runs of data points, drop the fastest and slowest runs, and then average the results of the remaining 10 runs.
### 6.1 Weak Scaling Experiments
In this section, we evaluate the weak-scaling performance of Legate Sparse, emulating a usage where users increase the size of their machine to scale to larger data sets. For all benchmarks but the quantum simulation, we compare the performance between one socket of CPUs and the three GPUs connected to that socket. However, we start the weak-scaling at one GPU to compare performance with CuPy. We plot throughput on a log-log plot due to the order-of-magnitude difference in performance between various systems.
**SpMV Microbenchmark.** Our first experiment is a microbenchmark for the scaling of the SpMV operation on banded sparse matrices. This benchmark is trivially parallel with no communication, and Figure 8 shows that both Legate Sparse and PETSc achieve perfect weak scaling. Most SciPy operations are single-threaded and cannot benefit from additional cores or the memory bandwidth of additional sockets, resulting in no weak scaling. As discussed in Section 3, our choice of a global sparse matrix representation in Legate Sparse incurs some overhead from reshaping operations to the local partitions of the sparse matrices before passing the resulting local matrices to cuSPARSE, resulting in the slight performance differences between Legate Sparse and both CuPy and PETSc.
**CG Solver.** We implemented a conjugate-gradient iterative linear solver for a 2-D Poisson problem, with the results displayed in Figure 9. As with the previous experiment, we compare both modes of Legate Sparse to the same code run in SciPy and CuPy, and a comparable implementation in PETSc. As seen in the SpMV microbenchmark, Legate Sparse's CPU mode outperforms SciPy due to being multi-threaded. Legate Sparse and PETSc achieve nearly perfect weak scaling on CPUs, with PETSc slightly outperforming Legate Sparse, as Legion reserves some CPU resources for runtime work. On GPUs, CuPy, Legate Sparse and PETSc have similar performance on a single GPU, with Legate Sparse achieving 85% of the performance of PETSc. PETSc and Legate Sparse then weak-scale from a single GPU, where PETSc achieves nearly perfect weak scaling, starting to fall off slightly at 192 GPUs. Legate Sparse also scales well, but experiences some performance drop-off at 32 nodes due to the fast GPU kernels exposing overheads in Legion's all-reduce implementation (the Legion developers are aware of this issue, and plan to address it in the future), causing the dot-product communication in the CG solve to affect Legate Sparse's performance at a smaller processor count than PETSc. At 192 GPUs, Legate Sparse achieves 65% of PETSc's performance.

**Multi-grid Solver.** We implement a two-level geometric multi-grid conjugate gradient solver, which uses the injection restriction operator and a weighted Jacobi smoother (this benchmark was inspired by, but is not directly comparable to, HPCG [14]). Multi-grid methods are known to be relatively challenging to implement correctly and efficiently on distributed machines; our implementation is 300 lines of Python. We do not have a distributed reference implementation, so we compare Legate Sparse's CPU mode to SciPy, Legate Sparse's GPU mode to CuPy, and then weak-scale to larger machines. Figure 10 contains the weak-scaling results for the geometric multi-grid solver. As with prior experiments, Legate Sparse's CPU version significantly outperforms SciPy and has good weak-scaling to 64 sockets. On a single GPU, CuPy is 30% faster than Legate Sparse's GPU version. This performance difference is caused by overheads in the Legate library due to its Python implementation. During the V-cycle of the multi-grid method, the application launches several tasks small enough to expose overheads in Legion's task launching and metadata management. Legate Sparse's GPU version starts off weak-scaling well, but has kernels that run fast enough to expose overheads in Legion that could be fixed in the future with tracing [18] and task fusion [32]. Similar performance on a preconditioned CG solver was seen by Bauer et al. [5]. Despite the imperfect weak scaling, Legate Sparse is able to execute the Python multi-grid solver on accelerated hardware much faster and on larger problem sizes than SciPy.

**Quantum Simulation.** We develop a Legate Sparse quantum simulation of Rydberg atom arrays. The simulation can be used to solve Maximum Independent Set (MIS) problems, as pioneered by the group of Mikhail D. Lukin and QuEra Computing [10]. Like previous implementations [1], we significantly reduce the memory footprint of the simulation by including only states that are allowed by the Rydberg blockade mechanism [19]. Likewise, the interactions between states are rather sparse, as they only permit transition between states in adjacent excitation manifolds and otherwise identical excitation structure. Competing quantum dynamics, namely the energy terms stemming from laser detuning of the system, are inherently sparse due to their diagonal action. Nevertheless, the exponential growth of the quantum state space is only partially stymied by exploiting inherent problem structure, so the simulation remains memory hungry. This application was developed in Python without any expectation that it would be eventually executed in a distributed fashion; the algorithms used in the simulation could be tuned to achieve more scalable performance. We aimed to maximize the scale of the Python application as-is, and were able to achieve the exact simulation of the full wave function on larger problem sizes than SciPy.

The core computational component of this benchmark is an 8th-order Runge-Kutta integration. Similarly to the GMG benchmark, we compare against SciPy and CuPy. Due to the nature of the application, we were unable to exert fine-grained control over the input size: we could only approximately double the problem size. Therefore, we utilize 4 of the 6 GPUs on each Summit node for this benchmark to directly compare weak-scaling performance between CPUs and GPUs. We stress that Legate Sparse can successfully utilize all 6 GPUs per node for standard runs of the simulation.

The weak-scaling results are found in Figure 11. As with prior experiments, Legate Sparse significantly outperforms the standard implementation of SciPy. On a single GPU, CuPy achieves a 40% speedup over Legate Sparse, for a similar reason as the GMG benchmark: several small tasks launched in the integration expose overheads in Legate. The simulation experiences a loss in weak-scaling efficiency as the number of processors increases. This fall-off is expected due to the communication pattern of the application: the sparse matrices that describe the atomic relationships have a very high bandwidth (the coordinates in a row reference a wide range of columns). Our profiling shows that the algorithms used by the application require every processor to exchange tens to hundreds of megabytes of data with at least half of the other processors in the system, almost an all-to-all communication pattern.

At 1 to 4 GPUs, Legate Sparse's GPU version significantly outperforms the CPU version, due to utilizing the higher-bandwidth NVLink. Once inter-node communication over Infiniband is required after 4 GPUs, the GPU version has similar performance as the CPU version, even dropping below the CPU performance at 16 GPUs. This drop is due to the ratio of communication to effective bandwidth available between each experiment: at 16 GPUs, Legate Sparse's GPU version is utilizing 4 nodes of network hardware, while Legate Sparse's 16-socket CPU version is using 8 nodes to exchange the same amount of data, thus having double the network bandwidth available to communicate through. Finally, the large halo regions present in the application result in imperfect weak scaling of the memory usage per processor, causing Legate Sparse's 64 GPU version to run out of memory.
### 6.2 Sparse Machine Learning
To evaluate the potential of Legate Sparse as a high-level programming model for sparse machine learning applications, we implement the sparse matrix factorization algorithm with bias [15]. We optimize our model with mini-batch SGD [28], and use a closed-source sparse autograd procedure to generate Python source code for the gradient, which we hand-optimized to remove redundant computations and to exploit sparsity patterns. We compare against CuPy, and measure training throughput in terms of samples per second on the 10 million (10m), 25 million (25m), 50 million (50m) and 100 million (100m) MovieLens datasets [12]. The 50m and 100m datasets were derived from the 20m dataset using randomized fractional expansions [7]. The training loop loads the input dataset into host memory, shuffles the training data before each epoch, and constructs batches of sparse matrices from samples of the training data to update the model parameters. Our implementation falls within 99.7% of SOTA prediction performance for the 10m dataset [26, 27]. The results for these experiments are found in Figure 12.
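For concreteness, one SGD update of the biased factorization model (prediction $\mu + b_u + b_i + p_u \cdot q_i$) can be sketched in plain Python (hyperparameters and the single-rating training loop are illustrative; the real implementation is mini-batched, sparse, and distributed):

```python
def predict(P, Q, bu, bi, mu, u, i):
    # mu + user bias + item bias + dot product of latent factors.
    return mu + bu[u] + bi[i] + sum(p * q for p, q in zip(P[u], Q[i]))

def sgd_step(P, Q, bu, bi, mu, u, i, rating, lr=0.01, reg=0.05):
    # One gradient step on a single observed (user, item, rating) triple.
    err = rating - predict(P, Q, bu, bi, mu, u, i)
    bu[u] += lr * (err - reg * bu[u])
    bi[i] += lr * (err - reg * bi[i])
    P[u], Q[i] = (
        [p + lr * (err * q - reg * p) for p, q in zip(P[u], Q[i])],
        [q + lr * (err * p - reg * q) for p, q in zip(P[u], Q[i])],
    )

# One user, one item, one observed rating of 5.0 around a global mean of 3.0:
P, Q, bu, bi, mu = [[0.1, 0.1]], [[0.1, 0.1]], [0.0], [0.0], 3.0
before = abs(5.0 - predict(P, Q, bu, bi, mu, 0, 0))
for _ in range(200):
    sgd_step(P, Q, bu, bi, mu, 0, 0, 5.0)
after = abs(5.0 - predict(P, Q, bu, bi, mu, 0, 0))
```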
A key optimization in our implementation is the use of the SDDMM (sampled dense-dense matrix multiplication) operation to avoid materializing dense matrices in expressions of the form $A \odot (B \cdot C)$, where $A$ is sparse and $B, C$ are dense. We generated a high-performance distributed SDDMM implementation using DISTAL, and exposed cuSPARSE’s SDDMM kernel for CuPy to use, since CuPy did not support SDDMM out-of-the-box.
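An SDDMM evaluates the dense product only at the sparse matrix's non-zero coordinates; a minimal CSR sketch in plain Python (illustrative, not the DISTAL-generated kernel):

```python
def sddmm(pos, crd, a_vals, B, C):
    """out[p] = a_vals[p] * (B @ C)[i, j] for each non-zero (i, j) of the
    CSR matrix a; the dense product B @ C is never materialized."""
    out = [0.0] * len(a_vals)
    k = len(B[0])
    for i in range(len(pos) - 1):
        for p in range(pos[i], pos[i + 1]):
            j = crd[p]
            out[p] = a_vals[p] * sum(B[i][t] * C[t][j] for t in range(k))
    return out

# 2x2 sparse a with non-zeros at (0, 1) and (1, 0):
pos, crd, a_vals = [0, 1, 2], [1, 0], [2.0, 3.0]
B = [[1.0, 2.0], [3.0, 4.0]]
C = [[5.0, 6.0], [7.0, 8.0]]
res = sddmm(pos, crd, a_vals, B, C)
```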
We ran each dataset with CuPy on a single GPU, and found that it could only fit the 10m and 25m datasets without running out of memory. In contrast, Legate Sparse can scale to the larger datasets without code modifications by simply adding more GPUs, handling the 50m and 100m datasets with 6 GPUs and 12 GPUs respectively. CuPy achieves a 2.8x speedup over Legate Sparse on the 10m dataset. Similar to prior experiments, this performance difference arises from overheads in Legate exposed by small tasks launched by the application. Next, CuPy processes the 25m dataset on a single GPU, but achieves nearly half the throughput of Legate Sparse. CuPy runs close to the GPU memory limit on the 25m dataset; Legate Sparse is unable to do the same due to GPU memory reserved for Legion and external CUDA libraries. We saw that cuSPARSE's SDDMM kernel was inefficient compared to DISTAL's kernel: the SDDMM began to dominate CuPy's execution time on the 25m dataset, while remaining a small percentage of total execution time for Legate Sparse, leading to the speedup on 2 GPUs. Finally, while Legate Sparse can execute the 50m and 100m datasets, it experiences some performance degradation at larger scales. This is due to all-to-all communication patterns inherent in the factorization algorithm, which performs several dense matrix transpose operations in the gradient computation. The effect is more noticeable on the 100m dataset: 12 GPUs is two nodes of Summit, so many communications go through the lower-bandwidth Infiniband instead of NVLink.
## 7 RELATED WORK
**Distributed Sparse Linear Algebra and Tensor Algebra Libraries.** Distributed sparse linear algebra has received tremendous attention from the community. The industry-standard sparse linear algebra packages PETSc [3, 20] and Trilinos [34] are long-lasting results of this research. These systems contain a wide variety of sparse linear algebra operations, many of which have been ported to GPUs. However, these systems offer a lower-level API than SciPy (and Legate Sparse), and exist within an explicitly-parallel, message-passing based environment, requiring some expertise in parallel and distributed programming. Additionally, it is not always straightforward to integrate these large systems with external libraries.
The Cyclops Tensor Framework (CTF) [30, 31] is an explicitly-parallel library for distributed dense and sparse tensor algebra that provides a tensor summation notation-based API, similar to DISTAL. CTF has a flexible API for tensor computations, but lacks composability with a NumPy-like dense array programming library.

**Accelerated and Distributed NumPy.** Replacing the NumPy and SciPy APIs is a common approach to accelerating these programs. Several systems accelerate and distribute the NumPy API, which we discuss below, but we are not aware of any existing system that successfully distributes the SciPy sparse matrix APIs.
We first discuss systems that target a single node. CuPy accelerates both NumPy and SciPy code on a single GPU by offloading NumPy and SciPy calls to corresponding kernels on the GPU. CuPy can execute in a multi-GPU environment, but requires users to manage data movement and synchronization between the GPUs. Grumpy [25] and Bohrium [16] are systems that lazily evaluate NumPy programs and then generate optimized code for CPUs and a GPU. Weld [23] is a composability-focused system (like Legate Sparse) that provides a drop-in replacement for NumPy and Pandas programs targeting CPUs and a GPU. Similarly to Grumpy and Bohrium, Weld lazily evaluates the input program and performs cross library optimizations like fusion to increase efficiency.
Dask [29] is a popular library for distributed computing in Python with a high-level array library similar to NumPy. NumS [11] is a distributed NumPy replacement built on top of the Ray [21] task-based runtime system targeting clusters of CPUs. Jax [9] is a drop-in replacement for NumPy with support for vectorization, automatic differentiation and fusion. Jax can target distributed machines, but has restrictions on the kinds of partitioning and distribution it can perform. cuNumeric [5] (formerly known as Legate NumPy) is a library that provides a drop-in, distributed backend for NumPy, targeting both clusters of CPUs and GPUs. cuNumeric shares a similar architecture as Legate Sparse, and our work focuses on maintainable and performant composability with cuNumeric. These systems target NumPy computations, and are not able to execute SciPy operations on sparse matrices, unlike Legate Sparse.
DaCe [38] accelerates annotated Python and NumPy programs onto distributed clusters of CPUs and accelerators by translating them into a high level representation called Stateful Dataflow Multi-graphs (SDFGs) [8] and performing a series of optimizations on this representation. While the SDFG representation allows for optimizations such as reordering and fusing computation, DaCe requires both code changes to use and explicit partitioning and message passing between memories. As such, DaCe inhabits a different part of the design space than the part targeted by Legate Sparse.
## 8 CONCLUSION
We have introduced Legate Sparse, a system that distributes and accelerates unmodified SciPy Sparse programs while composing with cuNumeric. Developing Legate Sparse involves solving composability problems across the software stack; we integrate the libraries at the distributed layer through a constraint-based partitioning scheme and a dynamic runtime system, and use the DISTAL compiler to generate kernel variants for different sparse data structures and heterogeneous processors. Moving forward, the strategy used in Legate Sparse provides a model that others may use to develop high-performance distributed libraries. We believe the ideas in Legate Sparse form a path towards an ecosystem of distributed libraries that compose and share data like the standard Python computing ecosystem.
ACKNOWLEDGEMENTS
We thank our anonymous reviewers for their valuable comments that helped us improve this manuscript. We thank Olivia Hsu, Scott Kovach, Shiv Sundram, Bobby Yan, AJ Root, Manya Bansal, Praneeth Kolichala, Pat McCormick and Torsten Hoefler for their comments and discussions on early stages of this manuscript. We thank Steven Dalton for his help with developing prototype linear solvers in Legate Sparse. Rohan Yadav was supported by an NSF Graduate Research Fellowship, and part of this work was done while Rohan Yadav was an intern at NVIDIA Research. This work was supported by the Advanced Simulation and Computing (ASC) program of the US Department of Energy’s National Nuclear Security Administration (NNSA) via the PSAAP-III Center at Stanford, Grant No. DE-NA0002373, by the Department of Energy’s Office of Advanced Scientific Computing Research (ASCR) under contract DE-AC03-76SF00515, and by NSF grant CCF-2216964.
REFERENCES
Boolean Functions with Ordered Domains in Answer Set Programming
Mario Alviano
University of Calabria, Italy
alviano@mat.unical.it
Wolfgang Faber
University of Huddersfield, UK
wf@wfaber.com
Hannes Strass
Leipzig University, Germany
strass@informatik.uni-leipzig.de
Abstract
Boolean functions in Answer Set Programming have proven a useful modelling tool. They are usually specified by means of aggregates or external atoms. A crucial step in computing answer sets for logic programs containing Boolean functions is verifying whether partial interpretations satisfy a Boolean function for all possible values of its undefined atoms. In this paper, we develop a new methodology for showing when such checks can be done in deterministic polynomial time. This provides a unifying view on all currently known polynomial-time decidability results, and furthermore identifies promising new classes that go well beyond the state of the art. Our main technique consists of using an ordering on the atoms to significantly reduce the necessary number of model checks. For many standard aggregates, we show how this ordering can be automatically obtained.
Introduction
Answer set programming (ASP) is a declarative language for knowledge representation and reasoning (Brewka, Eiter, and Truszczyński 2011). ASP programs are interpreted according to the stable model semantics (Gelfond and Lifschitz 1988; 1991), and several definitions were proposed for extensions of the basic language. A particularly useful construct of ASP are aggregate functions (Simons, Niemelä, and Soininen 2002; Liu et al. 2010; Bartholomew, Lee, and Meng 2011; Pelov, Denecker, and Bruynooghe 2007; Son and Pontelli 2007; Shen et al. 2014; Faber, Pfeifer, and Leone 2011; Ferraris 2011; Alviano et al. 2011; Gelfond and Zhang 2014), which allow for expressing properties on sets of atoms declaratively and in a space-efficient way. For example, aggregates are widely used to enforce functional dependencies, where a rule of the form
\( \perp \leftarrow \text{node}(X),\ \text{COUNT}\{C \mid \text{hasColour}(X, C)\} \neq 1 \)
in a graph-colouring problem asserts that the colour of a node is a functional property. On the other hand, aggregates often make the evaluation of programs harder. In fact, the three-valued evaluation of an aggregate, that is, its evaluation with respect to a partial interpretation \( I \), depends in general on the evaluation of the aggregate with respect to exponentially many totalisations of \( I \).
It is important to observe that many of the semantics proposed for interpreting ASP programs with aggregates are not limited to common aggregation functions such as COUNT, SUM, and AVG, but are instead defined for Boolean functions in general (Liu and Truszczynski 2006; Alviano and Faber 2015). In fact, from a semantic viewpoint an aggregate is seen as a black box whose relevant property is the induced (partial) Boolean function mapping (partial) interpretations to Boolean truth values. For example, \( \text{SUM}(\{1 : p, -1 : q\}) \geq 0 \) maps to true any (partial) interpretation assigning true to \( p \) or false to \( q \), and maps to false any (partial) interpretation assigning false to \( p \) and true to \( q \).
It is thanks to this association with Boolean functions that the several semantics for ASP programs with aggregates are defined clearly and uniformly: stable models are defined for programs with Boolean functions in general, and any aggregation function can be added to the language by specifying the associated Boolean function. Using Boolean functions also makes the same definitions of semantics applicable to similar language extensions, such as external or HEX atoms (Eiter et al. 2014).
Another advantage of this generality is the identification of semantic classes of programs with benign computational properties. For example, programs with monotone and convex (Liu and Truszczynski 2006) Boolean functions are associated with lower complexity classes in many cases (Faber, Pfeifer, and Leone 2011). Many other tractability results were proven for programs with non-convex aggregates of specific forms (Pelov 2004; Son and Pontelli 2007), providing ad-hoc proofs for each considered case. These results hold for stable models as defined by Pelov, Denecker, and Bruynooghe (2007) and Son and Pontelli (2007), for which tractability of the three-valued evaluation of aggregates implies tractability of the stability check.
Boolean functions were also considered in a related knowledge representation formalism called abstract dialectical frameworks (ADFs, Brewka and Woltran; Brewka et al., 2010; 2013). There, argumentation scenarios are modelled in terms of arguments and possible relationships between arguments. In the class of bipolar ADFs, relationships between arguments are restricted to supports and attacks, which decreases the computational complexity by one level in the polynomial hierarchy (Strass and Wallner 2015).
It is interesting to observe that, under some syntactic restrictions, the notion of stable model for ADFs by Brewka et al. (2013) coincides with the definition of stable model for ASP programs by Pelov, Denecker, and Bruynooghe (2007) and Son and Pontelli (2007), as was recently observed (Alviano and Faber 2015). We use a similar observation to define a new class of Boolean functions (and thus a new class of aggregates in ASP). In fact, introducing the class of bipolar Boolean functions is natural, and tractability is easily obtained by transferring the complexity results of Strass and Wallner (2015). It was unexpected, however, that many tractable cases proved by Pelov (2004) and Son and Pontelli (2007) are actually bipolar, and therefore could be obtained uniformly and in a straightforward way by our results.
While a significant number of standard aggregates leads to bipolar Boolean functions, there are still cases that are known to be polytime-checkable but are not bipolar. For example, \( \text{COUNT}(\{p, q\}) = 1 \) is convex but not bipolar, and \( \text{COUNT}(\{p, q\}) \neq 1 \) is polytime-checkable (Son and Pontelli 2007) but neither convex nor bipolar.
As the main contribution of this paper, we introduce a new class of Boolean functions whose three-valued evaluation can be done in polynomial time; we call them atom-orderable Boolean functions. Our results are also transferable to ADFs with the stable model semantics defined by Brewka et al. (2013) thanks to the results of Alviano and Faber (2015). In particular, this identifies a larger tractable class of ADFs under this semantics than previously known.
**Preliminaries**
Let \( A \) be a finite set of (propositional) atoms. Boolean functions map sets of atoms to Boolean truth values. For convenience, we will usually represent a Boolean function as the set of sets mapped to true. Hence, a **Boolean function** is a set \( C \subseteq 2^A \). In addition to this abstract representation of Boolean functions, we will also use common notations for denoting aggregates. Formally, let \( a_1, \ldots, a_m \in A \) be atoms and \( w_1, \ldots, w_m \in \mathbb{R} \) be real numbers \((m \geq 0)\). A **weighted atom set** over \( A \) is of the form \( S = \{w_1 : a_1, \ldots, w_m : a_m\} \).
For such a set, we denote \( A(S) = \{a_1, \ldots, a_m\} \). Using a comparison \( \circ \in \{<, \leq, =, \neq, \geq, >\} \) and a value \( v \in \mathbb{R} \), the following expressions represent Boolean functions:
\[
\begin{align*}
\text{SUM}(S) \circ v & \equiv \left\{ M \subseteq A(S) \,\middle|\, \left( \textstyle\sum_{a_i \in M} w_i \right) \circ v \right\} \\
\text{PROD}(S) \circ v & \equiv \left\{ M \subseteq A(S) \,\middle|\, \left( \textstyle\prod_{a_i \in M} w_i \right) \circ v \right\} \\
\text{COUNT}(S) \circ v & \equiv \left\{ M \subseteq A(S) \,\middle|\, |M| \circ v \right\} \\
\text{AVG}(S) \circ v & \equiv \left\{ M \subseteq A(S) \,\middle|\, \frac{\sum_{a_i \in M} w_i}{|M|} \circ v \right\} \\
\text{MIN}(S) \circ v & \equiv \left\{ M \subseteq A(S) \,\middle|\, \min \{w_i \mid a_i \in M\} \circ v \right\} \\
\text{MAX}(S) \circ v & \equiv \left\{ M \subseteq A(S) \,\middle|\, \max \{w_i \mid a_i \in M\} \circ v \right\}
\end{align*}
\]
Note that the definition of \( \text{COUNT}(S) \circ v \) does not depend on the weights in \( S \), and therefore in this case we will usually omit the weights and only specify the atoms in \( A(S) \).
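Read operationally, each of the expressions above is simply a predicate on subsets of \( A(S) \). The following Python sketch (the helper names are ours, not from the paper) encodes a weighted atom set as a dict and builds SUM, COUNT and MIN as executable Boolean functions:

```python
import operator

# Comparison symbol -> Python operator, covering {<, <=, =, !=, >=, >}.
CMP = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       "!=": operator.ne, ">=": operator.ge, ">": operator.gt}

def agg_sum(S, cmp, v):
    """SUM(S) cmp v as a Boolean function: a predicate on sets M of atoms."""
    return lambda M: CMP[cmp](sum(w for a, w in S.items() if a in M), v)

def agg_count(S, cmp, v):
    """COUNT(S) cmp v: the weights are irrelevant, only |M ∩ A(S)| matters."""
    return lambda M: CMP[cmp](len(set(S) & set(M)), v)

def agg_min(S, cmp, v):
    """MIN(S) cmp v; meaningful only when M contains at least one atom of S."""
    return lambda M: CMP[cmp](min(w for a, w in S.items() if a in M), v)

# SUM({1 : b, 1 : c}) > 0, the aggregate used in Example 1 below:
sum_bc = agg_sum({"b": 1, "c": 1}, ">", 0)
```

For instance, `sum_bc({"b"})` is true while `sum_bc(set())` is false, matching the set of models of the aggregate.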
A **logic program** \( P \) is a set of **rules** of the following form:
\[
a \leftarrow a_1, \ldots, a_l, \mathit{not}\ b_1, \ldots, \mathit{not}\ b_m, C_1, \ldots, C_n \quad (1)
\]
where \( l, m, n \in \mathbb{N} \) are natural numbers, \( a \in A \) (the head), \( a_1, \ldots, a_l, b_1, \ldots, b_m \in A \) and \( C_1, \ldots, C_n \) are Boolean functions (the body). We also write rule (1) as \( r = a \leftarrow B \) and use the notations \( B^+ = \{a_1, \ldots, a_l\} \), \( B^- = \{b_1, \ldots, b_m\} \) and \( B^C = \{C_1, \ldots, C_n\} \) to access body constituents.
A **partial interpretation** is represented by a pair \((X, Y)\) of sets of atoms with \( X \subseteq Y \subseteq A \), where the atoms in the **lower bound** \( X \) are true and the atoms not in the **upper bound** \( Y \) are false. Thus, the atoms in \( Y \setminus X \) are neither true nor false, that is, they do not have a classical truth value yet and are therefore **undefined**. A partial interpretation \((X, Y)\) is a model of a Boolean function \( C \), denoted \((X, Y) \models C\), if, for all \( Z \subseteq A \) with \( X \subseteq Z \subseteq Y \), we find \( Z \in C \). Given a partial interpretation \((X, Y)\) and a Boolean function \( C \), the (three-valued) **model checking** problem consists of verifying whether \((X, Y) \models C\) holds.
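The three-valued check quantifies over every completion \( Z \) of the undefined atoms, so a direct implementation enumerates the whole interval \([X, Y]\). A naive Python sketch (exponential in \( |Y \setminus X| \), which is exactly the cost the paper aims to avoid):

```python
from itertools import combinations

def interval(X, Y):
    """All Z with X ⊆ Z ⊆ Y: X extended by any subset of the undefined atoms."""
    undef = sorted(set(Y) - set(X))
    for r in range(len(undef) + 1):
        for extra in combinations(undef, r):
            yield set(X) | set(extra)

def models(X, Y, C):
    """(X, Y) |= C  iff  Z ∈ C for every Z in the interval [X, Y]."""
    return all(C(Z) for Z in interval(X, Y))

# The aggregate SUM({1 : b, 1 : c}) > 0, written out as a predicate:
sum_bc = lambda Z: len({"b", "c"} & Z) > 0
```

For instance, \( (\{c\}, \{a, c\}) \models \text{SUM}(\{1{:}b, 1{:}c\}) > 0 \) holds, while \( (\emptyset, \{a, c\}) \) does not, since the completion \( Z = \emptyset \) falsifies the aggregate.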
The semantics of a logic program \( P \) is given by the set of its stable models, where a stable model is a set of atoms satisfying some stability condition. In this paper, the stability condition is given by means of the least fixpoint of an inference operator. Formally, for each \( Y \subseteq A \), define an operator \( T_P^Y : 2^A \rightarrow 2^A \) such that:
\[
X \mapsto \{a \in A \mid a \leftarrow B \in P,\ B^+ \subseteq X,\ B^- \cap Y = \emptyset,\ (X, Y) \models D \text{ for all } D \in B^C\}.
\]
A set \( M \subseteq A \) is a **stable model** of \( P \) if and only if \( M \) is the \( \subseteq \)-least fixpoint of \( T_P^M \). Clearly, if \( (X, Y) \models D \) then \( (Z, Y) \models D \) for all \( X \subseteq Z \subseteq Y \), thus the operator \( T_P^Y \) is \( \subseteq \)-monotone and always has a unique \( \subseteq \)-least fixpoint.
**Example 1.** Consider \( A = \{a, b, c\} \) and logic program \( P \):
\[
\begin{align*}
a &\leftarrow \text{SUM}(\{1 : b, 1 : c\}) > 0 \\
b &\leftarrow a, \mathit{not}\ c \\
c &\leftarrow \mathit{not}\ b
\end{align*}
\]
The only two candidates for stable models are \( M = \{a, b\} \) and \( N = \{a, c\} \). For the first candidate, we find that:
- \( a \notin T_P^M(\emptyset) \) since \( (\emptyset, M) \not\models \text{SUM}(\{1 : b, 1 : c\}) > 0 \),
- \( b \notin T_P^M(\emptyset) \) since \( a \notin \emptyset \), and \( c \notin T_P^M(\emptyset) \) since \( b \in M \).
Thus \( T_P^M(\emptyset) = \emptyset \), which means that the \( \subseteq \)-least fixpoint of \( T_P^M \) is \( \emptyset \). Since \( \emptyset \neq M \), the set \( M \) is not a stable model of \( P \).
For the other candidate \( N = \{a, c\} \), we get \( T_P^N(\emptyset) = \{c\} \):
- \( a \notin T_P^N(\emptyset) \) since \( (\emptyset, N) \not\models \text{SUM}(\{1 : b, 1 : c\}) > 0 \),
- \( b \notin T_P^N(\emptyset) \) since \( a \notin \emptyset \), and \( c \in T_P^N(\emptyset) \) since \( b \notin N \),
and then \( T_P^N(\{c\}) = \{a, c\} = N = T_P^N(N) \) because:
- \( (\{c\}, N) \models \text{SUM}(\{1 : b, 1 : c\}) > 0 \) and
- \( (N, N) \models \text{SUM}(\{1 : b, 1 : c\}) > 0 \).
Thus the \( \subseteq \)-least fixpoint of \( T_P^N \) is the set \( N = \{a, c\} \), whence this set is also the only stable model of \( P \). ▲
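The fixpoint computation of Example 1 is mechanical enough to script. The sketch below (our encoding, not the authors') represents each rule as a tuple \((\text{head}, B^+, B^-, B^C)\) and checks stability by iterating \( T_P^M \) from the empty set:

```python
from itertools import combinations

def models(X, Y, C):
    """Naive three-valued check: C holds on every Z with X ⊆ Z ⊆ Y."""
    undef = sorted(Y - X)
    return all(C(X | set(extra))
               for r in range(len(undef) + 1)
               for extra in combinations(undef, r))

def T(P, X, Y):
    """One application of the operator T_P^Y to X."""
    return {head for head, bpos, bneg, bfun in P
            if bpos <= X and not (bneg & Y)
            and all(models(X, Y, C) for C in bfun)}

def is_stable(P, M):
    """M is a stable model iff M is the ⊆-least fixpoint of T_P^M."""
    X = set()
    while True:
        nxt = T(P, X, M)
        if nxt == X:
            return X == M
        X = nxt

# Example 1:  a <- SUM({1:b,1:c}) > 0;  b <- a, not c;  c <- not b.
sum_bc = lambda Z: len({"b", "c"} & Z) > 0
P = [("a", set(), set(), [sum_bc]),
     ("b", {"a"}, {"c"}, []),
     ("c", set(), {"b"}, [])]
```

As in the example, \( \{a, c\} \) passes the check and \( \{a, b\} \) does not.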
Note that the definition of stable model above is equivalent to the one given by Pelov, Denecker, and Bruynooghe (2007) and Son and Pontelli (2007). We reformulated it in this way to clarify that model checking is the potentially most complex part of verifying whether a given set of atoms \( M \) is a stable model of a logic program. In fact, dealing with undefined atoms during the computation of the least fixpoint of \( T_P^M \) is the main source of complexity for checking the stability of a set of atoms. This is the case because, in general, each Boolean function in \( P \) has to be evaluated with respect to a number of sets of atoms that is exponential in the number of undefined atoms. However, it is important to observe that in practice many Boolean functions need not be evaluated on exponentially many sets of atoms in order to answer the associated model checking problem. For these reasons, we focus on identifying sufficient conditions for guaranteeing tractability of model checking, which in turn implies tractability of the stability check for logic programs.
Actually, for some subclasses of Boolean functions, the model checking problem is already known to be tractable. One example is the class of convex Boolean functions, which intuitively do not contain "gaps" in their sets of models. Formally, a Boolean function $C \subseteq 2^A$ is convex if and only if for all $X \subseteq Y \subseteq Z \subseteq A$ we have: if $X \in C$ and $Z \in C$ then $Y \in C$. It is well-known that convex Boolean functions are closed under arbitrary conjunctions, but not under complementation and disjunction.
In the next section, we will introduce the notion of bipolar Boolean functions, a different class closed under complementation, conjunction and disjunction subject to some compatibility conditions. Later on, we will present the class of atom-orderable Boolean functions, an extension of both convex and bipolar that also includes other standard aggregates commonly used in logic programming.
### Bipolar Boolean Functions
Bipolarity has hitherto predominantly been defined and used in the context of ADFs (Brewka and Woltran 2010). Here, we define bipolarity for Boolean functions in general by extending the notions of monotone and antimonotone Boolean functions (Faber, Pfeifer, and Leone 2011, Def. 2.4).
**Definition 1.** Let $A$ be a set of atoms, $C \subseteq 2^A$ be a Boolean function, and $a \in A$.
- $C$ is monotone in $a$ iff for all $M \subseteq A$, we find that: $M \in C$ implies $M \cup \{a\} \in C$;
- $C$ is antimonotone in $a$ iff for all $M \subseteq A$, we find that: $M \notin C$ implies $M \cup \{a\} \notin C$.
Define the sets
$A^+_C = \{a \in A \mid C \text{ is monotone in } a\}$,
$A^-_C = \{a \in A \mid C \text{ is antimonotone in } a\}$.
A Boolean function $C \subseteq 2^A$ is:
- monotone iff $A = A^+_C$;
- antimonotone iff $A = A^-_C$;
- bipolar iff $A = A^+_C \cup A^-_C$.
Synonymously to $C$ is monotone in $a$, we say that $a$ is supporting in $C$; likewise, $C$ is antimonotone in $a$ iff $a$ is attacking in $C$. Being supporting or attacking is the polarity of the argument $a$ in $C$. As all atoms $a \in A^+_C \cap A^-_C$ are redundant, we also use the sets of strictly supporting arguments $A^+_C \setminus A^-_C$ and strictly attacking arguments $A^-_C \setminus A^+_C$.
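Definition 1 can be checked exhaustively for small vocabularies. A brute-force Python sketch (exponential in \(|A|\), for illustration only) computes \( A_C^+ \), \( A_C^- \) and bipolarity of a Boolean function given extensionally as a set of frozensets:

```python
from itertools import combinations

def powerset(A):
    s = sorted(A)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supporting(A, C):
    """A_C^+: atoms a with  M ∈ C  implies  M ∪ {a} ∈ C  for every M ⊆ A."""
    return {a for a in A
            if all(M not in C or (M | {a}) in C for M in powerset(A))}

def attacking(A, C):
    """A_C^-: atoms a with  M ∉ C  implies  M ∪ {a} ∉ C  for every M ⊆ A."""
    return {a for a in A
            if all(M in C or (M | {a}) not in C for M in powerset(A))}

def is_bipolar(A, C):
    """C is bipolar iff every atom is supporting or attacking."""
    return supporting(A, C) | attacking(A, C) == set(A)

# C = {{a}} over {a, b}: a is strictly supporting, b strictly attacking.
A = {"a", "b"}
C = {frozenset({"a"})}
```

Running `is_bipolar` on `C` confirms bipolarity, while the exclusive-or function `{frozenset({"a"}), frozenset({"b"})}` fails the test.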
First of all, we observe that the class of bipolar Boolean functions captures quite a range of standard aggregates, as shown below.
**Proposition 1.** Let $A$ be a vocabulary, $S$ be a weighted atom set over $A$ and $v \in \mathbb{R}$. The following Boolean functions are bipolar:
1. $\text{SUM}(S) \circ v$ for $\circ \in \{<, \leq, \geq, >\}$;
2. $\text{COUNT}(S) \circ v$ for $\circ \in \{<, \leq, \geq, >\}$;
3. $\text{AVG}(S) \circ v$ for $\circ \in \{<, \leq, \geq, >\}$;
4. $\text{MIN}(S) \circ v$ for $\circ \in \{<, \leq, =, \geq, >\}$;
5. $\text{MAX}(S) \circ v$ for $\circ \in \{<, \leq, =, \geq, >\}$.
**Proof.** 1. For $\circ \in \{<, \leq\}$, atoms $a_i$ with non-negative weights ($w_i \geq 0$) are attacking, atoms $a_i$ with non-positive weights ($w_i \leq 0$) are supporting. For $\circ \in \{\geq, >\}$, atoms with non-negative weights are supporting, atoms with non-positive weights are attacking.
2. For $\circ \in \{<, \leq\}$ all atoms are attacking, for $\circ \in \{\geq, >\}$ all atoms are supporting.
3. For $\circ \in \{<, \leq\}$, all atoms $a_i$ with weight $w_i \geq v$ are attacking, atoms $a_i$ with weight $w_i \leq v$ are supporting; $\circ \in \{\geq, >\}$ is symmetric (atoms $a_i$ with weight $w_i \geq v$ are supporting, those with weight $w_i \leq v$ are attacking).
4. For $\text{MIN}(S) = v$, all atoms $a_i$ with weight $w_i \geq v$ are supporting, and additionally all atoms $a_i$ with $w_i \neq v$ are attacking. For $\text{MIN}(S) < v$, all atoms are supporting, and additionally all atoms $a_i$ with $w_i \geq v$ are attacking; similarly, for $\text{MIN}(S) \leq v$, all atoms are supporting, and additionally all atoms $a_i$ with $w_i > v$ are attacking. For $\circ \in \{\geq, >\}$, all atoms $a_i$ with $w_i \circ v$ are supporting, all others attacking.
5. Dual to $\text{MIN}(S) \circ v$. $\square$
Comparing the different classes of Boolean functions that we introduced so far, we can observe that by definition all monotone Boolean functions are bipolar and convex, but the converse does not hold. It is similar for antimonotone Boolean functions.
**Example 2.** For vocabulary $A = \{a, b\}$, the Boolean function $C_{a \land \neg b} = \{\{a\}\}$ is bipolar and convex, but neither monotone ($b$ is strictly attacking) nor antimonotone ($a$ is strictly supporting).
Even more importantly, we have to clarify that the two notions bipolar and convex are independent of each other.
**Example 3.** Consider the vocabulary $A = \{a, b\}$. The Boolean function $C_{\neg a \lor b} = \{\emptyset, \{b\}, \{a, b\}\}$ is bipolar ($a$ is strictly attacking, $b$ is strictly supporting), but not convex (for $\emptyset \subseteq \{a\} \subseteq \{a, b\}$, we have that $\emptyset, \{a, b\} \in C_{\neg a \lor b}$ while $\{a\} \notin C_{\neg a \lor b}$). On the other hand, the Boolean function $C_{a \oplus b} = \{\{a\}, \{b\}\}$ is convex, but not bipolar (for example, $a$ is not supporting, as $\{b\} \in C_{a \oplus b}$ but $\{a, b\} \notin C_{a \oplus b}$; neither is $a$ attacking, as $\emptyset \notin C_{a \oplus b}$ but $\{a\} \in C_{a \oplus b}$).
Hence, even if there is some overlap, bipolar Boolean functions and convex Boolean functions seem to have orthogonal expressive capabilities. The two classes also differ with respect to closure under common set operators. In fact, it can be shown that the complement of a bipolar Boolean function is again bipolar but with the polarities switched.
**Proposition 2.** Let \( A \) be a set of atoms and \( C \subseteq 2^A \) be a bipolar Boolean function. Then the set \( \overline{C} = 2^A \setminus C \) is a bipolar Boolean function with \( A_{\overline{C}}^{+} = A_{C}^{-} \) and \( A_{\overline{C}}^{-} = A_{C}^{+} \).
Therefore, bipolar Boolean functions are closed under complementation, while this is not the case for intersection and union in general.
**Example 4.** Consider the vocabulary \( A = \{a, b\} \). For the bipolar Boolean functions \( C_{a \lor b} = \{\{a\}, \{b\}, \{a, b\}\} \) and \( C_{\neg(a \land b)} = \{\emptyset, \{a\}, \{b\}\} \) we get the resulting (non-bipolar) intersection \( C_{a \lor b} \cap C_{\neg(a \land b)} = \{\{a\}, \{b\}\} = C_{a \oplus b} \).
However, closure under union and intersection can be regained by stipulating a compatibility condition on bipolar functions.
**Definition 2.** Let \( A \) be a set of atoms and \( C, D \subseteq 2^A \) be bipolar Boolean functions. \( C \) and \( D \) are **compatible** iff
- \( A_{C}^{+} \cap A_{D}^{-} \subseteq A_{C}^{-} \cup A_{D}^{+} \), and
- \( A_{C}^{-} \cap A_{D}^{+} \subseteq A_{C}^{+} \cup A_{D}^{-} \).
Intuitively, two Boolean functions over the same vocabulary are compatible iff for each atom, the polarities of the arguments in the Boolean functions match point-wise. The polarities match if the argument is supporting in both Boolean functions or attacking in both Boolean functions. Put another way, for two Boolean functions to be compatible, whenever an argument is supporting in one Boolean function and attacking in the other, then it must be redundant in one of them, which is what the definition above says.
**Example 5.** Consider the vocabulary \( A = \{a, b, c\} \). The bipolar Boolean functions \( C_{a \land \neg b} = \{\{a\}, \{a, c\}\} \) (\( a \) supporting, \( b \) attacking, \( c \) redundant) and \( C_{c \land \neg b} = \{\{c\}, \{a, c\}\} \) (\( a \) redundant, \( b \) attacking, \( c \) supporting) are compatible: whenever an atom is supporting in one function and attacking in the other, it is redundant in one of them. ▲

### Atom-Orderable Boolean Functions

**Definition 3.** Let \( A \) be a set of atoms. A Boolean function \( C \subseteq 2^A \) is **atom-orderable** iff there is a total order \( \preccurlyeq \) on \( A \) such that for all \( X \subseteq Y \subseteq A \) with \( k = |Y \setminus X| \), the following two conditions are equivalent:

1. for all \( Z \subseteq A \) with \( X \subseteq Z \subseteq Y \), we find \( Z \in C \);
2. for all \( i \in \{0, \ldots, k\} \), we find \( X_i \in C \);

where \( X_0 = X \) and \( X_{j+1} = X_j \cup \{\min_{\preccurlyeq}(Y \setminus X_j)\} \) for \( 0 \leq j \leq k - 1 \).

**Example 6.** Consider the vocabulary \( A = \{a, b, c, d\} \) and the Boolean function \( C = \text{COUNT}(\{a, b, c, d\}) \neq 1 \). This function is neither bipolar nor convex, but atom-orderable with \( a \prec b \prec c \prec d \): to show that \( (\emptyset, A) \not\models C \), we need only check 1. \( \emptyset \in C \)? (yes) and 2. \( \{a\} \in C \)? (no), and are done (instead of naively searching among the 16 counterexample candidates). To show that \( (\{a, b\}, A) \models C \), we check that \( \{a, b\} \), \( \{a, b, c\} \) and \( \{a, b, c, d\} \) are contained in \( C \). ▲
It is easy to see that condition (2) can be checked in deterministic polynomial time (in \( n \)) whenever \( Z \in C \) can be decided in polynomial time and the ordering \( \preccurlyeq \) is given.
**Proposition 4.** Let \( A \) be a set of atoms, \( C \subseteq 2^A \) be an atom-orderable Boolean function with \( \preccurlyeq \) given, and \( X \subseteq Y \subseteq A \). Furthermore, assume that the problem “given \( Z \subseteq A \), is \( Z \in C \)?” is in \( \text{P} \). Then checking \( (X, Y) \models C \) is in \( \text{P} \).
We will usually represent atom-orderable Boolean functions by giving the ordering \( \preccurlyeq \); if we specify \( \preccurlyeq \) as a partial order only, then any total order extending \( \preccurlyeq \) will do as a witness for the Boolean function being atom-orderable. We first show that our new class generalises the class of bipolar Boolean functions.
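Proposition 4's check is straightforward to implement: walk the undefined atoms in \( \preccurlyeq \)-order and test only the \( k + 1 \) chain sets. A Python sketch, using the COUNT aggregate from the example with the order \( a \prec b \prec c \prec d \):

```python
def chain_models(X, Y, order, C):
    """Condition (2) of Definition 3: decide (X, Y) |= C for an atom-orderable C
    by testing only the chain X_0 = X, X_1, ..., X_k along the total order."""
    Xi = set(X)
    if not C(Xi):
        return False
    for a in order:                  # atoms in increasing ≼-order
        if a in Y and a not in Xi:   # next undefined atom
            Xi.add(a)
            if not C(Xi):
                return False
    return True

# COUNT({a, b, c, d}) != 1 with the order a ≺ b ≺ c ≺ d:
count_ne_1 = lambda M: len({"a", "b", "c", "d"} & M) != 1
order = ["a", "b", "c", "d"]
```

As in the example, \( (\emptyset, \{a, b, c, d\}) \) is refuted after only two membership tests (\( \emptyset \in C \) but \( \{a\} \notin C \)), instead of sixteen.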
**Proposition 5.** All bipolar Boolean functions are atom-orderable.
**Proof.** Let \( A \) be a set of atoms and \( C \subseteq 2^A \) be bipolar. Then \( A = A_{C}^{+} \cup A_{C}^{-} \). We define the partial order \( \preccurlyeq \) such that
\[
A_{C}^{-} \setminus A_{C}^{+} \;\prec\; A_{C}^{+} \setminus A_{C}^{-} \;\prec\; A_{C}^{+} \cap A_{C}^{-}.
\]
Let \( X \subseteq Y \subseteq A \) be arbitrary. We have to show that conditions (1) and (2) of Definition 3 are equivalent.
(Here, \( \min_{\preccurlyeq}(Y \setminus X_j) \) denotes the \( \preccurlyeq \)-least element of the set \( Y \setminus X_j \), which is unique since \( \preccurlyeq \) is total and \( Y \setminus X_j \) is non-empty.)
(1) ⇒ (2): Assume that for all \( Z \subseteq A \) with \( X \subseteq Z \subseteq Y \), we find \( Z \in C \). Recall that \( X_0 = X \) and, for \( 0 \leq j \leq k - 1 \), we set \( X_{j+1} = X_j \cup \{\min_{\preccurlyeq}(Y \setminus X_j)\} \). Clearly \( X \subseteq X_i \subseteq Y \) for all \( i \in \{0, \ldots, k\} \), whence (2) follows.
(2) ⇒ (1): Assume that for all \( i \in \{0, \ldots, k\} \), we find \( X_i \in C \). By our definition of \( \preccurlyeq \), there is in particular an \( i \in \{0, \ldots, k\} \) such that \( X_i = X \cup ((A^-_C \setminus A^+_C) \cap Y) \in C \). Now let \( Z \subseteq A \) with \( X \subseteq Z \subseteq Y \) be arbitrary. We have to show \( Z \in C \). Since \( X_i \) contains all attackers and no supporters (relative to \( Y \setminus X \)), we can reconstruct \( Z \) from \( X_i \) by “adding” supporters and “removing” attackers: there exist \( Z^+ \subseteq A^+_C \) and \( Z^- \subseteq A^-_C \) such that \( Z = (X_i \cup Z^+) \setminus Z^- \). Since \( X_i \in C \) and \( C \) is bipolar, it follows that \( Z \in C \) as desired. \( \square \)
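The construction in this proof can be sanity-checked by brute force: for the bipolar function \( C_{\neg a \lor b} \) from Example 3, order the strict attacker before the strict supporter and compare the chain check (condition (2)) against full enumeration (condition (1)) on every interval. A small sketch:

```python
from itertools import combinations

def naive_models(X, Y, C):
    """Condition (1): C holds on every Z with X ⊆ Z ⊆ Y (exponential)."""
    undef = sorted(Y - X)
    return all(C(X | set(extra))
               for r in range(len(undef) + 1)
               for extra in combinations(undef, r))

def chain_models(X, Y, order, C):
    """Condition (2): only the chain sets X_0, ..., X_k along the order."""
    Xi = set(X)
    if not C(Xi):
        return False
    for a in order:
        if a in Y and a not in Xi:
            Xi.add(a)
            if not C(Xi):
                return False
    return True

# C_{¬a ∨ b}: a is strictly attacking, b strictly supporting,
# so the proof's ordering puts a before b.
C = lambda M: ("a" not in M) or ("b" in M)
order = ["a", "b"]
subsets = [set(s) for r in range(3) for s in combinations(["a", "b"], r)]
agree = all(naive_models(X, Y, C) == chain_models(X, Y, order, C)
            for Y in subsets for X in subsets if X <= Y)
```

Here `agree` evaluates to true, while flipping the order to \( b \prec a \) makes the two checks disagree on \( (\emptyset, \{a, b\}) \): the chain misses the falsifying completion \( \{a\} \).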
This new class is furthermore a strict generalisation, as it allows us to treat a number of additional standard aggregates. This, together with Definition 3, is the main result of the paper.
**Theorem 6.** Let \( A \) be a vocabulary, \( S \) be a weighted atom set over \( A \) and \( v \in \mathbb{R} \). The following Boolean functions are atom-orderable:
1. \( \text{COUNT}(S) \circ v \) for \( \circ \in \{=, \neq\} \);
2. \( \text{SUM}(S) = v \);
3. \( \text{AVG}(S) = v \);
4. \( \text{MIN}(S) \neq v \);
5. \( \text{MAX}(S) \neq v \);
6. \( \text{PROD}(S) \circ v \) for \( \circ \in \{<, \leq, =, \geq, >\} \):
\[
\{a_i \mid w_i > 0\} \prec \{a_i \mid w_i < 0\} \prec \{a_i \mid w_i = 0\}
\]
and furthermore for all \( a_i \in A(S) \) with \( w_i > 0 \), we have \( a_j \preccurlyeq a_k \) iff \( w_j \leq w_k \) for \( 1 \leq j, k \leq m \). To figure out whether \( (X, Y) \models \text{PROD}(S) \geq v \), we essentially have to find the set \( Z \) with \( X \subseteq Z \subseteq Y \) such that the value \( \prod_{a_i \in Z \cap A(S)} w_i \) is \( \leq \)-minimal. If there are \( a_i \in Y \setminus X \) with \( w_i \leq 0 \), then they will be detected. Assuming that all weights are positive, the least possible product is given by \( \prod_{a_i \in Y \setminus X,\ 0 < w_i \leq 1} w_i \). Due to the increasing ordering of the positively weighted atoms, there will be a \( j \in \{0, \ldots, k\} \) such that \( X_j = X \cup \{a_i \mid 0 < w_i \leq 1\} \). That is, the atom set leading to the least possible product will be checked.
\[
\{a_i \mid w_i \geq 1\} \prec \{a_i \mid w_i < 0\} \prec \{a_i \mid 0 \leq w_i < 1\}
\]
and furthermore for all \( a_i \in A(S) \) with \( w_i < 0 \), we have \( a_j \preccurlyeq a_k \) iff \( w_j \geq w_k \) for \( 1 \leq j, k \leq m \). In essence, we have to find the set \( Z \) with \( X \subseteq Z \subseteq Y \) such that the value \( \prod_{a_i \in Z \cap A(S)} w_i \) is \( \leq \)-minimal. Our ordering achieves this by considering first all weights greater than 1 (to reach the maximal absolute value) and then all negative weights. If there is an overall odd number of negative weights, all of them will contribute to the least possible product. If the number of negative weights is even, then the least possible overall product is obtained by taking all positive weights and all but one (the one with the least absolute value) negative weights.
The remaining cases with \( \circ \in \{<, \leq\} \) can be reduced to the cases above by multiplying the given inequality with \(-1\). \( \square \)
Corollary 7. Let $A$ be a vocabulary, $S$ be a weighted atom set over $A$ and $v \in \mathbb{R}$. The following Boolean functions are atom-orderable:
1. $\text{COUNT}(S) \circ v$ for $\circ \in \{<,\leq,=,\neq,\geq,>\}$;
2. $\text{SUM}(S) \circ v$ for $\circ \in \{<,\leq,=,\geq,>\}$;
3. $\text{AVG}(S) \circ v$ for $\circ \in \{<,\leq,=,\geq,>\}$;
4. $\text{MIN}(S) \circ v$ for $\circ \in \{<,\leq,=,\neq,\geq,>\}$;
5. $\text{MAX}(S) \circ v$ for $\circ \in \{<,\leq,=,\neq,\geq,>\}$;
6. $\text{PROD}(S) \circ v$ for $\circ \in \{<,\leq,=,\geq,>\}$.
This result is optimal, as model checking is coNP-hard for the cases $\text{SUM}(S) \neq v$, $\text{AVG}(S) \neq v$ and $\text{PROD}(S) \neq v$ (Pelov 2004; Son and Pontelli 2007). As a final note, we observe that the class of atom-orderable Boolean functions is not closed under common set operations: for example, $\text{SUM}(S) > v \cup \text{SUM}(S) < v$ is equivalent to $\text{SUM}(S) \neq v$.
Related Work
Properties of logic programs with Boolean functions have been analysed extensively in the literature. Among the several semantics that were proposed (Ferraris 2011; Faber, Pfeifer, and Leone 2011; Gelfond and Zhang 2014), we have considered the one by Pelov, Denecker, and Bruynooghe (2007) and Son and Pontelli (2007) with the aim of extending the currently largest class of Boolean functions for which the stability check is tractable. In fact, concerning stable models by Ferraris (2011) and Faber, Pfeifer, and Leone (2011), it is known that convex Boolean functions are the complexity boundary for this task (Alviano and Faber 2013). Moreover, concerning stable models by Gelfond and Zhang (2014), it is known that the task is tractable in general if disjunction in rule heads is forbidden (Alviano and Leone 2015).
Complexity of logic programs with Boolean functions can be analysed by considering each specific case by itself (Pelov 2004; Son and Pontelli 2007), or by identifying some semantic classes such as monotone, antimonotone and convex that cover practical cases (Pelov 2004; Liu and Truszczynski 2006; Faber, Pfeifer, and Leone 2011). In this paper, we followed this second approach and introduced the notion of bipolarity in logic programming. Even if the definition stems from ADFs (Brewka and Woltran 2010), it is interesting to observe that many common aggregates are actually bipolar, as shown in Proposition 1. This is an original result, which eventually provides an alternative proof for several complexity results by Son and Pontelli (2007).
Since other known tractability results are not covered by the class of bipolar Boolean functions, we also introduced an extended class of atom-orderable Boolean functions, and proved that the missing cases fall in this class (see Theorem 6). Interesting cases are those associated with PROD, originally considered by Pelov (2004). In fact, several algorithms were given by Pelov (2004, Figures 5.1–5.3) in order to show tractability of model checking for Boolean functions induced by PROD. Within our approach, we only had to show the existence of an ordering for the aggregate atoms with the desired properties (see the proof of Theorem 6).
Stable models by Pelov, Denecker, and Bruynooghe (2007) and Son and Pontelli (2007) were recently extended to the disjunctive case by Shen et al. (2014). The notion of bipolar and atom-orderable Boolean functions can also be used in the disjunctive case, and the complexity results are expected to extend as well in all cases in which the disjunction is not a complexity source itself (for example, in the head-cycle free case; Ben-Eliyahu and Dechter, 1994).
The notion of bipolarity, and even more that of atom-orderability, may be useful for other constructs such as HEX atoms (Eiter et al. 2014), whose semantics is also defined by means of Boolean functions. In fact, knowing that an HEX atom is atom-orderable may make it possible to implement a more efficient evaluation algorithm, depending on the desired semantics (Shen et al. 2014).
Recently, Strass (2015) presented a syntactic counterpart of bipolar Boolean functions, that is, a fragment of classical propositional logic whose formulas are associated with exactly the bipolar Boolean functions. (Roughly, these “bipolar formulas” are in negation normal form, and no propositional atom may occur both positively and negatively in a formula.) It would certainly be useful to have a syntactic counterpart of atom-orderable functions as well.
Discussion
Boolean functions are among the most used extensions of logic programs. Identifying classes of Boolean functions with good computational properties is important from a practical viewpoint because any concrete implementation must face the complexity of the model checking problem. In this work, we introduced a unifying semantic class covering all known tractable cases. It is called atom-orderable because its main property is that the Boolean function’s arguments – its input atoms – can be ordered so that model checking can be efficiently done by evaluating Boolean functions with respect to linearly many sets of atoms. For common aggregates such an ordering is also efficiently computable, while in general the language can be extended by allowing the user to specify the ordering along with each Boolean function in the input program.
There are other advantages resulting from our approach. In fact, tractability of other aggregates can be easily proved by showing membership in the class of atom-orderable Boolean functions. This is the case, for example, of the median, that is, the number separating the higher half of a data sample from the lower half. It can be observed that $\text{MEDIAN}(S) \circ v$, for $\circ \in \{<,\leq,\geq,>\}$, is atom-orderable: for $S = \{a_1 : \alpha_1, \ldots, a_m : \alpha_m\}$, the ordering $\prec$ is such that $a_i \prec a_j$ if $\alpha_i \not\circ v$ and $\alpha_j \circ v$. It is also interesting to note that the missing cases, that is, $\text{MEDIAN}(S) \circ v$ with $\circ \in \{=,\neq\}$, can be captured by slightly extending the class of atom-orderable Boolean functions. In fact, in this case the aggregate atoms can be ordered by increasing weight, but in order to obtain a sound model checking procedure the ordering has to be checked in two directions, ascending and descending. It is natural to generalise atom-orderable Boolean functions in this direction.
Acknowledgements. Mario Alviano was partly supported by MIUR within project “SI-LAB BA2KOWN – Business Analytics to Know”, by Regione Calabria, POR Calabria FESR 2007-2013, within project “ITravel PLUS” and project “KnowRex”, by the National Group for Scientific Computation (GNCS-INDAM), and by Finanziamento Giovani Ricercatori UNICAL.
CoQuiAAS: A Constraint-based Quick Abstract Argumentation Solver
Jean-Marie Lagniez Emmanuel Lonca Jean-Guy Mailly
CRIL – U. Artois, CNRS
Lens, France
{lagniez,lonca,mailly}@cril.univ-artois.fr
Abstract—Nowadays, argumentation is a salient keyword in artificial intelligence. Argumentation techniques are particularly convenient for topics such as multiagent systems, where they allow the description of dialog protocols (using persuasion, negotiation, ...) or the analysis of on-line discussions; they also make it possible to handle queries where a single agent has to reason with conflicting information (inference in the presence of inconsistency, inconsistency measures). This very rich framework provides numerous reasoning tools, thanks to several acceptability semantics and inference policies.
On the other hand, the progress of SAT solvers in recent years, and more generally the progress of Constraint Programming paradigms, has led to powerful approaches that permit tackling theoretically hard problems.
The need for efficient applications solving the usual reasoning tasks in argumentation, together with the capabilities of modern Constraint Programming solvers, led us to study the encoding of the usual acceptability semantics into logical settings. We propose diverse uses of Constraint Programming techniques to develop a software library dedicated to argumentative reasoning. The library has the advantage of being generic and easily adaptable. We finally describe an experimental study of our approach for a set of semantics and inference tasks, and the behaviour of our solver during the First International Competition on Computational Models of Argumentation.
I. INTRODUCTION
An abstract argumentation framework [1] is a directed graph where the nodes represent abstract entities called arguments and the edges represent attacks between these arguments. This simple and elegant setting is used both in processes involving a single agent and in multiagent scenarios. In the single-agent case, the agent may have to reason from conflicting pieces of information, which leads her to build an argumentation framework from an inconsistent knowledge base in order to infer non-trivial conclusions [2]. In multiagent settings, argumentation is used to model dialogs between several agents [3] or to analyze on-line discussions between social network users [4]. The meaning of such a graph is determined by an acceptability semantics, which indicates which properties a set of arguments must satisfy to be considered as a "solution" of the problem; such a set of arguments is then called an extension.
Currently, a strong tendency in the argumentation community is the development of software to compute the different reasoning tasks on argumentation frameworks, with respect to the usual semantics\(^1\). Given a semantics \(\sigma\) and an argumentation framework \(F\), the most usual requests consist in computing one (or every) \(\sigma\)-extension of \(F\), and determining whether an argument belongs to at least one (or every) \(\sigma\)-extension of \(F\). It is well known that most pairs composed of a semantics and a request lead to a problem of high complexity [6], [7]; computing these tasks in reasonable time thus requires practically efficient tools. To reach this goal, we propose in this article the use of Constraint Programming techniques, since this domain already offers very efficient solutions to hard combinatorial problems. In this paper, we are in particular interested in propositional logic and some formalisms derived from it. More precisely, we propose encodings in conjunctive normal form (CNF) to solve problems from the first level of the polynomial hierarchy, and encodings in the Partial Max-SAT formalism for higher-complexity problems; these encodings allow us to handle reasoning tasks concerning four usual semantics and four usual requests. We take advantage of these encodings to solve these reasoning tasks, using state-of-the-art approaches and software which have proven their practical efficiency.
We have implemented these approaches to argumentation-based reasoning in a software library called CoQuiAAS. The aim of CoQuiAAS is twofold. First, we provide efficient algorithms to tackle the main requests for the usual semantics. Second, our framework is designed to be upgradable: one may easily add new parameters (requests, semantics) or implement new algorithms for the tasks which are already supported.
In this paper, we first present the basic notions concerning argumentation and the problems we reduce to: the satisfiability problem – also known as the SAT problem, which consists in deciding whether a propositional formula admits a model – and the search for Maximal Satisfiable Subsets (MSS) of constraints in a Partial Max-SAT instance. After this presentation, we detail the encodings we employ to translate argumentation problems into SAT and MSS problems. We present in Section IV the design of the library we provide, CoQuiAAS. At last, we give some experimental results of our approaches in Section V, and we compare the conception and the requests handled by our library to some existing software tools which tackle argumentation-based reasoning problems. The strength of CoQuiAAS is confirmed by the award received at ICCMA'15.
\(^1\)See [5] for more details.
A. Abstract Argumentation
Several models have been used to formalize argument-based reasoning. In this paper, we consider Dung’s framework, which is one of the most well-known settings for argumentation related problems [1].
Definition 1. An argumentation framework is a directed graph \( F = \langle A, R \rangle \) where \( A \) is a finite set of abstract entities called arguments and \( R \subseteq A \times A \) a binary relation called attack relation.
The intuitive meaning of the attack relation comes from the usual proceedings of a debate. If a first argument \( a_1 \) is put forward without any contradiction, there is nothing to prevent an agent from considering \( a_1 \) as true. But, if someone puts forward an argument \( a_2 \) (which is a priori acceptable) which attacks \( a_1 \), then \( a_1 \) cannot be accepted anymore, unless it is defended. This intuitive notion of defense can be formalized as follows.
Definition 2. Let \( F = \langle A, R \rangle \) be an argumentation framework and \( a_1, a_2, a_3 \in A \) three arguments.
- The argument \( a_1 \) is attacked by the argument \( a_2 \) in \( F \) if and only if \( (a_2, a_1) \in R \).
- We then say that \( a_3 \) defends \( a_1 \) against \( a_2 \) in \( F \) if and only if \( (a_3, a_2) \in R \).
These notions are generalized to attack and defense by a set of arguments \( E \subseteq A \).
- The argument \( a_1 \) is attacked by the set of arguments \( E \subseteq A \) in \( F \) if and only if \( \exists a_i \in E \) such that \( (a_i, a_1) \in R \).
- The set of arguments \( E \subseteq A \) defends \( a_1 \) against \( a_2 \) in \( F \) if and only if \( E \) attacks \( a_2 \).
For instance, in the argumentation framework described in Figure 1, the argument \( a_1 \) attacks the argument \( a_2 \), and \( a_2 \) defends itself against this attack. Intuitively, one may want at most one of these two arguments to be accepted, as considering one as accepted makes the second one attacked.
Figure 1. An example argumentation framework: \( a_1 \) and \( a_2 \) attack each other, \( a_3 \) and \( a_4 \) attack each other, and both \( a_3 \) and \( a_4 \) attack \( a_2 \).
When reasoning with an argumentation framework, an agent has to determine which arguments can jointly be accepted. Several properties may be defined for a set of arguments to be considered as a reasonable “solution” of the argumentation framework. Among these properties, two in particular are required by all the usual semantics of the literature:
- **conflict-freeness**: \( E \subseteq A \) is conflict-free in \( F \) if and only if \( \nexists a_i, a_j \in E \) such that \( (a_i, a_j) \in R \);
- **admissibility**: a conflict-free set \( E \subseteq A \) is admissible if and only if \( \forall a_i \in E \), \( E \) defends \( a_i \) against all its attackers.
Conflict-freeness and admissibility are required by each acceptability semantics proposed by Dung.
Definition 3. Let \( F = \langle A, R \rangle \) be an argumentation framework.
- An admissible set \( E \subseteq A \) is a complete extension of \( F \) if and only if \( E \) contains each argument that is defended by \( E \).
- A set \( E \subseteq A \) is a preferred extension of \( F \) if and only if \( E \) is a maximal element (with respect to \( \subseteq \)) among the complete extensions of \( F \).
- A conflict-free set \( E \subseteq A \) is a stable extension of \( F \) if and only if \( E \) attacks each argument which does not belong to \( E \).
- A set \( E \subseteq A \) is a grounded extension of \( F \) if and only if \( E \) is a minimal element (with respect to \( \subseteq \)) among the complete extensions of \( F \).
Given a semantics \( \sigma \), \( Ext_\sigma(F) \) denotes the \( \sigma \)-extensions of \( F \). The previous semantics are commonly denoted, respectively, \( CO, PR, ST \) and \( GR \).
These four acceptability semantics are illustrated in Section III for the example presented at Figure 1.
Dung proved that for each argumentation framework \( F \),
- \( F \) has exactly one grounded extension;
- \( F \) has at least one preferred extension;
- each stable extension of \( F \) is a preferred extension of \( F \);
- \( F \) may admit no stable extension.
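These definitions can be checked mechanically on small frameworks. The following brute-force enumerator is our own illustration (exponential, and nothing like the CoQuiAAS algorithms described later); it is instantiated on the framework of Figure 1 and confirms two of the properties above.

```python
from itertools import chain, combinations

def extensions(A, R):
    """Brute-force enumeration of Dung's semantics for a small AF."""
    attackers = {a: {b for (b, c) in R if c == a} for a in A}

    def conflict_free(E):
        return not any((a, b) in R for a in E for b in E)

    def defends(E, a):
        # every attacker of a is itself attacked by some member of E
        return all(attackers[b] & E for b in attackers[a])

    subsets = [set(s) for s in chain.from_iterable(
        combinations(sorted(A), k) for k in range(len(A) + 1))]
    complete = [E for E in subsets if conflict_free(E)
                and all(defends(E, a) for a in E)             # admissible
                and all(a in E for a in A if defends(E, a))]  # all defended are in E
    stable = [E for E in subsets if conflict_free(E)
              and all(attackers[a] & E for a in A - E)]
    preferred = [E for E in complete if not any(E < F for F in complete)]
    grounded = [E for E in complete if not any(F < E for F in complete)]
    return complete, stable, preferred, grounded

# The framework of Figure 1: a1 and a2 attack each other, a3 and a4
# attack each other, and a3, a4 both attack a2.
A = {"a1", "a2", "a3", "a4"}
R = {("a1", "a2"), ("a2", "a1"), ("a3", "a4"), ("a4", "a3"),
     ("a3", "a2"), ("a4", "a2")}
co, st, pr, gr = extensions(A, R)
assert len(gr) == 1              # exactly one grounded extension
assert all(e in pr for e in st)  # every stable extension is preferred
```

On this framework the enumerator finds the extensions used in the examples of Section III: two stable (and preferred) extensions, four complete extensions, and the empty grounded extension.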
Given a semantics \( \sigma \), several decision problems can be considered. First, an interesting question is to know if a given argument \( a \) is skeptically or credulously accepted. The status of \( a \) is given by the following definitions:
- \( F \) accepts \( a \) skeptically with respect to the semantics \( \sigma \) if and only if \( \forall \epsilon \in Ext_\sigma(F) \), \( a \in \epsilon \);
- \( F \) accepts \( a \) credulously with respect to the semantics \( \sigma \) if and only if \( \exists \epsilon \in Ext_\sigma(F) \) such that \( a \in \epsilon \).
Obviously, both these statuses collapse for the grounded semantics, since an argumentation framework possesses exactly one grounded extension. The credulous acceptance (respectively skeptical acceptance) decision problem is commonly denoted by \( DC \) (respectively \( DS \)).
Another interesting decision problem is \( Exist \), which is the problem of determining whether a non-empty extension exists or not for the given semantics.
The complexity of these decision problems is summed up in Table I (which gathers results from other publications [6], [7], [8]).
Obviously, the complexity of $\text{Exist}$ is a lower bound for the complexity of computing an extension, while the complexity of $DS$ is a lower bound for the complexity of enumerating the extensions.
B. Propositional Logic
The alphabet of propositional logic is the combination of a set of Boolean variables (generally denoted $PS$) and a set of three usual operators: the negation operator $\neg$ (unary), the conjunction operator $\land$ (binary) and the disjunction operator $\lor$ (binary), each of them being used to connect formulae (the variables themselves are atomic formulae). Insofar as a propositional formula is built in an inductive way (since the connectives apply to formulae), it can be seen as a rooted, directed, acyclic graph. Given an assignment of the set of propositional variables (called an interpretation), a propositional formula is evaluated to $true$ if and only if the root node of the formula is evaluated to $true$. In order to determine the value of this node under an interpretation, one may compute the value of each node in reverse topological order; indeed, the value of the leaves is known – since the leaves are the nodes which correspond to the variables, whose truth value is given by the interpretation – and the value of each internal node is determined by the semantics of the corresponding connective: a negation node ($\neg$) has the value $true$ if and only if its child has the value $false$, and a node $\land$ (respectively $\lor$) has the value $true$ if and only if both its children (respectively at least one of its children) have (respectively has) the value $true$. When an interpretation makes a formula $true$, we call it a model of the formula. A formula is said to be consistent if and only if it admits at least one model. As a convention, we represent an interpretation by the set of variables which are $true$ with respect to this interpretation.
In addition to the previous connectives, we define the implication ($\Rightarrow$, defined by $a \Rightarrow b \equiv \neg a \lor b$) and equivalence ($\Leftrightarrow$, defined by $a \Leftrightarrow b \equiv (a \Rightarrow b) \land (b \Rightarrow a)$) connectives. When a propositional formula $\Phi$ is equivalent to a conjunction $\phi_1 \land \ldots \land \phi_n$, we can represent $\Phi$ as a set $\{\phi_1, \ldots, \phi_n\}$.
In our study, we deal with encodings in NNF formulae, meaning some propositional formulae where the negation operator is only applied on variables (that is, the leaves of the formula). However, CoQuiAAS uses SAT solvers, which are only able to tackle propositional formulae in conjunctive normal form, so a translation step from NNF to CNF is required between the encodings which exist in the literature and the ones that we actually use in our software library. It does not influence the generality of our approaches, since each propositional formula can be translated in polynomial time into an equivalent CNF formula.
The CNF formulae correspond to conjunctions of disjunctions of literals (a literal is a propositional variable or its negation), that is, formulae written as $\bigwedge_{i=1}^{n} (\bigvee_{j=1}^{m_i} x_{i,j})$. These formulae are interesting since a disjunction of literals (called a clause) represents a constraint in a simple manner; so, a CNF formula is in fact a set of constraints, and a model of such a formula is an assignment of a truth value to each variable such that no constraint of the problem is violated.
Although this formalism is well suited to represent a problem, searching for a model of a CNF formula is theoretically complex (NP-hard [9]). However, modern SAT solvers are able to solve such problems very efficiently, gradually tackling larger and larger instances [10]. There exist problems which do not have any model, meaning that there is no interpretation such that each constraint is satisfied. In this case, an interesting question is to determine an interpretation which maximizes the number of satisfied constraints: this problem is called Max-SAT [10]. We can generalize this problem by giving a weight to each constraint – the question is then to maximize the sum of the weights of the satisfied constraints – this is the Weighted Max-SAT problem. If some constraints have an infinite weight (which means that they have to be satisfied), then the problems are said to be "partial": we thus consider the problems Partial Max-SAT and Weighted Partial Max-SAT.
Discovering an optimal solution of a Max-SAT instance allows one to determine a set of constraints from the initial formula which is consistent, and such that adding any other constraint from the initial problem makes it inconsistent [11]; a set of constraints which has this property is called a maximal satisfiable subset (MSS) [12]. Given the set of constraints $\phi$ of a problem and a set of constraints $\psi$ which is an MSS of $\phi$, we say that $\overline{\psi} = \phi \setminus \psi$ is a coMSS (or MCS) of the formula [11]. We remark that the optimal solutions of the Max-SAT problem correspond to only a subset of all the MSSes of a formula.
It is interesting to notice that the algorithms developed in Constraint Programming to solve Max-SAT problems or to extract an MSS of a formula generally use a classic SAT solver based on the Minisat incremental interface as a black box [13], [14], [15], performing the search through successive consistency tests. Concerning MSS extraction through such a solver, we can for instance mention the algorithms BLS [16] and CMP [17]. Let us conclude by noticing that there exist software tools dedicated to MSS extraction which use a Max-SAT solver as a black box [11].
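The MSS notion can be illustrated on a toy instance (our own example, unrelated to the argumentation encodings below): with the hard constraint $x_1$ and the soft clauses $x_2$, $\neg x_2$ and $x_3$, the two MSSes keep $x_3$ together with exactly one clause of the conflicting pair.

```python
from itertools import combinations

def satisfies(clause, model):
    # DIMACS-style literals: a positive integer i is true iff i is in model
    return any((lit > 0) == (abs(lit) in model) for lit in clause)

def all_mss(hard, soft, n_vars):
    """Enumerate the MSSes of `soft` under `hard` by exhaustive search
    over all interpretations (exponential; for illustration only)."""
    models = [set(m)
              for k in range(n_vars + 1)
              for m in combinations(range(1, n_vars + 1), k)
              if all(satisfies(c, set(m)) for c in hard)]
    sat_sets = {frozenset(i for i, c in enumerate(soft) if satisfies(c, m))
                for m in models}
    return [set(s) for s in sat_sets if not any(s < t for t in sat_sets)]

hard = [[1]]               # x1 must hold
soft = [[2], [-2], [3]]    # x2, not-x2, x3
mss = all_mss(hard, soft, 3)
```

Here `mss` contains the two index sets {0, 2} and {1, 2}; their complements with respect to the soft constraints are the corresponding coMSSes.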
III. LOGICAL ENCODINGS FOR ABSTRACT ARGUMENTATION
The literature already contains examples of encodings which allow the translation of some usual requests of argumentation into propositional logic [18]. We take advantage of the encodings proposed by Besnard and Doutre to propose approaches allowing to compute the extensions of an argumentation framework, and also to determine whether an argument is skeptically or credulously accepted by an argumentation framework. Our encodings are based on the language of NNF formulae, defined with the usual connectives on the set of Boolean variables $V_A = \{x_{a_i} \mid a_i \in A\}$, where $x_{a_i}$ denotes the fact that the argument $a_i$ is accepted by the given argumentation framework. For the sake of readability, we write $a_i$ rather than $x_{a_i}$ in the following.
<table>
<thead>
<tr>
<th>Semantics</th>
<th>GR</th>
<th>ST</th>
<th>PR</th>
<th>CO</th>
</tr>
</thead>
<tbody>
<tr>
<td>DC</td>
<td>P</td>
<td>NP-c</td>
<td>NP-c</td>
<td>NP-c</td>
</tr>
<tr>
<td>DS</td>
<td>P</td>
<td>coNP-c</td>
<td>$\Pi_2^p$-c</td>
<td>P-c</td>
</tr>
<tr>
<td>Exist</td>
<td>P</td>
<td>NP-c</td>
<td>NP-c</td>
<td>NP-c</td>
</tr>
</tbody>
</table>
Let us first recall the encoding of stable semantics defined in [18].
**Proposition 1.** Let \( F = (A, R) \) be an argumentation framework. \( E \subseteq A \) is a stable extension of \( F \) if and only if \( E \) is a model of the formula below:
\[
\Phi^F_{st} = \bigwedge_{a_i \in A} \Big[ a_i \Leftrightarrow \bigwedge_{a_j \in A \mid (a_j, a_i) \in R} \neg a_j \Big]
\]
In addition to the computation of a single extension, this encoding also allows us to answer other well-known requests for the stable semantics, such as the enumeration of the whole set of extensions or the acceptability statuses of the arguments.
**Proposition 2.** Let \( F = (A, R) \) be an argumentation framework and \( a_i \in A \) an argument.
- Computing a stable extension of \( F \) is equivalent to the computation of a model of \( \Phi^F_{st} \).
- Enumerating the stable extensions of \( F \) is equivalent to the enumeration of the models of \( \Phi^F_{st} \).
- Determining whether \( a_i \) is credulously accepted by \( F \) with respect to the stable semantics is equivalent to determining the consistency of \( \Phi^F_{st} \land a_i \).
- Determining whether \( a_i \) is skeptically accepted by \( F \) with respect to the stable semantics is equivalent to determining whether \( \Phi^F_{st} \land \neg a_i \) is inconsistent.
**Example 1.** When instantiating \( \Phi^F_{st} \) with the argumentation framework given in Figure 1, we obtain the formula
\[
\begin{align*}
(a_1 \iff \neg a_2),
(a_2 \iff \neg a_1 \land \neg a_3 \land \neg a_4),
(a_3 \iff \neg a_4),
(a_4 \iff \neg a_3)
\end{align*}
\]
where the models are \( \{a_1, a_3\} \) and \( \{a_1, a_4\} \), which correspond to the stable extensions of \( F \).
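The equivalences of Proposition 2 are easy to replay by exhaustive model enumeration; the following is a sketch of ours (CoQuiAAS delegates this to a SAT solver instead of enumerating interpretations):

```python
from itertools import combinations

def stable_models(A, R):
    """Models of the stable encoding: a_i is true iff no attacker of a_i is true."""
    attackers = {a: {b for (b, c) in R if c == a} for a in A}

    def is_model(E):
        return all((a in E) == (not attackers[a] & E) for a in A)

    return [set(E) for k in range(len(A) + 1)
            for E in combinations(sorted(A), k) if is_model(set(E))]

# the framework of Figure 1, as in Example 1
A = {"a1", "a2", "a3", "a4"}
R = {("a1", "a2"), ("a2", "a1"), ("a3", "a4"), ("a4", "a3"),
     ("a3", "a2"), ("a4", "a2")}
```

On this framework `stable_models(A, R)` returns exactly the two sets corresponding to \( \{a_1, a_3\} \) and \( \{a_1, a_4\} \), the stable extensions of Example 1.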
As we noticed previously, SAT solvers only deal with CNF formulae. To address this issue, we translate the NNF formula given above into the CNF formula \( \Psi^F_{st} \) given below.
\[
\Psi^F_{st} = \bigwedge_{a_i \in A} \Big( a_i \lor \bigvee_{a_j \in A \mid (a_j, a_i) \in R} a_j \Big) \land \bigwedge_{a_i \in A} \; \bigwedge_{a_j \in A \mid (a_j, a_i) \in R} \big( \neg a_i \lor \neg a_j \big)
\]
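The clause structure of \( \Psi^F_{st} \) can be generated directly from the attack relation: one "support" clause per argument plus one binary "conflict" clause per attack. The following generator is a sketch of ours using DIMACS-style integer literals, with an exhaustive model enumerator used only for cross-checking.

```python
from itertools import combinations

def stable_cnf(A, R):
    """Clauses of the CNF stable encoding; variable i+1 stands for the
    i-th argument in sorted order."""
    idx = {a: i + 1 for i, a in enumerate(sorted(A))}
    clauses = []
    for a in sorted(A):
        attackers = [b for (b, c) in R if c == a]
        clauses.append([idx[a]] + [idx[b] for b in attackers])  # support clause
        clauses.extend([-idx[a], -idx[b]] for b in attackers)   # conflict clauses
    return clauses, idx

def models(clauses, n):
    """All assignments (sets of true variables) satisfying every clause."""
    return [set(m) for k in range(n + 1)
            for m in combinations(range(1, n + 1), k)
            if all(any((l > 0) == (abs(l) in set(m)) for l in c)
                   for c in clauses)]

A = {"a1", "a2", "a3", "a4"}
R = {("a1", "a2"), ("a2", "a1"), ("a3", "a4"), ("a4", "a3"),
     ("a3", "a2"), ("a4", "a2")}
clauses, idx = stable_cnf(A, R)
```

With the sorted variable numbering (\( a_1 \mapsto 1, \ldots, a_4 \mapsto 4 \)), the satisfying assignments of the generated clauses are \(\{1, 3\}\) and \(\{1, 4\}\), matching the stable extensions of Example 1.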
Similarly to the stable semantics, Besnard and Doutre proposed an NNF encoding for the complete semantics.
**Proposition 3.** Let \( F = (A, R) \) be an argumentation framework. \( E \subseteq A \) is a complete extension of \( F \) if and only if \( E \) is a model of the formula below:
\[
\Phi^F_{co} = \bigwedge_{a_i \in A} \Big[ \Big( a_i \Rightarrow \bigwedge_{a_j \in A \mid (a_j, a_i) \in R} \neg a_j \Big) \land \Big( a_i \Leftrightarrow \bigwedge_{a_j \in A \mid (a_j, a_i) \in R} \; \bigvee_{a_k \in A \mid (a_k, a_j) \in R} a_k \Big) \Big]
\]
Then, as for the stable semantics, we translate this NNF formula into a CNF one in order to use SAT solvers to handle our requests. This time, we add auxiliary variables \( P_{a_i} \), each defined as equivalent to the disjunction of the attackers of the argument \( a_i \). These auxiliary variables allow us to write a CNF formula \( \Psi^F_{co} \), such that there is a bijection between the models of \( \Phi^F_{co} \) and the models of \( \Psi^F_{co} \), in a more elegant way than a naive translation from NNF into CNF.
\[
\Psi^F_{co} = \bigwedge_{a_i \in A} \Big( \neg P_{a_i} \lor \bigvee_{a_j \in A \mid (a_j, a_i) \in R} a_j \Big) \land \bigwedge_{a_i \in A} \; \bigwedge_{a_j \in A \mid (a_j, a_i) \in R} \big( P_{a_i} \lor \neg a_j \big) \land \bigwedge_{a_i \in A} \; \bigwedge_{a_j \in A \mid (a_j, a_i) \in R} \big( \neg a_i \lor \neg a_j \big) \land \bigwedge_{a_i \in A} \Big[ \bigwedge_{a_j \in A \mid (a_j, a_i) \in R} \big( \neg a_i \lor P_{a_j} \big) \land \Big( a_i \lor \bigvee_{a_j \in A \mid (a_j, a_i) \in R} \neg P_{a_j} \Big) \Big]
\]
**Proposition 4.** Let \( F = (A, R) \) be an argumentation framework and \( a_i \in A \) an argument.
- Computing a complete extension of \( F \) is equivalent to computing a model of \( \Phi^F_{co} \).
- Enumerating the complete extensions of \( F \) is equivalent to enumerating the models of \( \Phi^F_{co} \).
- Determining whether \( a_i \) is credulously accepted by \( F \) with respect to the complete semantics is equivalent to determining the consistency of \( \Phi^F_{co} \land a_i \).
- Determining whether \( a_i \) is skeptically accepted by \( F \) with respect to the complete semantics is equivalent to determining whether \( \Phi^F_{co} \land \neg a_i \) is inconsistent.
**Example 2.** When instantiating \( \Phi^F_{co} \) with the argumentation framework given in Figure 1, we obtain the formula
\[
\begin{align*}
(a_1 \Rightarrow \neg a_2) \land (a_1 \iff a_1 \lor a_3 \lor a_4),
(a_2 \Rightarrow \neg a_1 \land \neg a_3 \land \neg a_4) \land (a_2 \iff a_2 \land a_4 \land a_3),
(a_3 \Rightarrow \neg a_4) \land (a_3 \iff a_3),
(a_4 \Rightarrow \neg a_3) \land (a_4 \iff a_4)
\end{align*}
\]
where the models are the models of \( \Phi^F_{st} \), together with \( \{a_1\} \) and \( \emptyset \). Thus, the complete extensions of \( F \) are \( Ext_{CO}(F) = \{\{a_1, a_3\}, \{a_1, a_4\}, \{a_1\}, \emptyset\} \).
The notions of minimality and maximality with respect to \( \subseteq \) are not easy to express in propositional logic. So, we simply characterise a grounded (respectively preferred) extension as a minimal (respectively maximal) model with respect to \( \subseteq \) of \( \Phi^F_{co} \). We notice that we do not even have to compute minimal models to tackle the grounded semantics. Indeed, applying unit propagation on \( \Phi^F_{co} \) at decision level 0 – that is, considering the literals propagated without any assumption – is enough to compute the grounded extension.
**Proposition 5.** Let \( F = (A, R) \) be an argumentation framework and \( a_i \in A \) an argument.
- Computing the (only) grounded extension of \( F \) is equivalent to computing the literals propagated at decision level 0 in \( \Phi^F_{co} \).
- Determining whether \( a_i \) is accepted (both credulously and skeptically) by \( F \) with respect to the grounded semantics is equivalent to determining whether \( a_i \) is propagated at decision level 0 in \( \Phi^F_{co} \).
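Equivalently (this is a reformulation of ours, not the propagation-based procedure itself), the grounded extension is the least fixpoint of Dung's characteristic function, which is precisely the set that propagation at decision level 0 computes:

```python
def grounded(A, R):
    """Least fixpoint of F(E) = {a | every attacker of a is attacked by E}."""
    attackers = {a: {b for (b, c) in R if c == a} for a in A}
    E = set()
    while True:
        nxt = {a for a in A if all(attackers[b] & E for b in attackers[a])}
        if nxt == E:
            return E
        E = nxt

# in the framework of Figure 1 every argument is attacked and nothing is
# defended by the empty set, so the grounded extension is empty
A = {"a1", "a2", "a3", "a4"}
R = {("a1", "a2"), ("a2", "a1"), ("a3", "a4"), ("a4", "a3"),
     ("a3", "a2"), ("a4", "a2")}
```

On a chain such as \( a \rightarrow b \rightarrow c \), the iteration first accepts the unattacked \( a \), then the defended \( c \), mirroring successive rounds of unit propagation.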
Computing the preferred extensions of $F$ requires a slightly different encoding. We remark that a maximal model of $\Phi^F_{co}$ corresponds to an MSS of the weighted formula $\Phi_{pr}^F$ defined below.
**Proposition 6.** Let $F = (A, R)$ be an argumentation framework. $E \subseteq A$ is a preferred extension of $F$ if and only if $E$ is a MSS of the weighted formula
$$\Phi_{pr}^F = \{(\Phi_{co}^F, +\infty), (a_1, 1), \cdots, (a_n, 1)\}$$
It is not necessary to extract an MSS of a Partial Max-SAT instance for every request related to the preferred semantics. Indeed, determining whether an argument is credulously accepted is known to be $\text{NP}$-complete for the preferred semantics; it is in fact exactly equivalent to determining whether it is credulously accepted for the complete semantics.
**Proposition 7.** Let $F = (A, R)$ be an argumentation framework and $a_i \in A$ an argument.
- Computing a preferred extension of $F$ is equivalent to computing an MSS of $\Phi_{pr}^F$.
- Enumerating the preferred extensions of $F$ is equivalent to enumerating the whole set of MSSes of $\Phi_{pr}^F$.
- Determining whether $a_i$ is credulously accepted by $F$ with respect to the preferred semantics is equivalent to determining the consistency of $\Phi_{co}^F \land a_i$.
- Determining whether $a_i$ is skeptically accepted by $F$ with respect to the preferred semantics is equivalent to determining whether $a_i$ belongs to each MSS of $\Phi_{pr}^F$.
**Example 4.** Coming back to the argumentation framework $F$ given in Figure 1, $\Phi_{pr}^F$ is the weighted formula
$$\{(\Phi_{co}^F, +\infty), (a_1, 1), (a_2, 1), (a_3, 1), (a_4, 1)\}$$
whose MSSes are $\{a_1, a_3\}$ and $\{a_1, a_4\}$, which are the preferred extensions of $F$.
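The maximality condition behind Proposition 7 can also be checked by brute force: a preferred extension is a subset-maximal complete extension, just as an MSS is a subset-maximal satisfiable set of soft clauses. The attack relation is again our assumed reconstruction of Figure 1 (a1 <-> a2, a3 <-> a4, a3 -> a2, a4 -> a2).

```cpp
#include <cassert>
#include <set>
#include <utility>
#include <vector>

// Indices 0..3 stand for a1..a4; argument sets are encoded as bitmasks.
static const std::vector<std::pair<int, int>> REL =
    {{0, 1}, {1, 0}, {2, 3}, {3, 2}, {2, 1}, {3, 1}};
static const int NA = 4;

bool att(int x, int y) {
    for (const auto& p : REL)
        if (p.first == x && p.second == y) return true;
    return false;
}

// Complete = conflict-free + fixpoint of the characteristic function.
bool isComplete(unsigned S) {
    for (int x = 0; x < NA; ++x)                 // conflict-freeness
        for (int y = 0; y < NA; ++y)
            if (((S >> x) & 1u) && ((S >> y) & 1u) && att(x, y)) return false;
    unsigned D = 0;                               // defended arguments
    for (int a = 0; a < NA; ++a) {
        bool ok = true;
        for (int b = 0; b < NA; ++b) {
            if (!att(b, a)) continue;
            bool ctr = false;
            for (int c = 0; c < NA; ++c)
                if (((S >> c) & 1u) && att(c, b)) { ctr = true; break; }
            if (!ctr) { ok = false; break; }
        }
        if (ok) D |= 1u << a;
    }
    return D == S;
}

// Keep the complete extensions that are maximal w.r.t. set inclusion,
// the analogue of keeping only subset-maximal satisfiable soft clauses.
std::set<unsigned> preferredExtensions() {
    std::set<unsigned> pref;
    for (unsigned S = 0; S < (1u << NA); ++S) {
        if (!isComplete(S)) continue;
        bool maximal = true;
        for (unsigned T = 0; T < (1u << NA); ++T)
            if (T != S && (S & T) == S && isComplete(T)) { maximal = false; break; }
        if (maximal) pref.insert(S);
    }
    return pref;
}
```

Under this assumed relation, `preferredExtensions()` returns the bitmasks 5 and 9, i.e. {a1, a3} and {a1, a4}, in agreement with the MSSes of Example 4.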
IV. CoQuiAAS : DESIGN OF THE LIBRARY
We chose the language C++ to implement CoQuiAAS in order to take advantage of the Object Oriented Programming (OOP) paradigm and of its good computational efficiency. First, OOP allows us to give CoQuiAAS a clean design, well suited to maintaining and upgrading the software. Moreover, C++ offers high runtime performance, which is not the case of some other OOP languages. Finally, it eases the integration of coMSSExtractor, an underlying C++ tool we use to solve the problems under consideration.
coMSSExtractor [17] is a tool dedicated to extracting MSS/coMSS pairs from a Partial Max-SAT instance. As coMSSExtractor integrates the Minisat SAT solver [19] – used as a black box to compute MSSes – its API gives us access to the Minisat API for the requests that only require a plain SAT solver. This way, CoQuiAAS does not need a second solver to handle the whole set of requests it is intended to deal with.
The core of our library is the interface Solver, which contains the high-level methods required to solve the problems. The method initProblem performs all the required initialization given the input data. In our approaches, it initializes the SAT solver or the coMSS extractor with the logical encoding corresponding to the argumentation framework, the semantics, and the reasoning task to perform. The initialization step depends on the concrete realization of the Solver interface returned by the SolverFactory class, given the command-line parameters of CoQuiAAS. The method computeProblem computes the result of the problem, and displaySolution prints the result to the dedicated output stream using the format expected by the First International Competition on Computational Models of Argumentation.
The abstract class SATBasedSolver (respectively CoMSSBasedSolver) gathers the features and initialization common to every solver based on a SAT solver (respectively a coMSS extractor), for instance the method hasAModel, which returns a Boolean indicating whether the SAT instance built from the argumentation problem is consistent. Among the subclasses of SATBasedSolver, DefaultSATBasedSolver and its subclasses use the API of coMSSExtractor to take advantage of its SAT solver features, inherited from Minisat, to solve the problems. If the user wants to call another SAT solver instead of coMSSExtractor – as long as the given semantics is compatible with SAT encodings – a command-line option leads the SolverFactory to generate an instance of the class ExternalSATBasedSolver, which also extends SATBasedSolver. This solver class is initialized with a command to execute, so as to call any external software able to read a CNF formula written in the DIMACS format and to print a solution in the format of the SAT solver competitions; it executes the command provided to CoQuiAAS to perform the computation related to the problem. This feature enables, for instance, comparing the relative efficiency of several SAT solvers on argumentation instances. The same pattern is present in the coMSS-based part of the library, with the class CoMSSBasedSolver, which can be instantiated via the default solver DefaultCoMSSBasedSolver, which uses coMSSExtractor, or via the class ExternalCoMSSBasedSolver, which uses any external software whose input and output formats correspond to those of coMSSExtractor, for the request/semantics pairs covered by our coMSS-based approaches.
Our design is flexible enough to make CoQuiAAS easy to extend. For instance, it is simple to create a solver based on the API of a SAT solver other than coMSSExtractor: one only needs to create a new class MySolver which extends SATBasedSolver (and thus the interface Solver, which is the root of every solver) and to implement the required abstract methods (initProblem, hasAModel, getModel and addBlockingClause). It is also possible to extend the class Solver directly and to implement its methods initProblem, computeProblem and displaySolution to create any kind of new solver. For instance, to develop a CSP-based approach for argumentation-based reasoning, using encodings such as those from [20], we just need to add a new class CSPBasedSolver which implements the interface Solver, and to reproduce the process which led to the design of the SAT-based solvers, this time using the API of a CSP solver (or an external CSP solver).
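The class structure described above can be sketched as follows. The class and method names follow the text, but the empty bodies and the exact signatures are illustrative assumptions, not the actual CoQuiAAS code.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Root interface of every solver, as described in the text.
struct Solver {
    virtual ~Solver() = default;
    virtual void initProblem() = 0;      // build the logical encoding
    virtual void computeProblem() = 0;   // run the SAT / coMSS computation
    virtual void displaySolution() = 0;  // print in the competition format
};

// Features common to SAT-based solvers would live here.
struct SATBasedSolver : Solver {};

// A hypothetical user-provided solver extending the SAT-based branch.
struct MySolver : SATBasedSolver {
    void initProblem() override {}
    void computeProblem() override {}
    void displaySolution() override {}
};

// SolverFactory dispatches on the parsed command-line options (map opt).
struct SolverFactory {
    static std::unique_ptr<Solver> getSolverInstance(
            std::map<std::string, std::string>& opt) {
        if (opt["-solver"] == "MySolver")
            return std::unique_ptr<Solver>(new MySolver());
        return nullptr;  // unknown solver name
    }
};
```

The factory is the single place to touch when registering a new solver, which is what makes the extension procedure described above so lightweight.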
Once the solver is written, we just need to add an option to the command line which executes CoQuiAAS, and to update the method getSolverInstance in the SolverFactory, which knows the set of command-line parameters (stored in the map opt). For instance, the parameter -solver MySolver can be linked to the use of the class MySolver dedicated to the new solver. The code given below is sufficient to do that.
```cpp
if (!opt["-solver"].compare("MySolver"))
return new MySolver(...);
```
The interface Solver was conceived under the assumption that a solver is dedicated to a single problem and a single semantics. It is thus possible to implement a class which executes a unique algorithm suited to a single (problem, semantics) pair. For instance, [21] describes a procedure which determines whether a given argument belongs to the grounded extension of an argumentation framework. One could implement a class GroundedDiscussion realizing the interface Solver to solve the skeptical decision problem under the grounded semantics using this dedicated algorithm.
This default behaviour of CoQuiAAS does not prevent the implementation of classes able to deal with several requests for a given semantics, as long as the SolverFactory returns an instance of the right solver for the considered semantics. Thus, since each problem for a given semantics can be tackled through a SAT instance (or an MSS problem), we have simplified the design of our solvers by using a single class per semantics, taking advantage of the template design pattern. For instance, the method computeProblem in the class CompleteSemanticSolver is implemented as described in Algorithm 1.
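Model enumeration with a SAT solver (as suggested by the abstract method addBlockingClause) is typically done by repeatedly asking for a model and adding a blocking clause that rules it out. Below is a toy sketch of that loop, where a brute-force findModel stands in for Minisat; Algorithm 1 itself is not reproduced here.

```cpp
#include <cassert>
#include <set>
#include <vector>

// A clause is a list of DIMACS-style literals: +v means variable v is
// true, -v means v is false (v >= 1). A CNF is a list of clauses.
using Clause = std::vector<int>;
using CNF = std::vector<Clause>;

bool satisfies(unsigned model, const CNF& cnf) {
    for (const Clause& cl : cnf) {
        bool sat = false;
        for (int lit : cl) {
            int v = (lit > 0 ? lit : -lit) - 1;
            bool val = (model >> v) & 1u;
            if ((lit > 0) == val) { sat = true; break; }
        }
        if (!sat) return false;
    }
    return true;
}

// Brute-force oracle standing in for the SAT solver.
bool findModel(const CNF& cnf, int nvars, unsigned& model) {
    for (unsigned m = 0; m < (1u << nvars); ++m)
        if (satisfies(m, cnf)) { model = m; return true; }
    return false;
}

// Enumerate all models: after each model found, add the clause that
// negates every literal of the model ("blocking clause") and retry.
std::set<unsigned> allModels(CNF cnf, int nvars) {
    std::set<unsigned> models;
    unsigned m;
    while (findModel(cnf, nvars, m)) {
        models.insert(m);
        Clause block;
        for (int v = 0; v < nvars; ++v)
            block.push_back(((m >> v) & 1u) ? -(v + 1) : (v + 1));
        cnf.push_back(block);
    }
    return models;
}
```

For the single clause (x1 ∨ x2) over two variables, the loop finds the three satisfying assignments and then fails, stopping the enumeration.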
V. EXPERIMENTAL RESULTS
We have run experiments on the benchmarks provided by the organizers of the First International Competition on Computational Models of Argumentation to test the solvers before the competition. The first set of test cases contains a family of 20 instances described as real, whose number of arguments varies between 5,000 and 100,000, and 79 random instances whose number of arguments varies between 20 and 1,000. The second set of test cases contains random instances whose number of arguments varies between 200 and 400. CoQuiAAS was executed on computers equipped with 3.0 GHz Intel Xeon processors, 2.0 GB of RAM, and the GNU/Linux distribution CentOS 6.0 (64 bits). The timeout for each instance was set to 900 seconds.
We have given priority to studying the practical efficiency of our approach on enumeration problems, since the time to enumerate the extensions of an argumentation framework is an upper bound on the time required by the other problems. The results are given in Table II. We have aggregated the times by family of instances and report the average times. The symbol "−" indicates that the whole family reached the timeout; the other families were completely solved. These results correspond to the average runtime to initialize the problem (read the AF instance and translate it into a CNF formula) and to solve it.
<table>
<thead>
<tr>
<th>Family</th>
<th>#Inst</th>
<th>Gr</th>
<th>St</th>
<th>Pr</th>
<th>Co</th>
</tr>
</thead>
<tbody>
<tr>
<td>rdm20</td>
<td>25</td>
<td>< 0.01</td>
<td>0.01</td>
<td>< 0.01</td>
<td>< 0.01</td>
</tr>
<tr>
<td>rdm50</td>
<td>24</td>
<td>< 0.01</td>
<td>< 0.01</td>
<td>< 0.01</td>
<td>< 0.01</td>
</tr>
<tr>
<td>rdm200</td>
<td>24</td>
<td>< 0.01</td>
<td>0.5</td>
<td>5.32</td>
<td>1.57</td>
</tr>
<tr>
<td>rdm1000</td>
<td>6</td>
<td>0.25</td>
<td>−</td>
<td>−</td>
<td>−</td>
</tr>
<tr>
<td>real</td>
<td>20</td>
<td>7.25</td>
<td>7.51</td>
<td>8.55</td>
<td>6.88</td>
</tr>
<tr>
<td>XXX200</td>
<td>4</td>
<td>< 0.01</td>
<td>0.08</td>
<td>0.04</td>
<td>0.03</td>
</tr>
<tr>
<td>XXX300</td>
<td>64</td>
<td>0.02</td>
<td>12.53</td>
<td>34.8</td>
<td>21.36</td>
</tr>
<tr>
<td>XXX400</td>
<td>22</td>
<td>0.01</td>
<td>0.12</td>
<td>0.13</td>
<td>0.08</td>
</tr>
</tbody>
</table>
Our experiments show the efficiency of our approach on the competition instances, except for the rdm1000 instances, which reached the timeout without being solved for the stable, complete and preferred semantics. The average time to solve the XXX300 instances under these three semantics is particularly high compared to the average time required by the XXX400 instances for the same semantics. However, this global comparison hides the fact that the difference between these two families is explained by the presence of some particularly hard instances in the XXX300 family. Some of them require several dozen seconds, and even 280 seconds for the complete semantics and 653 seconds for the preferred semantics; however, the large majority of the instances are solved within a few seconds – 41 instances are solved in strictly less than one second for the complete semantics, and 37 instances for the preferred semantics.
VI. RELATED WORKS
The first experimental results, described in the previous section, show that CoQuiAAS is able to deal with large instances. The second interesting question about CoQuiAAS' efficiency is how it behaves compared to other existing tools.
Indeed, several similar approaches have been developed in recent years. ASPARTIX [22] proposes an implementation based on ASP techniques to compute the extensions of an argumentation framework for many semantics; it does not directly support skeptical and credulous acceptance queries. CEGARTIX [23], based on SAT techniques, focuses on the requests whose complexity lies at the second level of the polynomial hierarchy. Though efficient, the current version of this software is far less general than our library, which handles every usual request for every usual semantics and can easily be extended to other semantics. Finally, ArgSemSAT [24] is also a SAT-based tool, which allows the enumeration of the extensions for the usual semantics. As far as we know, none of these tools was designed with an easy integration of other kinds of constraint solvers in mind. These three pieces of software were the most efficient ones for tackling argumentation problems before the ICCMA 2015 competition.
Recently, the argumentation community has developed numerous approaches to solve argumentation problems, motivated by the organisation of ICCMA 2015: eighteen different solvers participated in the competition (including CoQuiAAS), covering varying numbers of tracks among the sixteen (semantics, problem) pairs. Some of them were updated versions of existing software, while many new solvers were developed for the competition.
Among these solvers, eight are able to tackle the whole range of (semantics, task) problems. After aggregating the performances of these eight solvers, CoQuiAAS received the "First Place" award, thanks to its computational efficiency and its capacity to deal with each semantics and each inference problem of the competition. Roughly speaking, CoQuiAAS is the most efficient tool among those which can deal with any argumentation problem. More details can be found on the website of the competition [5].
An interesting remark about the results of the competition is that the three awarded pieces of software are based on SAT technology (CoQuiAAS, ArgSemSAT and LabSAT-Solver [25]). This confirms that studying logical encodings of argumentation semantics is a promising approach to solve argumentation problems.
VII. CONCLUSION
In this paper, we presented our approaches based on SAT and MSS extraction to solve the most usual inference problems on abstract argumentation frameworks. We put forward the design of our software library CoQuiAAS, which has been developed to be easily maintainable and upgradeable. A first version of our software is available online\textsuperscript{2}. We also presented some preliminary experimental results showing that our library seems very efficient at solving the benchmarks proposed by the argumentation community.
Several research tracks are planned as future work. First, concerning inference on abstract argumentation frameworks, several semantics have been proposed in addition to Dung's four "classical" ones. Developing approaches for these semantics is an interesting challenge, in particular for the semi-stable [26] and the stage [27] semantics, for which even credulous acceptance lies at the second level of the polynomial hierarchy.
We also plan to extend our work to different extensions of Dung's framework, such as Weighted Argumentation Frameworks [28], Preference-based Argumentation Frameworks [29] or Value-based Argumentation Frameworks [30]. Adding the possibility to work with labellings [31] rather than extensions is also a natural direction for future work.
Finally, we want to improve the user experience. It would be interesting to have a visualisation tool to work with argumentation frameworks and to see the results of the requests performed on them. We plan to develop a Graphical User Interface, based on the CoQuiAAS computing engine, to improve the quality of the interactions between the user and the system.
Acknowledgment
This work benefited from the support of the project AMANDE ANR-13-BS02-0004 of the French National Research Agency (ANR). It is also funded in part by the Conseil Régional Nord-Pas de Calais and the FEDER program.
References
\begin{enumerate}
\item P. M. Dung, “On the acceptability of arguments and its fundamental role
in nonmonotonic reasoning, logic programming, and n-person games,”
\item P. Besnard and A. Hunter, Elements of Argumentation. MIT Press,
2008.
\item L. Amgoud and N. Hameurlain, “An argumentation-based approach for
\item J. Leite and J. Martins, “Social abstract argumentation,” in IJCAI’11,
2011, pp. 2287–2292.
\item M. Thimm and S. Villata, "First International Competition on Computational Models of Argumentation (ICCMA'15)," 2015, see http://argumentationcompetition.org/2015/.
\item S. Coste-Marquis, C. Devred, and P. Marquis, “Symmetric argumenta-
\item P. E. Dunne and M. Wooldridge, “Complexity of abstract argumenta-
tion,” in Argumentation in Artificial Intelligence. I. Rahwan and G. R.
\item W. Dvořák and S. Woltran, “On the intertranslatability of argumenta-
\textsuperscript{2}see http://www.cril.univ-artois.fr/coquiaas
DEVELOPMENT OF A TASK ANALYSIS TOOL TO FACILITATE USER INTERFACE DESIGN
PREPARED BY: Dr. Jean C. Scholtz
ACADEMIC RANK: Assistant Professor
UNIVERSITY AND DEPARTMENT: Portland State University
Computer Science Department
NASA/KSC
DIVISION: Shuttle Project Engineering Office
BRANCH: Process Integration Branch
NASA COLLEAGUE: Arthur E. Beller
DATE: August 21, 1992
CONTRACT NUMBER: University of Central Florida
NASA-NGT-60002 Supplement: 8
Acknowledgments
I would like to thank Dr. Loren Anderson and Ms. Kari Stiles of the University of Central Florida and Ms. Carol Valdes of the Kennedy Space Center for their efforts in making the NASA/ASEE Summer Faculty Fellowship Program an enjoyable and educational summer. I would also like to thank the many NASA, Boeing and Lockheed personnel who provided answers to a wide variety of questions, demonstrated software systems and provided hardware and software support. The list of names is too long to include here but without the expertise of all this work could not have been accomplished.
I would especially like to thank all the employees in the Shuttle Project Engineering Office for making me feel so welcome during the summer. Having a summer professor in this area was a first but hopefully, will not be the last.
Abstract
A good user interface is one that facilitates the user in carrying out his task. Such interfaces are difficult and costly to produce. The most important aspect in producing a good interface is the ability to communicate to the software designers what the user's task is. The Task Analysis Tool is a system for cooperative task analysis and specification of the user interface requirements. This tool is intended to serve as a guide to development of initial prototypes for user feedback.
Summary
The user interface is an extremely important part of software. Computer users today are not, in general, computer experts but experts in other domains who are dependent on computer software to facilitate their tasks. Developing interfaces for these users is an expensive and time consuming task. It is often difficult for the software developers to understand the user's domain well enough to come up with a usable interface. An iterative design process based on the concept of prototyping is becoming popular today. In this methodology a rapidly developed version of the software is used to obtain user feedback. This version lacks much of the eventual functionality and is used mainly to test out ideas the designers have about how the user interface should look. While the use of prototyping has proven to be valuable in the production of good interfaces, designers are still faced with the problem of developing initial prototypes and incorporating user feedback into the design of the interface.
This work presents a tool to be used in cooperative task analysis. End users and human-computer interaction personnel work together with the Task Analysis Tool to produce a task analysis and a rough sketch of an interface to support these tasks. The tool holds promise as a communication medium between end users and software designers. Better communication means fewer iterations in the interface design while still producing more usable interfaces.
# Table of Contents
I. Introduction
1.1 The Design of User Interfaces
1.2 Obstacles to Iterative Design
II. Task Analysis
2.1 Description of Task Analysis
2.2 Obstacles in Performing Task Analysis
III. The Task Analysis Tool
3.1 Objective of the Task Analysis Tool
3.2 Information Collection in the Task Analysis Tool
3.3 Status of the Task Analysis Tool
3.4 Description of the Task Analysis Tool
IV. Example of the Use of TAT
4.1 Description of the Example Task of Electronic Review and Approval
4.2 Example of the Process used to Sketch an Interface
V. Future Plans for Testing and Using TAT
5.1 Uses for TAT Output
5.2 Additions to TAT
5.3 Functionality Needed
5.4 An Initial Test of TAT
5.5 Testing
VI. Conclusions
VII. References
# List of Figures
<table>
<thead>
<tr>
<th>Figure</th>
<th>Title</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Initial Display of TAT</td>
</tr>
<tr>
<td>2</td>
<td>Initial Display of TAT Fully Expanded</td>
</tr>
<tr>
<td>3</td>
<td>Information Collection Display of TAT</td>
</tr>
<tr>
<td>4</td>
<td>Template for Interface Sketch</td>
</tr>
<tr>
<td>5</td>
<td>TAT HelpScreen</td>
</tr>
<tr>
<td>6</td>
<td>Another TAT Help Screen</td>
</tr>
<tr>
<td>7</td>
<td>TAT Display for Information Types</td>
</tr>
<tr>
<td>8</td>
<td>TAT End Display</td>
</tr>
<tr>
<td>9</td>
<td>TAT Display for Example and Blank Template for Interface Sketch</td>
</tr>
<tr>
<td>10</td>
<td>Information Collection Display from TAT for Task "review"</td>
</tr>
<tr>
<td>11</td>
<td>Sketch of Display Generated for "review"</td>
</tr>
<tr>
<td>12</td>
<td>Sketch of Display Generated for "select"</td>
</tr>
<tr>
<td>13</td>
<td>Sketch of Display Generated for "approval"</td>
</tr>
<tr>
<td>14</td>
<td>Portion of Data Generated for Review and Approval Process</td>
</tr>
<tr>
<td>15</td>
<td>Viewpoint 2: Review and Approval from Engineering View</td>
</tr>
<tr>
<td>16</td>
<td>Viewpoint 2: Sketch of Interface for "create"</td>
</tr>
</tbody>
</table>
I. Introduction
1.1 The Design of User Interfaces
An important consideration in software development today is the interaction of the user with the software. This concern has emerged due to the changing nature of users of computer systems and the increasing complexity of current software systems. Today's users are not restricted to "computer hackers"; they are, in fact, using software systems merely as a tool to aid in different aspects of their jobs. Therefore, the amount of time users have to devote to learning and using the system is limited, as is the amount of frustration they will tolerate. To add to this problem software systems are becoming increasingly complex. This presents a problem for both users and developers. Users often have a difficult time in accessing all the desired functionality. As the interface is essentially the view that the user has of a system, he must be able to clearly see through this interface to the functionality of the software (Shackel, 1988). Instead many of the systems today present a bewildering array of choices for the user. Developers are also faced with maintaining and augmenting complex code. The end result is that dealing with the software either as a user or developer requires a large amount of time and hence is a costly effort.
In order to address these problems an iterative process of software development is stressed. The underlying principle is that changes to the software are easier and less costly to make early in the development cycle. Prototyping is one way of collecting information from the user about the usability of the system early in the software design process (Wilson and Rosenberg, 1988). The user's view of a given software system is determined largely by the interface to that system. That is, system functionality that is not readily accessible in the interface is nonexistent as far as the user is concerned. The software interface should provide a good match with the task that the user must perform with the software. A prototype of the interface is often used to collect users' reactions and feedback to such things as terminology and arrangement of menu items, format of information presented and sequence of movement. This information is then quickly incorporated into the prototype and more user feedback is collected.
Bødker and Grønbæk (1991) have studied the use of cooperative prototyping. They contrast this approach to one where designers develop prototypes on their own using information supplied by users. They view cooperative prototyping as a way to overcome problems in developing applications that more closely match user tasks. Initial prototypes are used to make the views of the participants concrete. Prototypes can be refined or replaced as users and designers actively participate in the design process. HCI (human-computer interaction) personnel in this approach need to become familiar with the tasks of the users. Initial prototypes are set up by the designers based on their understanding of the user's tasks. The authors found that both well constructed prototypes which display sample user data and mock-ups which allowed more flexibility in interaction were helpful in obtaining feedback. This approach still relies on an iterative method with the designers having the responsibility for construction of the initial prototype.
1.2 Obstacles to Iterative Design
This iterative procedure results in an interface that the user is pleased with and in timely feedback to the developers. There are, however, several obstacles to an efficient use of such a procedure in the real world. In many instances, software is developed on a contractual basis. This means that the product is agreed upon prior to any design. This agreement usually takes the form of written requirements based mainly on the functionality which the software is to provide. Specifications for the user interface usually do not exist, or if they do, they are merely platform and style specifications. In addition, the requirements are usually generated at the management level. The management level on the developmental side agrees to these. The actual software developers and the actual end users may or may not have participated in this interaction. Therefore, the interface produced often differs drastically from what the users may have expected.
Changes in design are difficult to make in this type of environment. Developers are often removed from the end users both organizationally and physically. Time constraints often make it difficult for the users to schedule large blocks of time or a series of sessions to work with the developers. Therefore, there is little chance for iterative development. Even when iteration exists, the necessary changes may not be incorporated due to the contractual agreement.
Large product development organizations also contain obstacles to user involvement as documented by Grudin (1991). Product development organizations are companies that develop and sell interactive software applications. The development process is separated into two parts: events prior to the start of the project and events during development. Although the time line that separates these processes is difficult to define, budgets and personnel are allocated according to these distinctions. The high level product description used in the early stage generally does not include the user interface despite the fact that it is difficult to draw a line between functionality and the interface. User involvement and interface issues are, therefore, issues that are addressed during development.
Moving some of this involvement to the design phase is a goal of HCI personnel. In Grudin's study, rapid prototyping was found to be a useful tool in facilitating cooperative design. Moreover, the need to communicate information about the user's computer use directly to the developer was identified. Therefore, tools and methodologies that can be used to move user involvement to an earlier phase in the software development process are needed. Methods for developing and communicating user interface specifications to software developers are also greatly needed. The work presented here discusses a tool to accomplish this. This tool captures task analysis information directly from the end user and develops a rough initial prototype of the interface.
II. Task Analysis
2.1 Description of Task Analysis
Task analysis is a methodology for describing and analyzing the performance demands made on the human element of a system. The goal of task analysis is a description of the total human-machine system, consisting of human performance requirements, hardware performance requirements, and software performance requirements. Hardware and software requirements are much easier to obtain than are the human performance requirements and their interactions with the rest of the system.
The main objective of task analysis is to explore the relationships between the user's performance and the properties of the system. The focus is on designing a user interface to a system which is efficient and compatible with the view the user has of task performance. Design of dialogs in the interface is also a branch of task analysis. Maddix (1990) states that much dialog is based on an incomplete understanding of what kinds of interaction might take place between a typical user and the system. In doing task analysis the user's interaction with a given system is viewed with respect to the objects in the system and operations that the user performs on those objects. States in the system are changed by performing a sequence of operations on a series of objects. A goal can be described as a certain state within the system. This goal can be achieved by applying sequences of operations to objects in a given state. Guindon (1988) identifies these steps in task analysis:
1. Identify objects
2. Identify operations
3. Identify the sequence of operations used
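The object/operation/state view behind these steps can be made concrete with a small data model. The following Python sketch is purely illustrative (the class and function names are hypothetical, not part of Guindon's formulation): a goal is a state of the system, and it is reached by applying a sequence of operations to objects.

```python
from dataclasses import dataclass, field

# Illustrative data model for the object/operation/state view of task
# analysis; all names here are hypothetical, not Guindon's notation.

@dataclass(frozen=True)
class Operation:
    name: str          # e.g. "redline", "comment"
    target: str        # the object the operation acts on

@dataclass
class SystemState:
    facts: set = field(default_factory=set)

    def apply(self, op: Operation) -> "SystemState":
        # Performing an operation on an object yields a new state.
        return SystemState(self.facts | {(op.name, op.target)})

def reaches_goal(start: SystemState, ops, goal_fact) -> bool:
    """A goal is a state; it is reached by applying a sequence of
    operations to objects, starting from the initial state."""
    state = start
    for op in ops:
        state = state.apply(op)
    return goal_fact in state.facts

# A two-step sequence that reaches the goal state.
seq = [Operation("open", "document"), Operation("approve", "document")]
print(reaches_goal(SystemState(), seq, ("approve", "document")))  # True
```

The point of such a model is only to show how identifying objects, operations, and their sequence is enough to characterize goal attainment.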
The human constituents of a system are responsible for recognizing and interpreting states produced by the hardware and software systems. If these states have not produced the desired goals, then it is necessary for the human to interact with the software to produce the desired state. Human error in carrying out these functions cannot be completely eliminated, but providing systems that are well matched to the users' tasks helps reduce the margin for error.
Don Norman (1986) identifies the gulf of execution and the gulf of evaluation in human-computer interaction. The gulf of execution results when the user is unable to correctly select the necessary sequence of operations to perform in order to produce the desired goal. The gulf of evaluation results from an incorrect interpretation or recognition of the state produced by a sequence of operations. The user bases this interpretation on the feedback produced by the system. The gulf is created by the difference between the user's view of what is happening and what is actually happening in the system. This distance is reduced as the user's view more closely matches the system view. Therefore, the interface and, consequently the dialog, between the user and the system must be the vehicle that maps the user's task into the functional components provided by the software.
A task analysis can be used to provide data about the user component of the system. The major problem then becomes how to map this task analysis data into an interface description that can be used to guide software developers in system design and implementation. The data produced by a task analysis can take many forms depending upon the purpose for which that data was collected. In the case of this work, the concern is with the user interface, so the task analysis will focus on interface items such as data displayed, format of the data, actions or operations on that data, and the sequence in which the tasks are performed.
2.2 Obstacles in Performing Task Analysis
In order to produce a task analysis, human-computer interaction personnel need to observe the user carrying out the task and to identify the objects, operations, and sequences used. Additionally, in carrying out a task analysis for development of a software system, one must keep in mind that the current task will be changed by this automation. This means that the present task analysis must be examined to ascertain the effects that automation will have, or flexibility will have to be built into the system to accommodate future changes in task performance.
Many tasks involve a cognitive aspect. Users choose objects and operations in the system based on domain knowledge. In order to produce an effective interface it is necessary to understand these decisions. As the domains become increasingly complex, this presents a larger obstacle to carrying out representative task analyses. Either the human-computer interaction person needs to learn the domain or the domain experts need to learn how to do task analysis.
In addition, domain experts often have difficulty in explicitly stating portions of their task. Portions of any expert's job become routine after a period of time, and these routine cognitive tasks become difficult to verbalize. The human-computer interaction personnel are therefore responsible for recognizing missing portions and probing further to extract this knowledge from the expert. This puts additional demands on HCI personnel to understand the domain.
III. The Task Analysis Tool
3.1 Objective of the Task Analysis Tool
The main objective in the development of a tool to use in task analysis is to facilitate communication between the domain expert, HCI personnel, and the software designer. The following is a quote from Walsh, Lim and Long (1988):
"Human factors engineers complain that their contribution to iterative systems design is typically sought late, that is following system implementation. Software engineers, in contrast, complain that the human factors contributions to system design are neither timely, appropriate nor implementable."
The Task Analysis Tool (TAT) is designed to be used interactively by the domain expert under HCI supervision. Data collected during an interactive session will be analyzed by HCI personnel and given to the software designer to use as a guide to design of the interface. The data collected is saved in two forms: textual information that can be analyzed later for consistency issues within and between interfaces, and, more importantly for the user, a rough sketch of the interface that is generated as information is entered. These screens can then be played back by the end user to help ensure that the displays give complete information in order to accomplish the given task. The Task Analysis Tool can thus help the end user form a concrete description of his task. As visual feedback is provided immediately, the user can match these results to his conceptual model. Corrections can be made to the interface sketch if the user finds that it is incorrect.
The fact that a rough sketch of the interface is produced gives the user a version that is easier to survey for completeness than lists of functional requirements. The rough sketch can be used, in addition to functional requirements, to drive design. Having this sort of information at an early stage of design should mean that a better prototype can be developed initially, cutting down on the number of iterations needed to obtain user information. When given to the software developer, this rough interface design serves to illustrate the control flow that the user follows. The task analysis tool is also a vehicle for agreement of expectations between users and developers.
3.2 Information Collection in the Task Analysis Tool
There are many definitions of tasks but a general agreement is that a task is composed of a set of human actions that contribute to some objective and ultimately to the output goal of a system. The content of a task can be more specifically defined once the objective of a task is identified.
Drury, Paramore, Van Cott, Grey and Corlett (1987) give the following characteristics useful in defining tasks:
"1. Task actions are related to each other not only by their objective but also by their occurrence in time. One of the concerns of task analysis is to establish and evaluate the time distribution of actions within and across tasks. Task actions include perceptions, discriminations, decisions, control actions, and communications. Every task involves some combination of these different types of cognitive and physical actions.
2. Each task has a starting point that can be identified as a stimulus or cue for task initiation. A cue is often not a single item of data or information. It may consist of several data points, received closely in time or dispersed over a longer time, which together have significance as a cue that an action is to be taken.
3. Each task has a stopping point that occurs when information or feedback is received that the objective of the task has been accomplished.
4. Task cues and feedback may be provided by instrumentation or direct sensory perception, or they may be generated administratively, say, by a supervisor or co-worker.
5. A task is usually, but not necessarily, defined as a unit of action performed by one individual."
The Task Analysis tool captures much of the information deemed characteristic of tasks. Some of this information is included in the sketched interface while other portions of it are included in the data file produced.
Tasks are of three types: discrete or procedural, continuous or tracking, and branching. Discrete tasks require that a series of actions be executed in response to a stimulus or procedure element. A continuous task extends over a long period of time, often cycling through a series of actions. A branching task is determined by the outcome of a certain action within the task.
The prototyped version of the Task Analysis Tool is most useful for discrete or procedural tasks and continuous tasks. Branching tasks cannot currently be handled, but the addition of multiple path links will support this. The example presented in this paper contains an instance of a branching task. Therefore, a link that currently exists in the example might, in reality, not appear.
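One way the multiple path links mentioned above could be represented is to let each task record carry a list of outgoing links, so that a branching task is simply one with more than one successor. The sketch below is a hypothetical illustration of that idea, not TAT's actual data structure:

```python
# Hypothetical sketch: a task record whose "next_tasks" field holds
# multiple links, which is what supporting branching tasks would require.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    kind: str                       # "discrete", "continuous", or "branching"
    next_tasks: list = field(default_factory=list)  # >1 entry => branching

def is_branching(task: Task) -> bool:
    return len(task.next_tasks) > 1

review = Task("review", "discrete", next_tasks=["approval"])
approval = Task("approval", "branching", next_tasks=["release", "review"])
print(is_branching(review), is_branching(approval))  # False True
```

Under this representation, playing back a sketched interface would follow one chosen link at each branching task rather than a single fixed successor.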
3.3 Status of the Task Analysis Tool
The Task Analysis Tool (TAT) currently exists in a prototyped version. While there are many features that have already been identified for addition into the system, this prototype should illustrate the usefulness of such a tool. Features suggested for inclusion in a developed version are discussed in section V.
TAT is designed to operate as two side-by-side displays. The screens that collect information constitute one set of displays. These are presented beside the interface being sketched. The prototype of TAT was created using Toolbook by Asymetrix (1989) under Windows 3.1.
The figures that are included in this paper were printed using the print facilities of Toolbook. Unfortunately, menu bars do not print out. Therefore these have been separately constructed and included in figures where needed to illustrate the functionality of TAT. The operable version of TAT, therefore, looks slightly different from the version depicted here. In addition, Toolbook does not include the capability to print out dialog boxes. The example indicates which buttons are dialog boxes and describes the choices that are presented. Finally, the sizes of the displays have been adjusted slightly in the printed version in order to accommodate differences between the type size displayed on the screen and the printed type size.
A data file produced from a session with TAT contains information about the task, actions, information displayed, and sequencing. Ideally this data could be examined to help in designing interfaces that will accommodate several viewpoints. Moreover, if several applications are to be used concurrently, examination of the data could be used to suggest commonalities that should be considered in designing a common user interface. The current file produced is a labeled ASCII file. An example of this data file is included in Figures 14 and 20 and discussed more in section IV.
3.4 Description of the Task Analysis Tool
The Task Analysis Tool asks the user to identify the different tasks used to carry out a given process. These tasks are input as menu items in the constructed interface. A display is created for each discrete task. Figure 1 shows the initial display that is used to collect information about the names and numbers of tasks that make up the process. Currently, the prototyped version of TAT allows only eight tasks per process. Figure 2 shows the complete set of forms that would be displayed if the user had indicated that eight tasks constituted this particular process. The user is asked to enter the tasks in some sort of meaningful order - either sequentially or in order of frequency of use - because the items are entered into the menu of the sketched interface in that order.
For each task, a display is presented (see Figure 3) that collects the information to be viewed and the type and format of the information. A task allows for two primary information sources, two secondary information sources, and four status indicators. Each task can have up to six actions that are carried out on the information displayed. These actions are presented as submenu items under the task menu item. The information that the user specifies for the display is roughed out and presented according to the importance (primary, secondary, or status) that the user assigns to the information.
Control flow in the system is represented by users specifying the tasks that precede and succeed the current task. These tasks are presented to the user in a dialog box that is constructed using the task names initially input on the first display in TAT. In the interface being generated, the displays for these tasks will be linked in the corresponding fashion so that the user can later play this back to assess whether the flow accommodates the process correctly.
Figure 4 contains a template on which each interface display is sketched. For each new task in the process, this page is filled in with information collected from the user.
Figures 5, 6, and 7 are examples of help screens that are provided with TAT. The prototyped version contains no error handling capabilities. Although error detection and more help information will be included in the actual implementation of the software, TAT is intended to be used under the guidance of HCI personnel.
Figure 8 shows the screen that TAT displays when the user has finished typing in the information. At this point in time, the user can exit the system or can run the sketched interface to determine its correctness. If the user chooses to exit at this point, the interface will be saved and can be run later. The name of the data file selected by the user will also be displayed on this screen.
Figure 1: Initial Display of TAT
Figure 2: Initial Display of TAT Fully Expanded
Figure 3: Information Collection Display of TAT
Figure 4: Template for Interface Sketch
Figure 5: TAT Help Screen
Figure 6: Another TAT Help Screen
Figure 7: TAT Display for Information Types
Figure 8: TAT End Display
IV. Example of the Use of TAT
4.1 Description of the Example Task of Electronic Review and Approval
The example presented here is an interface sketch for doing electronic review and approval. The following sections discuss the electronic review and approval process, the example, and an informal collection session using an early paper version of TAT.
In order to make changes to processes in the Shuttle Processing Environment at KSC, change requests must be generated and approved by the systems that are affected. Changes are done at various times before and during any given flow and range in size from large volumes of documentation to changes to a single operation or a change in sequence. A revision to a document is generated by selecting the portions of a master document that are to be used in this particular shuttle flow. Revisions include any changes that were generated previously to operations included in this flow. After a revision has been produced, changes that are made are termed deviations. A deviation may be a change in sequencing or a change to an individual step or steps. Deviations may be temporary. That is, the change is made only for this flow. A permanent deviation means that the change should be incorporated into this operation for all succeeding revisions. Currently these changes (revisions and deviations) are generated by engineering personnel and distributed to NASA personnel and other engineering teams for review and approval.
The review and approval process consists of suggesting changes to the text if necessary, making comments as appropriate or approving the change. During this process a reviewer may wish to see the comments or changes that other reviewers have generated. This procedure is an iterative one as comments and changes may need to be incorporated into the change and the change again distributed to the reviewers.
A computer based version could speed up the process. Reviewers could be notified electronically that a change was ready for review. The individual or group who initiated the change would be able to distribute it to the reviewers without having to either mail or hand deliver hard copies to the various individuals. Comment and changes made by the reviewers would be sent back electronically and could be directly incorporated into the change description. Reviewers would be able to quickly view other comments and the status of the change could be tracked electronically.
4.2 Example of the Process used to Sketch an Interface
The TAT example presented here uses the electronic review and approval process. Two interfaces are sketched here, from two different viewpoints: that of an engineer generating the change and that of a NASA reviewer. The software interface generated must support both views. Soliciting information from both viewpoints will yield data on commonalities that exist and the different emphases of the different views.
The first viewpoint presented is that of the reviewer. Figure 9 contains the initial information collection display. The name of the process input by the user is used to name the data file that will contain the information collected. The three tasks in the review and approval process are: select, review, and approval. As these tasks are entered by the user, these names appear in the menu bar of the sketched display. This is illustrated in the menu bar of the template for the interface sketch. In addition, a blank display is created on which the interface for each will be sketched. The viewpoint button is a dialog box which queries the user as to his position. In this case, the choices are: engineer, NASA, quality, NASA Test Director, documentation, and other.
Figure 10 contains a portion of the information collected for the task review. Figure 11 contains the interface generated by TAT that corresponds to the data entered in Figure 10. The first field on the display in Figure 10 collects the names of the various sources of information needed on the display. The format of information button is a dialog button that queries the user as to the way information is presented. The choices currently displayed in this dialog box are graphical, text, labeled data, tabular data, and schematic data. The about info and importance of info buttons are a pair. The importance of info button is a dialog box that asks whether the user considers a particular piece of information to be primary or secondary to the task at hand. A third option is to view the information as status only. TAT contains parameters which limit the number of primary, secondary, and status pieces of information that can be concurrently displayed. The limitations in place now are rather arbitrary; in any given domain, better limits could be selected depending on display size, screen resolution, and frequency and duration of use. The about info button is linked to a display (see Figure 7) that keeps track of the number of different information types presently selected for a given display. When the importance of information choice has been made and the OK button pressed, a labeled box will be drawn on the interface sketch. The size of the box will differ depending on whether the importance is secondary or primary. The box is also labeled as to how the data will be presented. Status-only information appears as a button. If the user wishes to enter more information sources, pressing the more info button clears the text field and focuses the cursor there.
When all the information sources have been entered, the user is prompted to enter actions that will be performed. These actions are entered as sub menu items. In order to see these, the user must highlight the task menu item in the sketched interface. The previous task and the next task buttons are dialog buttons that present the user with the list of tasks he has identified as being in this process. In the case of previous task, the list is augmented with start and in the case of next task the list will also contain end. Selecting a choice from these dialog boxes will result in a previous task and next task button being drawn on the interface sketch and in those buttons being linked to the correct display. This facilitates running the application. By pressing the previous or next task button, the user can simulate stepping through the process.
Pressing the format next display button brings up a blank information collection screen for the next task (in the order originally entered by the user on the first display) and a blank template for the interface sketch. After the information has been filled in for all the tasks in the process, the user can select the done button. Several things will then happen. First, the information collected will be written to the specified data file. Then the user is asked whether he wishes to run the application just created. If he chooses not to, he can always retrieve this later from the "info.tbk" file and execute it.
Figures 12 and 13 are additional displays generated for the tasks of selecting items to review and approving or rejecting the changes. Figure 14 contains a portion of the data file that was generated during this session. It contains information about the tasks, the information sources, the actions, and the sequence of flow.
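Because the data file is a labeled ASCII file of "label: value" lines, it can be processed mechanically. The sketch below groups information sources and actions by task; the labels are taken from the Figure 14 excerpt, while the function itself is a hypothetical illustration rather than part of TAT:

```python
# Hypothetical parser for the labeled ASCII data file produced by TAT.
# Labels match the Figure 14 excerpt; the grouping logic is illustrative.
def parse_tat(lines):
    tasks = {}
    current = None
    for line in lines:
        label, sep, value = line.partition(":")
        if not sep:
            continue  # skip lines without a "label: value" structure
        label, value = label.strip().lower(), value.strip()
        if label == "the current task is":
            current = tasks.setdefault(value, {"info": [], "actions": []})
        elif label == "this information is needed" and current is not None:
            current["info"].append(value)
        elif label.startswith("the following actions") and current is not None:
            current["actions"].append(value)
    return tasks

sample = [
    "the current task is: review",
    "this information is needed: documents",
    "the following actions are performed on this info: redline",
]
print(parse_tat(sample))
# {'review': {'info': ['documents'], 'actions': ['redline']}}
```

A parser of this kind is the natural front end to the analysis programs discussed in section V.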
Figure 15 shows the initial display that was used to collect the tasks from the engineer's viewpoint. Notice that a task labeled create now exists and is used to initiate the change. In addition, a release task exists where notification about an approved change and its incorporation into the master file is accomplished. Figure 16 is the display generated for the task of creating a change.
Figure 9: TAT Display for Example and Blank Template for Interface Sketch
Figure 10: Information Collection Display from TAT for Task "review"
Figure 11: Sketch of Display Generated for "review"
Figure 12: Sketch of Display Generated for "select"
Figure 13: Sketch of Display Generated for "approval"
The process being described is: el-rv-ap
this process is from the viewpoint of: NASA
task1 in this process is: review
Task2 is: approval
Task3 is: select
the current task is: review
this information is needed: documents
the information is to be presented as: text
the importance of this information is primary
the current task is: review
this information is needed: comments
the information is to be presented as: text
the importance of this information is primary
the current task is: review
this information is needed: distribution list
the information is to be presented as: text
the importance of this information is secondary
the current task is: review
this information is needed: list of files
the information is to be presented as: text
the importance of this information is secondary
the current task is: review
this information is needed: status of change
the information is to be presented as: text
the importance of this information is status only
the following actions are performed on this info: redline
the following actions are performed on this info: comment
the following actions are performed on this info: distribute
the following actions are performed on this info: display
the following actions are performed on this info: save
the following actions are performed on this info: compare
the following actions are performed on this info: print
the previous task is: select
the task that follows this one is: approval
the current task is: approval
this information is needed: document
the information is to be presented as: text
the importance of this information is primary
the current task is: approval
this information is needed: comments
the information is to be presented as: text
the importance of this information is primary
the current task is: approval
this information is needed: distribution list
the information is to be presented as: text
the importance of this information is secondary
Figure 14: Portion of Data Generated for Review and Approval Process
Figure 15: Viewpoint 2: Review and Approval from Engineering View
Figure 16: Viewpoint 2: Sketch of Interface for "create"
V. Future Plans for Testing and Using TAT
5.1 Uses for TAT Output
As was previously stated, the TAT output is meant to serve several purposes. First of all, the sketched interface gives a more concrete aspect to the task analysis in a form that is easily understood by the user. Using this sketch, the user should be able to assess the interface for completeness and correctness. The interface could be used in a representative scenario of the process which the user could work through. This sketch should accompany the functional requirements given to the developers to facilitate design of the user interface.
Analysis programs could be written to scan the data files generated. This would be particularly useful in the case where several viewpoints are being examined or where several applications are to be run concurrently. The data files can be examined to see conflicts and commonalities in information sources and presentation methods. In particular, common tasks or similar tasks should possess similar actions. Consistency in interface design has been recognized as beneficial to success of software companies (Tognazzini, 1989). Consistency in presentation and actions can be analyzed using the TAT data files.
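As a sketch of the kind of analysis program suggested here, the comparison below flags tasks that appear in two viewpoints but do not offer the same actions. The data shapes and the function name are hypothetical; only the consistency rule (common tasks should possess similar actions) comes from the text:

```python
# Hypothetical consistency check over two parsed TAT data files,
# each a dict mapping task name -> set of actions: tasks shared by
# two viewpoints should offer similar actions.
def action_mismatches(view_a, view_b):
    issues = {}
    for task in view_a.keys() & view_b.keys():
        diff = view_a[task] ^ view_b[task]   # actions in one view only
        if diff:
            issues[task] = sorted(diff)
    return issues

nasa = {"review": {"redline", "comment", "print"}}
engineer = {"review": {"redline", "comment"}, "create": {"edit"}}
print(action_mismatches(nasa, engineer))  # {'review': ['print']}
```

In the example, the shared review task offers print in one viewpoint but not the other, which is exactly the kind of inconsistency such a scan would surface for the interface designer.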
5.2 Additions to TAT
The prototyped version of TAT is a very rough version. There is much work yet to be done on determining what kinds of information should be collected. Information about feedback desired from a given action seems a likely candidate as does information on the frequency and duration of the task. In order to determine the completeness of information collecting in TAT it will be necessary to try it out in many different domains.
5.3 Functionality Needed
There are many functions that need to be included in a coded version of TAT. The functionality of the current version is limited due to the nature of the prototyping tool used to implement it and the limited time in which TAT was constructed. Functionality that is seen as needed includes:
1. The ability to display labels on status buttons and task link buttons in the interface sketch.
2. The ability to link up tasks in multiple paths.
3. The ability to save and display submenu items in the interface sketch. Currently this information is saved in the data file, but once the interface sketch is closed, the submenu items do not appear in the saved sketch.
4. There should be a way to associate actions with a particular piece of information. This type of knowledge could be useful in deciding whether to break the task into several displays in the final design of the interface.
5. The user should be able to easily change the choices displayed in the dialog boxes on viewpoint and information type. These choices are dependent on the domain in which TAT is being used. In addition, the user should be able to easily change the parameters concerning the number of tasks and information sources.
5.4 An Initial Test of TAT
The information used in generating this example was produced mainly from informal interviews with personnel involved in Shuttle Flow Processing. This was due mainly to the limited time frame for development of the prototype. However, a paper version was used in one instance to obtain information about the review and approval process. Several observations were made during this process. First, a new step in the review process, that of comparing initial changes and comments to the newly distributed change, was identified. Perhaps this step would have eventually been discovered through further interviews but having to simplify one's thoughts about the task and flow seemed to clarify the process.
The ability to distribute the change to a person other than the originator was identified, as was the capability of seeing which jobs were currently being worked when reviewing changes. While TAT does not currently capture all of this information, it is rewarding that using this approach elicited it. This suggests that using TAT along with note taking or audio/video recordings would be a beneficial approach.
5.5 Testing
In order to determine how useful TAT is, it must be used in the development of several prototypes, and these must be compared to prototypes developed without this tool. In addition, it needs to be determined what kinds of analysis should be performed on the data files and what, if any, other information should be collected that will be useful. It is expected that TAT will evolve as it is used in more varied domains. Testing the benefits of using TAT will be a difficult task. In the best scenario, software would be developed both with and without using TAT. Performing these kinds of parallel developmental tests in the real world is difficult, if not impossible. Therefore, the most realistic approach would be to use TAT in as many varied situations as possible and use feedback from the users, developers, and HCI personnel to determine the benefits.
VI. Conclusions
Development of good interfaces in software means the ability to closely map the user's task to interface elements. This depends on producing a good task analysis and upon an iterative design process. Unfortunately, there are obstacles to accomplishing both of these. Producing a good task analysis is especially difficult in cases where the domain is complex and much user training is needed. The person conducting the task analysis is often given information from the user with no way of assessing its completeness or its relative importance. Moreover, translating this information into an initial prototype is difficult, especially in situations where no system is currently in place.
In addition, it is important to be able to communicate the user's expectations of an interface to the developer as early as possible in the design cycle. This helps to shorten the iterative design process and hence reduce effort and cost.
The Task Analysis Tool is a step in the proper direction. Although simplistic in nature, it serves to obtain feedback from the end user at an early point in the design cycle. This feedback can easily be communicated to software designers as a basis for initial prototypes and interface designs. The Task Analysis Tool will be refined further, and its benefits in facilitating interface development will then be assessed.
Identification and Remediation of Self-Admitted Technical Debt in Issue Trackers
Yikun Li, Mohamed Soliman, Paris Avgeriou
Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence
University of Groningen
Groningen, The Netherlands
{yikun.li, m.a.m.soliman, p.avgeriou}@rug.nl
Abstract—Technical debt refers to taking shortcuts to achieve short-term goals, which might negatively influence software maintenance in the long term. There is increasing attention on technical debt that is admitted by developers in source code comments (termed self-admitted technical debt or SATD). But SATD in issue trackers is relatively unexplored. We performed a case study in which we manually examined 500 issues from two open source projects (i.e. Hadoop and Camel), which contained 152 SATD items. We found that: 1) eight types of technical debt are identified in issues, namely architecture, build, code, defect, design, documentation, requirement, and test debt; 2) developers identify technical debt in issues at three different points in time, and only a small part is identified by its creators; 3) the majority of technical debt is paid off, mostly by those who identified or created it; 4) the median and average times to repay technical debt are 25.0 and 872.3 hours respectively.
Index Terms—mining software repositories, self-admitted technical debt, technical debt introduction, technical debt repayment, issue tracking system
I. INTRODUCTION
Technical debt (TD) refers to taking shortcuts, either deliberately or inadvertently, to achieve short-term goals, which might negatively influence the maintenance and evolution of software in the long term [1]. Technical debt can be incurred in activities throughout the whole development life cycle, from requirements, to design, implementation, testing, etc. There have been several approaches supporting the identification of technical debt in almost all of these activities [2]. For example, there are approaches detecting code debt by analyzing source code [2], and test debt by analyzing test reports [2].
A part of technical debt is declared as such by the developers themselves, for example when they state in source code comments that something is not right and should be fixed. This has been termed “Self-Admitted Technical Debt” (SATD) [3]. SATD is complementary to other means of technical debt identification, as it provides information that cannot be uncovered otherwise. For example, deciding to use a sub-optimal library is likely to be captured in a source code comment, but cannot be detected from the source code itself. Maldonado and Shihab [3] detected five types of SATD (i.e. requirement, code, design, defect, and documentation debt) in source code comments.
While current work on SATD has focused on source code comments, there are other potentially rich sources of information containing SATD. In this paper we focus on SATD in issue trackers, as developers often discuss technical debt when working on issues. There has been some research exploring technical debt in issue tracking systems [4], [5], showing the possibility of detecting TD through issue trackers and analyzing the characteristics of technical debt issues, such as opening time and number of watchers. However, SATD in issue tracking systems is still relatively unexplored.
The main goal of this paper is to analyze the types of SATD in issue tracking systems, and to determine how software engineers identify and resolve them. To achieve our goal, we conducted a case study where we performed a qualitative analysis on a sample of 500 issues. Specifically, we identified and analyzed sentences in issues that refer to SATD. Our findings indicate that: 1) eight types of technical debt are found in issues, namely architecture, build, code, defect, design, documentation, requirement, and test debt; 2) there are three distinct cases of identifying technical debt in issue trackers, while only a small part (13.1%) of technical debt is identified by its creators; 3) the majority of technical debt is paid off, mostly by those who identified or created it (47.7% and 44.0% respectively); 4) the median time and average time spent on technical debt repayment are 25.0 and 872.3 hours.
Our findings provide a number of implications to practitioners and researchers, including: 1) using issue trackers as complementary sources to source code comments for debt detection; 2) developing approaches to detect technical debt, depending on the time that the debt is identified; 3) reporting urgent technical debt in issue trackers, rather than in source code comments, for quicker repayment.
The remainder of this paper is organized as follows. In Section II, related work is discussed. Section III presents a typical issue life cycle, accompanied with an example. The case study design is then elaborated in Section IV, while the results are presented and discussed in Section V and Section VI respectively. Finally, threats to validity are evaluated in Section VII and conclusions are drawn in Section VIII.
II. RELATED WORK
In this study, we investigate technical debt in issue trackers, which is a type of SATD. Thus, we organize the related work into two parts: work related to SATD in general and work related to technical debt in issue trackers.
**Self-admitted Technical debt:** Potdar and Shihab [6] studied self-admitted technical debt in source code comments within four open source projects. They found that a range of 2.4% to 31.0% of source files contain SATD and 26.3% to 63.5% of debt is eventually removed. In a follow-up study, Maldonado and Shihab [3] studied five open source projects and discovered the following five types of SATD: design, defect, documentation, requirement, and test debt.
There has also been work related to paying back SATD. Maldonado et al. [7] analyzed five Apache projects to study the removal of SATD. They found that most of SATD is removed by the same person that introduced it, and on median, it takes 18 to 172 days to remove SATD comments. Zampetti et al. [8] also analyzed the removal of SATD in five Java open source projects. The findings showed that 20% to 50% of SATD is removed unintentionally, and 8% of debt removal is recorded in commit messages. Our work differs from the work described above, as we look into SATD within issue trackers, instead of source code comments.
**Technical debt in issue trackers:** To the best of our knowledge, only two studies have focused on the detection and comprehension of technical debt in issue trackers. The first, by Bellomo et al. [4] presents a classification method for technical debt issues. They manually examined 1,264 issues in four issue trackers from two government projects and two open source industry projects. From this set, they classified 109 issues as technical debt issues and derived generic characteristics for these issues. The second study, by Dai and Kruchten [5] analyzed issues from a commercial software issue tracker by reading issue summaries and descriptions. From 8,149 analyzed issues, they classified 331 as TD issues, and categorized them into six types - defect, requirement, design, code, UI, and architecture debt. Subsequently, by using machine learning techniques, they trained a classifier with the analyzed issues to automatically classify TD issues.
Our study also classifies issues into types of technical debt (RQ1). But it differs, as it also focuses on how technical debt items are identified (RQ2), and how technical debt items are repaid by developers (RQ3). Moreover, we analyze issues on the sentence level by reading each sentence in the issue summary, description, and comments. If a sentence or a group of sentences indicates technical debt, we tag it as a technical debt statement. This is different from the aforementioned related studies [4], [5] as they both classified whole issues as technical debt issues or non-technical debt issues. Treating a whole issue as a single type of technical debt may be inaccurate, because software engineers might discuss several types of technical debt in the same issue. For example, in issue HADOOP-6730\(^1\), software engineers discuss both code debt and test debt.
### III. Background - Issue Life Cycle
In general, an issue tracker is a system for issue management. Managed issues are not limited to defects; they also include new features and refactoring tasks. An issue has its own life cycle, from the time it is created until the time it is resolved. The typical steps of this life cycle, and an example of each step, are shown in Table I.
#### TABLE I
<table>
<thead>
<tr>
<th>No.</th>
<th>Step</th>
<th>Description</th>
<th>Example (Hadoop-11074)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Create Issue</td>
<td>Usually, software developers create an issue when they find bugs or have new requirements. They first create the issue, which is assigned a unique issue key, and describe it in detail.</td>
<td>“Now that hadoop-aws has been created, we should actually move the relevant code into that module, similar to what was done with hadoop-openstack, etc.” (unique key is Hadoop-11074)</td>
</tr>
<tr>
<td>2</td>
<td>Discuss and Create Patch</td>
<td>At a later stage, developers start working on it: they comment inside the issue analyzing the problem and sharing their ideas about the solution, and then create a patch to address the issue.</td>
<td>“HADOOP-11074 patch is attached. This patch does the following: Move the s3 and s3native FS connector code from hadoop-common to hadoop-aws...”</td>
</tr>
<tr>
<td>3</td>
<td>Code Review</td>
<td>The proposed patch is reviewed by other developers, and feedback is given. If no problem is found in the patch, they proceed to step No. 6, otherwise to the next step.</td>
<td>“Can you add an @Ignore on the tests which are failing, so that we can have a green upstream build? +1 once that’s addressed.”</td>
</tr>
<tr>
<td>4</td>
<td>Update Patch</td>
<td>According to the code review feedback, developers refine the patch and submit it again for another round of code review.</td>
<td>“HADOOP-11074 patch 2 is attached. Change the original patch to... This should get Jenkins passing.”</td>
</tr>
<tr>
<td>5</td>
<td>Code Review</td>
<td>The code is reviewed once more. If it passes, developers proceed to the next step; otherwise, they go back to step No. 4.</td>
<td>“+1, will commit in an hour or two if there are no more comments.”</td>
</tr>
<tr>
<td>6</td>
<td>Final Code Commit</td>
<td>The approved patch is committed to the repository with the issue key included in the commit message, and then the issue status is changed to Resolved.</td>
<td>“Patch is committed. Commit message: HADOOP-11074. Move s3-related FS connector code to hadoop-aws.”</td>
</tr>
</tbody>
</table>
#### IV. Case Study Design
The goal of this study, formulated according to the Goal-Question-Metric [9] template is to “analyze issues in issue tracking systems for the purpose of characterizing the technical debt within the issues with respect to the types, the introduction, and the repayment of technical debt from the point of view of software developers in the context of open source software”. This goal is refined into three research questions (RQs):
- **(RQ1) What types of technical debt are reported in issues?** Having knowledge of the types of technical debt could help us understand the strengths and limitations of detecting technical debt in issue trackers. For example, we may find that a specific type of technical debt is only detected in issues and not in other sources, or that it is mostly detected in issues. That can help in proposing approaches for detecting technical debt that combine different sources. Although Dai and Kruchten [5] also studied types of debt in issues, they only analyzed the issue summary and description. In contrast, we analyze entire issues (including the comments) at the level of sentences.
---
1\[https://jira.apache.org/jira/browse/HADOOP-6730\]
2\[https://jira.apache.org/jira/browse/HADOOP-11074\]
- **(RQ2) When do software developers identify technical debt in issues?** This RQ aims at understanding the point in time that debt is identified in issue trackers. For example, technical debt can be incurred when working on an issue, or it can exist beforehand and the issue is created to address it. This can help researchers to tune their TD detection approaches depending on when it is identified. For example, if the technical debt is added to a patch and eventually the patch is rejected (not committed), the debt is not added to the system. In this case, an approach may falsely detect this debt item in a code review statement regarding that (rejected) patch.
- **(RQ3) How do software engineers resolve technical debt in issues?** This is further refined into 3 sub-questions:
- **(RQ3.1) How much technical debt is resolved?** Quantifying how much technical debt is paid off helps us understand developers’ attitudes towards technical debt and, of course, the magnitude of the problem. For instance, if most of the debt is discussed and resolved, it would imply that developers are aware of the harmfulness of technical debt and take action to resolve it. It would also imply that technical debt in issues does not pose a critical threat.
- **(RQ3.2) Who resolves technical debt?** Technical debt can be resolved by those who created it, those who discovered it, or by others. This aids in understanding the practices of developers, e.g. if those that incur debt take the responsibility to resolve it. It can also be used to assist with debt repayment; for example if the debt creator did not resolve it, another developer may need more documentation to understand the problem well enough in order to solve it.
- **(RQ3.3) How long does it take to resolve technical debt?** Knowing how long it normally takes to repay technical debt after discovering it is helpful for technical debt management. Technical debt that is long-lived causes extra maintenance effort and should thus be prioritized for remediation.
Fig. 1 shows the approach we follow to answer the research questions. The four individual processes (automated and manual) are explained in the following sub-sections.
A. Data collection
To answer the research questions, we looked into Apache Java projects since they are of high quality and supported by mature communities. To select Apache projects pertinent to our study goal, we set the following criteria:
1) Both the issue tracking project and the source code repository are publicly available and well-maintained.
2) They have at least 1,000,000 source lines of code (SLOC) and 10,000 issues in the issue tracker. This is to ensure sufficient complexity.
3) Source code commits involve their associated issue keys within their comments. This is important to support linking commits (in the source code repository) with issues (in the issue tracker). This is further motivated in Section IV-C.
4) They are commonly used in other SATD studies (e.g. [7]). This allows us to compare the results between our study and other SATD studies.
Based on these criteria, we selected Hadoop\(^3\) and Camel\(^4\). Both projects were studied for SATD [7], were developed in Java, used Git as a source code repository and JIRA\(^5\) as an issue tracker. We analyzed the latest released versions on Jan 16, 2020. Table II shows some details for the two projects. The number of Java files and SLOC are calculated using the LOC tool\(^6\). The number of contributors is obtained from GitHub. We used the JIRA Python package to extract all Hadoop and Camel issues from the online server and stored them in a local database; then we counted the number of issues.
#### TABLE II
<table>
<thead>
<tr>
<th>Project</th>
<th># Java files</th>
<th>SLOC</th>
<th># Contributors</th>
<th># Issues</th>
<th># Filtered issues</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hadoop</td>
<td>10,918</td>
<td>1,700,501</td>
<td>259</td>
<td>16,808</td>
<td>6,685</td>
</tr>
<tr>
<td>Camel</td>
<td>17,585</td>
<td>1,196,790</td>
<td>583</td>
<td>14,411</td>
<td>12,259</td>
</tr>
</tbody>
</table>
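The size criterion from Section IV-A (at least 1,000,000 SLOC and 10,000 issues) can be checked mechanically against the figures in Table II. A minimal sketch, where the dict layout is our own illustration rather than any actual tool output:

```python
# Figures taken from Table II of the paper.
PROJECTS = {
    "Hadoop": {"sloc": 1_700_501, "issues": 16_808},
    "Camel": {"sloc": 1_196_790, "issues": 14_411},
}

def meets_size_criterion(stats, min_sloc=1_000_000, min_issues=10_000):
    """Criterion 2: at least 1,000,000 SLOC and 10,000 tracked issues."""
    return stats["sloc"] >= min_sloc and stats["issues"] >= min_issues

print([name for name, s in PROJECTS.items() if meets_size_criterion(s)])
# ['Hadoop', 'Camel']
```

Both projects clear the threshold, consistent with their selection in the study.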
B. Filtering issues
To ensure that we study issues with a complete life cycle (as shown in Table I), we applied two filtering criteria:
1) **Issue status:** Since we are aiming at studying technical debt items that were resolved, we focus on issues that are done or closed. Thus, we removed all issues with status Open or Pending Closed.
\(^3\)https://hadoop.apache.org
\(^4\)https://camel.apache.org
\(^5\)https://jira.apache.org
\(^6\)https://github.com/cgag/loc
2) **Availability of issue key in commits:** Although some issues have their status set to Resolved and developers commented that the patches were successfully committed to the repositories, we cannot find the related commits in Git. This is mostly because developers did not include the issue key in the corresponding commit messages. We also exclude these issues, since we need the commit information to answer RQ3 on debt repayment.
The final number of issues after filtering is listed in the rightmost column of Table II.
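The two filtering criteria above can be sketched as a simple predicate over issue records. The field names ("key", "status") and the flat commit-log string below are illustrative assumptions, not the actual JIRA schema:

```python
def filter_issues(issues, commit_log):
    """Apply the two filters: completed status and issue key present in commits."""
    kept = []
    for issue in issues:
        if issue["status"] in ("Open", "Pending Closed"):
            continue  # criterion 1: life cycle not complete
        if issue["key"] not in commit_log:
            continue  # criterion 2: no commit message references the issue key
        kept.append(issue)
    return kept

issues = [
    {"key": "HADOOP-11074", "status": "Resolved"},
    {"key": "HADOOP-9999", "status": "Open"},
    {"key": "CAMEL-4543", "status": "Closed"},  # resolved, but never committed
]
commit_log = "HADOOP-11074. Move relevant code into the hadoop-aws module."
print([i["key"] for i in filter_issues(issues, commit_log)])
# ['HADOOP-11074']
```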
C. Linking issues with commits
In order to determine how software engineers actually resolve technical debt (i.e. answering RQ3), we have to capture the code commits associated with an issue. This information is needed to determine the software developers responsible for repaying technical debt (RQ3.2) and the time for this repayment (RQ3.3).
Since in the previous step, we ensured that the commit messages contain the related issue keys, we use those keys to link issues with commits. In practice, we first output the Git commit log, and match the issue key by applying a regular expression to the commit log. Then all matched commits (including commit date, commit message, and commit author) are inserted into the issue holding the issue key ordered by time, and then the issue with commit information is stored in a local database.
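A minimal sketch of this matching step, assuming Apache-style issue keys and a plain-text git log (the key pattern and log lines are invented for illustration):

```python
import re
from collections import defaultdict

KEY_RE = re.compile(r"\b(?:HADOOP|CAMEL)-\d+\b")  # assumed issue-key pattern

def link_commits(git_log):
    """Map each issue key to the commit-log lines that mention it."""
    links = defaultdict(list)
    for line in git_log.splitlines():
        for key in KEY_RE.findall(line):
            links[key].append(line)
    return dict(links)

log = (
    "abc123 HADOOP-11074. Move s3-related connector code to hadoop-aws.\n"
    "def456 CAMEL-2535 Clean up the CxfSoap component.\n"
    "789abc Unrelated maintenance commit."
)
print(sorted(link_commits(log)))
# ['CAMEL-2535', 'HADOOP-11074']
```

The non-capturing group `(?:...)` keeps `findall` returning whole keys rather than just the project prefix.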
D. Issue manual analysis
The filtering step resulted in 18,944 issues that fulfill our criteria (see Section IV-B): 6,685 for Hadoop and 12,259 for Camel. Since manually analyzing issues is extremely time-consuming, we are only able to analyze a subset. From this set, we randomly selected a sample of 500 issues for analysis: 250 issues from each project (i.e. Hadoop and Camel). The size of our sample is in line with similar studies; e.g. Zaman et al. analyzed 400 issues to study performance bugs [10]. To analyze issues for technical debt, we followed the instructions for qualitative analysis proposed by Runeson et al. [11]. We used a professional qualitative content analysis tool (ATLAS.ti)\(^7\) to annotate relevant sentences within the sample issues.
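The per-project sampling step amounts to drawing 250 issues without replacement from each filtered pool. A sketch with synthetic issue keys (the seed and key format are our own choices, not the study's):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
hadoop = [f"HADOOP-{i}" for i in range(1, 6686)]   # 6,685 filtered issues
camel = [f"CAMEL-{i}" for i in range(1, 12260)]    # 12,259 filtered issues

# random.sample draws without replacement, so all keys are distinct.
sample = random.sample(hadoop, 250) + random.sample(camel, 250)
print(len(sample))  # 500
```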
To answer RQ1, we performed a classification using an existing framework from Alves et al. [12]. This framework
---
### Table III
<table>
<thead>
<tr>
<th>Type</th>
<th>Indicator</th>
<th>Reused</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>Architecture debt</td>
<td>Violation of modularity</td>
<td>⬤</td>
<td>Because shortcuts were taken, multiple modules became inter-dependent, while they should be independent.</td>
</tr>
<tr>
<td></td>
<td>Using obsolete technology</td>
<td>○</td>
<td>Architecturally-significant technology has become obsolete.</td>
</tr>
<tr>
<td>Build debt</td>
<td>Under- or over-declared dependencies</td>
<td>⬤</td>
<td>Under-declared dependencies: dependencies in upstream libraries are not declared and rely on dependencies in lower level libraries. Over-declared dependencies: unneeded dependencies are declared.</td>
</tr>
<tr>
<td></td>
<td>Poor deployment practice</td>
<td>○</td>
<td>Deployment quality is low; for example, compile flags or build targets are not well organized.</td>
</tr>
<tr>
<td>Code debt</td>
<td>Complex code</td>
<td>○</td>
<td>Code has accidental complexity and requires extra refactoring action to reduce this complexity.</td>
</tr>
<tr>
<td></td>
<td>Dead code</td>
<td>○</td>
<td>Code is no longer used and needs to be removed.</td>
</tr>
<tr>
<td></td>
<td>Duplicated code</td>
<td>⬤</td>
<td>Code that occurs more than once instead of as a single reusable function.</td>
</tr>
<tr>
<td></td>
<td>Low-quality code</td>
<td>○</td>
<td>Code quality is low, for example because it is unreadable, inconsistent, or violating coding conventions.</td>
</tr>
<tr>
<td></td>
<td>Multi-thread correctness</td>
<td>⬤</td>
<td>Code intended to be thread-safe is not correct and may result in synchronization or efficiency problems.</td>
</tr>
<tr>
<td></td>
<td>Slow algorithm</td>
<td>⬤</td>
<td>A non-optimal algorithm is utilized that runs slowly.</td>
</tr>
<tr>
<td>Defect debt</td>
<td>Uncorrected known defects</td>
<td>⬤</td>
<td>Defects are found by developers but ignored or deferred to be fixed.</td>
</tr>
<tr>
<td>Design debt</td>
<td>Non-optimal decisions</td>
<td>○</td>
<td>Non-optimal design decisions are adopted.</td>
</tr>
<tr>
<td>Documentation debt</td>
<td>Outdated documentation</td>
<td>⬤</td>
<td>A function or class is added, removed, or modified in the system, but the documentation has not been updated to reflect the change.</td>
</tr>
<tr>
<td></td>
<td>Low-quality documentation</td>
<td>○</td>
<td>The documentation has been updated reflecting the changes in the system, but quality of updated documentation is low.</td>
</tr>
<tr>
<td>Requirement debt</td>
<td>Requirements partially implemented</td>
<td>○</td>
<td>Requirements are implemented, but some of them only partially.</td>
</tr>
<tr>
<td></td>
<td>Non-functional requirements not fully satisfied</td>
<td>○</td>
<td>Non-functional requirements (e.g. availability, capacity, concurrency, extensibility), as described by scenarios, are not fully satisfied.</td>
</tr>
<tr>
<td>Test debt</td>
<td>Expensive tests</td>
<td>○</td>
<td>Tests are expensive, slowing down testing activities. Extra refactoring actions are needed to simplify tests.</td>
</tr>
<tr>
<td></td>
<td>Lack of tests</td>
<td>○</td>
<td>A function is added, but no tests are added to cover the new function.</td>
</tr>
<tr>
<td></td>
<td>Low coverage</td>
<td>⬤</td>
<td>Only part of the source code is executed during testing.</td>
</tr>
</tbody>
</table>
---
\(^7\)https://atlasti.com
provides basic types of technical debt, with high-level definitions and a list of indicators per type. Using these types, we annotated sentences within issues, referring to existing debt or resolving debt. We read each sentence in issue summary, description, and comments. If a sentence or a group of sentences indicated a certain type of technical debt, we tagged it with that type and relevant indicators.
The issues were independently annotated by the first and second author. Differences between the two annotators supported refining the types and indicators of technical debt from the original framework of Alves et al. [12]. For example, we added the indicator Requirements Partially Implemented to the requirement debt type. The refined classification framework that resulted from this step is presented in Table III. The Reused column indicates whether an indicator is reused directly from the study of Alves et al. ("⬤" symbol) or was created inductively during the qualitative analysis ("○" symbol). The original framework of Alves et al. can be found in the replication package. The classification resulted in 152 annotated statements with different technical debt types and indicators, which are also available in the replication package.
To mitigate the risk of bias, we evaluated the level of agreement between the classifications of the two authors using Cohen’s kappa coefficient [13]; this is commonly used to measure inter-rater reliability. The calculated level of agreement between the two authors is 0.757 based on a sample consisting of 15% of all technical debt statements, which is considered excellent according to the work of Fleiss et al. [13].
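For reference, Cohen's kappa for two raters over the same items can be computed as below; the labels are illustrative debt types, not the study's actual annotations:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["code", "code", "test", "design", "code", "test"]
b = ["code", "design", "test", "design", "code", "test"]
print(round(cohens_kappa(a, b), 2))  # 0.75
```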
Next, we revisited all identified technical debt to obtain information to answer RQ2 and RQ3. More specifically, for RQ2, we annotated text with information regarding the identification of technical debt items within the issue life cycle. Regarding RQ3, for each technical debt item, we read the related issue comments and the corresponding commit messages (see Section IV-C) to identify information on debt remediation. If there was such information, we noted it down, as well as the person who resolved the item and the time between reporting and resolving it.
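The repayment time per debt item can then be derived from the timestamp of the identifying comment and the timestamp of the resolving commit. A sketch with invented timestamps (the study's reported figures of 25.0 and 872.3 hours come from its 152 real items, not from this toy data):

```python
from datetime import datetime
from statistics import mean, median

def hours_between(identified, resolved, fmt="%Y-%m-%d %H:%M"):
    """Hours elapsed between two timestamps given as strings."""
    delta = datetime.strptime(resolved, fmt) - datetime.strptime(identified, fmt)
    return delta.total_seconds() / 3600

durations = [
    hours_between("2019-03-01 10:00", "2019-03-02 11:00"),  # 25 h
    hours_between("2019-04-10 08:00", "2019-04-10 20:00"),  # 12 h
    hours_between("2019-05-01 00:00", "2019-06-01 00:00"),  # 744 h
]
print(round(median(durations), 1), round(mean(durations), 1))  # 25.0 260.3
```

As in the study, a few long-lived items pull the mean far above the median.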
V. RESULTS
A. (RQ1) What types of technical debt are reported?
We found eight types of technical debt in issues: architecture, build, code, defect, design, documentation, requirement, and test debt. For each type we found one or more indicators. In the following paragraphs, we report on the associated indicators for each type, also providing a quote from actual issues to exemplify each indicator.
**Architecture debt:** problems that are architecturally significant, i.e. they are hard to change. Most of the debt in this type relates to the indicator **Violation of Modularity**.
“It would be good if these were moved into their own module...” - [Camel-4543]
Some architecture debt is caused by **Using Obsolete Technology**.
“The camel-atom component is using an ancient incubator version of abdera which will make it hard to work with camel-cxf.” - [Camel-4132]
**Build debt:** issues that make building (i.e. source code compilation to artifacts) harder or more time-consuming. Most of the identified build debt is caused by **Over- or Under-Declared Dependencies**.
“Avoid the redundant direct dependency on log4j by the components.” - [Camel-4331]
**Code debt:** issues in source code, which negatively influence the maintenance of software. Most of the code debt is caused by **Low-Quality Code**.
“This will lead to very unmaintainable code. We absolutely do not want to have nested retries for different contexts.” - [Hadoop-3198]
A few code debt items result from **Slow Algorithm**.
“#query() does O(N) calls LinkedList#get() in a loop, rather than using an iterator. This makes query O(N^2), rather than O(N).” - [Hadoop-8866]
**Multi-Thread Correctness** is another factor causing code debt.
“EnsureInitialized() forced many frequently called methods to unconditionally acquire the class lock.” - [Hadoop-9748]
The rest of the code debt is caused by **Dead Code**, **Duplicated Code**, and **Complex Code**.
“As we don’t use the CxfSoap component any more, it’s time to clean it up.” - [Camel-2535]
“I am concerned about the code duplication this brings.” - [Hadoop-6381]
“...can be simplified to the following so there aren’t so many return statements to track.” - [Hadoop-10169]
**Defect debt:** known defects that are deferred to be fixed. All defect debt items are caused by **Uncorrected Known Defects**.
“This works in 2.12.x onwards. Hunting this down on 2.11.x is low priority. End users is encourage to upgrade if they really need this.” - [Camel-6735]
**Design debt:** shortcuts or non-optimal decisions taken in detailed design. All design debt results from **Non-Optimal Decisions**.
"Instead of passing a long[] you should pass a struct that implements Writable." - [Hadoop-481]
"Extending the Trash API might be ok in the short term but does not sound too appealing from a long-term perspective." - [Hadoop-2815]
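The slow-algorithm debt quoted earlier from Hadoop-8866 (positional `get()` on a linked list inside a loop) is easy to reproduce. The following Python sketch uses a minimal illustrative linked list, not Hadoop's actual code, to contrast the O(N²) anti-pattern with the iterator-based fix the issue suggests:

```python
class LinkedList:
    """Minimal singly linked list with an O(index) positional get()."""

    def __init__(self, items):
        self.head = None
        for item in reversed(items):
            self.head = (item, self.head)   # node = (value, next)

    def get(self, index):
        # Walks from the head on every call: O(index) per lookup.
        node = self.head
        for _ in range(index):
            node = node[1]
        return node[0]

    def __iter__(self):
        # A single traversal is O(N) total.
        node = self.head
        while node is not None:
            yield node[0]
            node = node[1]

    def __len__(self):
        return sum(1 for _ in self)


def query_quadratic(lst):
    # Anti-pattern from the issue: get(i) in a loop -> O(N^2) overall.
    return [lst.get(i) for i in range(len(lst))]


def query_linear(lst):
    # Fix suggested in the issue: one iterator pass -> O(N).
    return list(lst)
```

Both functions return the same result, but the quadratic version traverses roughly N²/2 links for a list of length N, which is exactly the debt the reviewer flagged.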
**Documentation debt:** when the software is modified, the documentation is not updated to reflect the changes or the quality of updated documentation is low. Most of this type of debt is caused by **Outdated Documentation**.
"The maven reports is just getting to old and intermixed with 1.x and trunk releases." - [Camel-1846]
The second indicator is **Low-Quality Documentation**.
"I agree to improve documentation to make it clear that..." - [Hadoop-12672]
**Requirement debt:** when the requirements specification is not in line with the actual implementation. Some requirement debt is caused by **Requirements Partially Implemented**.
"The only feature which we don’t support is correlated message groups. That requires a bit more work and also may complicated..." - [Camel-1669]
Another common cause concerns **Non-Functional Requirements Not Being Fully Satisfied**. In the example below, concurrency is not fully satisfied.
"Definition requires the implementations for its interfaces should be thread-safe. HarFsInputStream doesn’t implement these interfaces with thread-safe, this JIRA is to fix this." - [Camel-5587]
**Test debt:** shortcuts or non-optimal decisions taken in testing that negatively affect maintainability. Most test debt is caused by **Lack of Tests**.
"There are no XQuery specific tests." - [Camel-201]
The other major cause of test debt is **Low Coverage**.
"Some of the test code doesn’t check for correct error codes to correspond with the wrapped exception type." - [Hadoop-11103]
Finally, some test debt results from **Expensive Tests**.
"I see recent hadoop-hdfs test runs have been taking 2.5 hours. This one (new patch) was 45 minutes." - [Hadoop-11670]
Table IV presents an overview of technical debt types and indicators in the examined issues. We observe that code, documentation, and test debt are the three most common types (with 38.8%, 21.7%, and 18.4% respectively). Furthermore, the three most common indicators are **Low-Quality Code**, **Lack of Tests**, and **Outdated Documentation**.
Finally, since we annotated technical debt at the sentence level (instead of the issue level), an issue may contain more than one type of technical debt. Table V presents how many issues contain zero, one, or more types of technical debt. As we can see, 24 out of the 117 issues (20%) that contain technical debt contain more than one type. This validates our choice to analyze issues at the level of sentences; had we performed the analysis at the level of issues, we would have missed the additional technical debt types per issue.
Eight types of technical debt are found in issue trackers: architecture, build, code, defect, design, documentation, requirement, and test debt. The three most common types are code, documentation, and test debt (i.e. 38.8%, 21.7%, and 18.4%). About one fifth of the issues that contain technical debt contain more than one type.
B. (RQ2) When do software engineers identify technical debt?
We observed three distinct cases of technical debt being identified in issue trackers:
1) **Identifying technical debt before creating an issue (i.e. debt is the reason for creating the issue):** When developers spot an existing technical debt item in the system, they report it in an issue tracker to be resolved. For instance, a developer found low-quality code, which complicates debugging; thus, he/she created a new issue:
"If the user doesn’t setup the right camel context for the context component. The exception we got is misleading, we need to throw more meaningful exception for it." - [Camel-5714]
2) **Identifying technical debt during code review:** As explained in Section III, software engineers perform code reviews by creating and reviewing code patches in issue trackers. When a code reviewer identifies a technical debt item in a code patch, he/she discusses it with other developers to determine whether the identified technical debt should be resolved or committed to the system. For example, during a code review, a developer found that a shortcut was taken and commented on the patch: “The patch looks good to me... It would be better if we can add an upper limit for the size of the GSet.” - [Hadoop-9763]
3) **Identifying technical debt after a patch is committed:** Technical debt can exist in a patch but go undetected through the code review; after the patch is committed, a developer may notice the debt in the commit and report it. For instance, after a command patch was submitted, a developer commented: “We need to update the documentation with the new command.” - [Camel-8101]
TABLE VI
<table>
<thead>
<tr>
<th rowspan="2">Project</th>
<th rowspan="2"># Identified</th>
<th colspan="2">Case 1</th>
<th colspan="2">Case 2</th>
<th colspan="2">Case 3</th>
</tr>
<tr>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hadoop</td>
<td>101</td>
<td>41</td>
<td>40.6</td>
<td>57</td>
<td>56.4</td>
<td>3</td>
<td>3.0</td>
</tr>
<tr>
<td>Camel</td>
<td>51</td>
<td>27</td>
<td>52.9</td>
<td>13</td>
<td>25.5</td>
<td>11</td>
<td>21.6</td>
</tr>
<tr>
<td>Total</td>
<td>152</td>
<td>68</td>
<td>44.7</td>
<td>70</td>
<td>46.1</td>
<td>14</td>
<td>9.2</td>
</tr>
</tbody>
</table>
To gain a better understanding of how technical debt is identified, Table VI presents the count of technical debt items for each of the three aforementioned cases. Clearly, the first and second cases account for the majority of the identified debt (44.7% and 46.1% respectively). Compared with Camel, Hadoop has 30.9% more debt identified in the second case and 18.6% less in the third case. This means that more technical debt is identified during code review (on patches) than after the patch is committed to the system. For the technical debt identified in the first and second cases, Table VII presents an overview of who reported it. We find that on average most of the debt is reported by developers other than its creator (i.e. 86.9%), and only a small part is self-reported (i.e. reported by those who created it). Camel has a higher percentage of self-reported debt than Hadoop, but the vast majority of its debt is still reported by others (i.e. 70.8% versus 29.2%). This may mean that most developers create technical debt unintentionally.
C. (RQ3) How do software engineers resolve technical debt?
1) (RQ3.1) How much technical debt is paid off?
Table VIII presents the amounts and percentages of technical debt items that are identified and resolved. We can see that most of the identified technical debt is actually resolved in both Hadoop and Camel (i.e. 71.3% and 72.5%, respectively). This indicates that, when technical debt is reported in issue trackers, it will likely be resolved. In other words, most software developers are conscious of the importance of paying off technical debt items.
<table>
<thead>
<tr>
<th>Project</th>
<th># Identified</th>
<th># Repaid</th>
<th>% Repaid</th>
<th>% Remaining</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hadoop</td>
<td>101</td>
<td>72</td>
<td>71.3</td>
<td>28.7</td>
</tr>
<tr>
<td>Camel</td>
<td>51</td>
<td>37</td>
<td>72.5</td>
<td>27.5</td>
</tr>
<tr>
<td>Total</td>
<td>152</td>
<td>109</td>
<td>71.7</td>
<td>28.3</td>
</tr>
</tbody>
</table>
2) (RQ3.2) Who repays technical debt? As shown in Table IX, we distinguish between developers who create technical debt, those who identify it, and other developers who participate in resolving it. We can see that most of the technical debt is repaid by those who identified it (47.7%) or created it (44.0%), while only 8.3% of the debt is resolved by other developers. This shows that developers take responsibility for paying off most of the technical debt they identified or created themselves.
TABLE IX
<table>
<thead>
<tr>
<th rowspan="2">Project</th>
<th rowspan="2"># Repaid</th>
<th colspan="3">Repaid by (#)</th>
</tr>
<tr>
<th>Creators</th>
<th>Identifiers</th>
<th>Others</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hadoop</td>
<td>72</td>
<td>36</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Camel</td>
<td>37</td>
<td>12</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Total</td>
<td>109</td>
<td>48</td>
<td>52</td>
<td>9</td>
</tr>
</tbody>
</table>
There are three cases of identifying technical debt in issue trackers: discovering existing debt and creating an issue for it, identifying debt in a patch during code review, or after the patch is committed in the system. Most of the technical debt is identified in the first and second cases. A small part of the debt is reported by its creators, while most is reported by other developers.
3) (RQ3.3) How long does it take to fix technical debt?
Fig. 2 shows the mean times, the median times, and the distributions of technical debt repayment times for the two projects. Visual inspection shows that the time spent to fix technical debt varies considerably in both Hadoop and Camel. We also observe that, after the technical debt is reported (point zero on the y axis), most fixes happen quickly compared to the average: 67.0% of the debt is repaid within the first 100 hours.
Furthermore, we compare the time spent on resolving technical debt by different developers (Creators, Identifiers, and Others, as discussed in Section V-C2). More specifically, we compare the repayment time distributions between pairs of developer groups (e.g. between creators and identifiers), using the Mann-Whitney test [14] to determine statistical significance and Cliff’s delta [15] to determine the effect size of the differences. The results are shown in Table X. There are notable differences between Hadoop and Camel. In Hadoop, the repayment times of identifiers and others are longer than that of creators with statistical significance (p-values of 0.031 and 0.028 respectively). Moreover, the time difference between identifiers and others is at the margin of statistical significance (p-value of 0.080). Based on the effect size, the difference between creators and identifiers is small, while the difference between identifiers and others is large. Thus, technical debt in Hadoop is paid back quickest by creators, followed with a small margin by identifiers, and with a large margin by others. In Camel, the situation is different, as none of the time differences is statistically significant. We only observe that the repayment time of others is much longer (on average) than that of creators and identifiers.
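Cliff's delta, used above as the effect-size measure, can be computed directly from the two repayment-time samples. A minimal pure-Python sketch follows; the magnitude thresholds are the commonly used ones (Romano et al.), not values taken from this paper:

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all pairs; lies in [-1, 1]."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))


def magnitude(d):
    """Common interpretation thresholds for |delta|."""
    d = abs(d)
    if d < 0.147:
        return "negligible"
    if d < 0.33:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"
```

Identical samples give a delta of 0, while completely disjoint samples give ±1; the significance test itself (Mann-Whitney) is typically taken from a statistics library such as `scipy.stats.mannwhitneyu`.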
Most of the identified technical debt in issue trackers is resolved (on average 71.7%). Debt identifiers and creators pay off most of the technical debt (47.7% and 44.0% respectively). The median time and average time to repay technical debt are 25.0 and 872.3 hours. In Hadoop, technical debt creators resolve it quicker than those who identify it or others.
VI. DISCUSSION
Various types of technical debt are detected in issues, and they are complementary to those detected in source code comments. Comparing the types of technical debt we identified in issues (RQ1) against the types detected in source code comments by Potdar and Shihab [3], we find requirement, defect, design, test, and documentation debt appearing in both. However, although documentation and test debt are among the three most common types in issues, they are the two least common types in source code comments. Meanwhile, design debt is the most common type in source code comments, but it is rather uncommon in issues. Finally, code, build, and architecture debt are only detected within issues. In other words, the types of technical debt detected through issue trackers and through source code comments overlap, but each source also reveals distinct types and has its own strengths and weaknesses. We therefore argue that the two sources are complementary in detecting different types of technical debt, and researchers should take both into account to increase the completeness and accuracy of their detection tools.
Approaches are required to identify technical debt in all three cases (existing debt, debt found during code review, and debt found after a patch is committed). Researchers should customize identification approaches according to the characteristics of each case (see the results of RQ2). For example, the approach proposed by Dai and Kruchten [5] can work for the first case but not for the other two. Furthermore, the findings show that only 13.1% of technical debt is reported by those who created it. We suggest that researchers look into this phenomenon and interview practitioners to find out why they usually do not report their own debt. We also advise practitioners who deliberately incur technical debt to report it in the issue tracker; this would accelerate the repayment of these debt items, even if the repayment is performed by other developers.
TABLE X
<table>
<thead>
<tr>
<th>Comparison</th>
<th>Project</th>
<th colspan="2">Average time spent on debt repayment (h)</th>
<th>p-value</th>
<th>Cliff’s delta</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">Creators vs. Identifiers</td>
<td>Hadoop</td>
<td>128.0</td>
<td>1510.8</td>
<td>0.031</td>
<td></td>
</tr>
<tr>
<td>Camel</td>
<td>174.5</td>
<td>142.3</td>
<td>0.935</td>
<td></td>
</tr>
<tr>
<td rowspan="2">Creators vs. Others</td>
<td>Hadoop</td>
<td>128.0</td>
<td>5730.3</td>
<td>0.028</td>
<td></td>
</tr>
<tr>
<td>Camel</td>
<td>174.5</td>
<td>3104.3</td>
<td>0.851</td>
<td></td>
</tr>
<tr>
<td rowspan="2">Identifiers vs. Others</td>
<td>Hadoop</td>
<td>1510.8</td>
<td>5730.3</td>
<td>0.080</td>
<td></td>
</tr>
<tr>
<td>Camel</td>
<td>142.3</td>
<td>3104.3</td>
<td>0.463</td>
<td></td>
</tr>
</tbody>
</table>
Technical debt admitted in issues is resolved faster than in source code comments. Considering the results obtained from RQ3 in comparison with the study of Maldonado et al. [7], we find that most of the technical debt is repaid or removed (71.7% for debt within issues and 76.7% for debt in code comments). Furthermore, a great percentage of technical debt is repaid or removed by debt creators (44.0% for issues and 54.4% for source code comments). This indicates that developers consistently take care of SATD in both issues and source code comments, and debt creators often take the responsibility to resolve it.
Moreover, in Hadoop, it is noteworthy that debt creators repay technical debt the fastest, followed by identifiers, and other developers. This is consistent with the intuition that certain people are better able to resolve TD depending on their familiarity with the problem at hand; creators being the most familiar, followed by debt identifiers, and others. Therefore, we suggest that, in order to pay off TD faster, the repayment task should be assigned to debt creators. In addition, comparing the TD repayment between issues and source code comments [7], we observe that debt within issues is resolved much quicker than in comments (i.e. for Hadoop, median of 2.0 days versus 159.0 days; for Camel, 0.9 days versus 18.2 days). Therefore, we suggest that developers report TD that needs to be resolved immediately in issue trackers instead of commenting it in the source code.
VII. THREATS TO VALIDITY
Threats to Construct Validity concern the correctness of operational measures for the studied subjects. Since only a small subset of issues in issue trackers contain technical debt statements [4], the sample (500 analyzed issues) may not represent the population (issues containing technical debt in general). To minimize this threat, the analyzed sample was obtained randomly from all collected issues.
Threats to Reliability concern potential bias from the researchers in data collection or data analysis. Since issues are written in natural language, they were identified and categorized manually. To counter the threat of researcher bias in the manual analysis, the first and second authors annotated the issue sample independently, and then discussed any differences to reach consensus on the classification. Additionally, the level of agreement (Cohen’s kappa) was 0.757, which indicates high inter-rater agreement. Finally, as mentioned above, all data are publicly available in the replication package.
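The inter-rater agreement reported above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. For two annotators assigning categorical labels to the same items, it can be sketched as:

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labelled identically.
    observed = sum(1 for a, b in zip(labels_a, labels_b) if a == b) / n
    # Expected agreement if both raters labelled independently
    # at their own marginal rates.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    expected = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Perfect agreement yields 1.0 and chance-level agreement yields 0.0; values above roughly 0.6 are conventionally read as substantial agreement.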
Threats to External Validity concern the generalization of findings. In this study, we analyzed issues from two large open source projects, which both use JIRA as the issue tracker. Thus, our findings may be generalized to other open source Java projects of similar size and complexity that use JIRA; we cannot claim any further generalization.
VIII. CONCLUSION
In this paper, we explored SATD in issue trackers. We found eight types of technical debt: architecture, build, code, defect, design, documentation, requirement, and test debt. Code, documentation, and test debt are the three most common types found in issue trackers. Furthermore, there are three cases of identifying technical debt in issue trackers: discovering existing debt and creating an issue for it, identifying debt in a patch during code review, and identifying debt after the patch is committed to the system. Most of the technical debt is identified in the first and second cases. Only 13.1% of technical debt is reported by debt creators. Regarding repayment, we found that on average 71.7% of identified debt is repaid, and most of it is paid off by debt identifiers and creators (47.7% and 44.0% respectively). The median and average times of debt repayment are 25.0 and 872.3 hours respectively. Our results show that in Hadoop the repayment time of creators is statistically significantly shorter than that of identifiers and others, whereas in Camel we did not observe statistically significant differences between the groups.
REFERENCES
Troxy: Transparent Access to Byzantine Fault-Tolerant Systems
Bijun Li¹, Nico Weichbrodt¹, Johannes Behl¹, Pierre-Louis Aublin², Tobias Distler³, and Rüdiger Kapitza¹
¹TU Braunschweig ²Imperial College London ³Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
Abstract—Various protocols and architectures have been proposed to make Byzantine fault tolerance (BFT) increasingly practical. However, the deployment of such systems requires dedicated client-side functionality. This is necessary as clients have to connect to multiple replicas and perform majority voting over the received replies to outvote faulty responses. Deploying custom client-side code is cumbersome, and often not an option, especially in open heterogeneous systems and for well-established protocols (e.g., HTTP and IMAP) where diverse client-side implementations co-exist.
We propose Troxy, a system which relocates the BFT-specific client-side functionality to the server side, thereby making BFT transparent to legacy clients. To achieve this, Troxy relies on a trusted subsystem built upon hardware protection enabled by Intel SGX. Additionally, Troxy reduces the replication cost of BFT for read-heavy workloads by offering an actively maintained cache that supports trustworthy read operations while preserving the consistency guarantees offered by the underlying BFT protocol. A prototype of Troxy has been built and evaluated, and results indicate that using Troxy (1) leads to at most 43% performance loss with small ordered messages in a local network environment, while (2) improves throughput by 130% with read-heavy workloads in a simulated wide-area network.
I. INTRODUCTION
If high availability and resilience to arbitrary faults for networked services is required, Byzantine fault-tolerant state machine replication offers a solution. While initially Byzantine fault tolerance (BFT) was considered impractical, the seminal work of Castro and Liskov [1] enabled a stream of research that improved the performance, lowered the complexity, and reduced the resource usage of BFT [2], [3], [4], [5], [6]. Today, BFT can be considered as ready for custom deployments and, for example, is currently evaluated in the scope of permissioned blockchain infrastructures [7]. However, when it comes to user-facing offerings in open and heterogeneous environments – such as the Internet – BFT faces a major, so far largely overlooked hurdle: the client side. Here, standardized protocols such as HTTP and IMAP are dominant and users typically utilize diverse implementations. Thus, offering for example a BFT-enabled web server is infeasible as Byzantine fault tolerance is based on the assumption that a client contacts multiple replicas and performs a majority voting over the received replies to prevent the processing of faulty replies. Of course, by means of extending the HTTP protocol and adding custom software to browsers [8] one could consider the use of BFT, but this would address only one of many standardized protocols. Instead of this more or less unrealistic endeavor, we propose to deploy BFT in a client-transparent fashion for different kinds of protocols.
In this paper we present Troxy, a system which achieves client-transparent BFT by relocating traditional client-side BFT functionality such as connection handling, request distribution, and majority voting to the server side, co-located to the replicas. This is enabled by relying on a trusted subsystem that can only fail by crashing and implements basic message handling, majority voting, and transport encryption: the Troxy. At implementation level, Troxy utilizes trusted execution support as offered by Intel’s Software Guard Extensions (SGX) [9], [10].
At its core, SGX provides a set of new instructions that allows user-level code to allocate private and secure regions of memory called enclaves. By executing the application’s code within enclaves, SGX provides CPU-enhanced application security and protects the enclaves from being manipulated by malicious privileged code or even hardware attacks such as memory probes. Hence, the functionality of Troxy is guaranteed to be trustworthy even in the presence of Byzantine faults in the surrounding replicas.
Based on trusted execution, Troxy offers a trusted proxy to clients that can be accessed via the original legacy protocol. Once a Troxy instance receives a client request, it forwards the request to the BFT framework, which in turn orders the request, executes it, and forwards the computed replies to the requesting Troxy. As soon as the responsible Troxy instance has received enough replies, it performs a voting over the replies and returns the correct result to the client.
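The voting that a Troxy instance performs over replica replies can be sketched as follows. This is a simplified model rather than the Troxy implementation: it assumes at most f faulty replicas and releases a result to the client once f + 1 matching replies have arrived, the standard BFT voting threshold; all names are illustrative.

```python
from collections import Counter


class ReplyVoter:
    """Collects replica replies for one request and votes once f+1 agree."""

    def __init__(self, f):
        self.f = f            # maximum number of faulty replicas tolerated
        self.replies = {}     # replica id -> reply payload (one vote each)

    def add_reply(self, replica_id, payload):
        """Record one reply; return the voted result, or None if undecided."""
        self.replies[replica_id] = payload
        result, votes = Counter(self.replies.values()).most_common(1)[0]
        if votes >= self.f + 1:
            return result     # f+1 matching replies: at least one is correct
        return None
```

With f = 1, two matching replies suffice to release the result even if a third, faulty replica sends a different payload.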
As a malicious replica may intercept the communication of its Troxy, we ensure that the replica cannot alter messages without being detected: Communication between clients and Troxy instances is protected via secure, encrypted connections, which are the norm for more and more Internet-based services [11]. In addition, messages exchanged between Troxies and replicas are authenticated using common message certificates, as they are prevalent for BFT. Although immune to arbitrary or malicious behaviors, it is still possible that a Troxy instance crashes or is disconnected from its clients, and as a consequence becomes unavailable. This case is equivalent to a failing service replica in commodity infrastructures and can be handled by DNS round-robin or load-balancing appliances that enable a fail-over to another Troxy instance.
Acknowledgments: The authors thank the anonymous reviewers for their valuable feedback. This research was supported by the German Research Council (DFG) under grants no. KA 3171/1-2 and DI 2097/1-2 (“REFIT”). Funding was received from the EU’s Horizon 2020 research and innovation programme under grant agreements 645011 (SERECA) and 690111 (SecureCloud).
Troxy is specifically designed for user-facing Internet-based services and therefore offers tailored support for read-heavy workloads and distant clients. In particular, this is achieved by enabling caching for Troxy instances. To not reduce the consistency guarantees of state machine replication, Troxy ensures linearizability [12] by offering a managed cache. In the context of ordering write requests, a quorum of Troxy caches is consulted and the affected data is invalidated. This way, cached read requests can be directly answered by consulting a quorum of $f + 1$ Troxy instances. Otherwise, requests are ordered via the regular BFT protocol.
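The cache fast path described above can be modeled in the same style: a read is answered from the caches only if f + 1 of the consulted Troxy caches agree on a value, and otherwise falls back to regular BFT ordering. This is a hedged sketch under the paper's stated quorum size, with invented names, not Troxy's actual API:

```python
from collections import Counter


def handle_read(key, cache_replies, f, order_request):
    """Fast-path read: serve from caches iff f+1 of them agree on the value.

    cache_replies:  list of values (or None for a miss) from consulted caches.
    order_request:  fallback that runs the request through BFT ordering.
    """
    hits = [v for v in cache_replies if v is not None]
    if hits:
        value, votes = Counter(hits).most_common(1)[0]
        if votes >= f + 1:
            return value              # f+1 matching cached copies suffice
    return order_request(key)         # miss or disagreement: order normally
```

Because writes invalidate the affected entries in a quorum of caches before completing, any value confirmed by f + 1 caches is up to date, which is how the sketch preserves linearizability on the fast path.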
We implemented Troxy on top of Hybster [13], a hybrid BFT system that already features a trusted subsystem to reduce the number of replicas to $2f + 1$. However, Troxy builds an independent extension that can be applied to other hybrid systems featuring a trusted subsystem [14], [15] as well as traditional BFT agreement protocols.
This paper makes the following contributions:
- It introduces the concept of making BFT systems transparent to clients by utilizing trusted hardware to implement a substitute of the client-side BFT library on the server side (Section II).
- It presents Troxy, which uses Intel SGX to provide transparent access to a BFT system while ensuring its integrity and security (Section III).
- It introduces a fast managed cache for read-heavy workloads that transparently switches to traditional request ordering in case of write contention (Section IV).
- It implements a prototype of Troxy that is fully transparent to clients, secure, and provides the read-cache optimization without sacrificing linearizability (Section V).
In addition, Section VI presents detailed evaluation results for Troxy gained from experiments with both microbenchmarks as well as a web server. Finally, Section VII summarizes related work and Section VIII concludes the paper.
II. BACKGROUND AND PROBLEM STATEMENT
In this section, we provide background on how the roles of clients differ between non-fault-tolerant and crash-tolerant systems on the one side, which currently constitute the vast majority of systems used in production, and BFT systems on the other side, which in recent years have been widely studied and are now ready to be applied in practice. Based on this comparison, we then discuss the implications of moving from existing system architectures to BFT replication from a client-implementation perspective, thereby explaining the inherent difficulties that so far prevented the migration to BFT for many real-world use-case scenarios. Finally, we outline our approach to address these problems with Troxy by introducing a trusted proxy component at the server side that allows legacy client implementations to remain unchanged.
A. Clients in Different System Architectures
The means necessary for a client to access a network-based service in general depend on how the service is implemented at the server side. As illustrated in Figure 1a, in the most simple case of a non-fault-tolerant system containing only a single server, a client first queries a location service (e.g., DNS) to obtain the server’s address and then directly establishes a connection to the server. Using this connection, both sides then subsequently interact with each other based on a specific protocol, for example, HTTP for a web service. Many services rely on secure channels to protect the client-server communication. These channels, such as TLS, handle authentication and encryption/decryption of the exchanged data.
In systems where the server side is replicated to provide resilience against crashes, each client usually also only maintains a connection to a single server at a time (see Figure 1b). To prevent bottlenecks, such systems typically ensure that client connections are distributed across the different servers available. One way to achieve this in a transparent manner for the client is, for example, to introduce a load balancer [16], possibly integrated with the location service. Such a mechanism also ensures that in the event of a replica crash the affected clients are automatically reassigned to other replicas once they try to reconnect to the service.
In contrast to clients in unreplicated or crash-tolerant systems, clients in BFT systems not only need to implement the service’s protocol but also require a voting component for safely accessing the server side [1], [2], [3], [4], [6], [13]. This is due to the fact that a BFT client cannot trust a single replica, because the replica might be faulty and therefore possibly ignores requests or provides erroneous replies. To address this issue, as illustrated in Figure 1c, BFT clients do not only contact a single replica but instead establish connections to all replicas in the system. As a consequence, they are able to verify the correctness of a result by comparing the replies of different replicas. This means that, although the specific communication patterns of clients and replicas vary between BFT systems, in general a BFT client requires knowledge about the identity of replicas in order to be able to distinguish their replies. Usually, such information is provided to the client at configuration time. Many BFT systems exploit this knowledge to establish a dedicated shared secret between each client and each replica, which is then used to authenticate the exchanged messages and therefore, amongst other things, allows a client to verify that a received reply indeed originates from the presumed replica.
B. Problem Statement
Most of the systems and services in production today are either unable to tolerate faults or are only resilient against crashes, resulting in outages or unwanted behavior in situations where Byzantine faults actually occur [17], [18], [19]. One reason for this, despite the recent advances in BFT research, is the fact that there is a plethora of legacy client implementations for which migrating to BFT would produce significant costs. On the one hand, this includes the efforts required for modifying existing client libraries in order to allow them to tolerate Byzantine faults; even worse, in many cases the necessary changes are not limited to the client itself because, as discussed in Section II-A, BFT clients have to be aware of both the identity as well as the number of replicas, and providing this information to clients is usually not straightforward if the overall system has not been designed to publicly reveal such knowledge. On the other hand, migrating to BFT also comes with an increased network and processor usage at runtime due to the client’s need to receive, authenticate, and compare multiple replies for each operation. Such an increase in resource usage poses a problem especially for clients with low-bandwidth connections or limited processing power, for example, clients running on mobile devices. To summarize, making existing client implementations ready for BFT not only leads to costs for the actual migration but also results in runtime overhead, which explains why this step has so far not been taken for many real-world systems.
C. The Troxy Approach
To circumvent the previously described problems associated with adding and operating BFT mechanisms at the client side, our approach is to introduce a trusted proxy, or Troxy for short, into the system that acts as a representative of the client at the server side and allows legacy client implementations to benefit from Byzantine fault tolerance without requiring modifications. Furthermore, due to the fact that the Troxy is transparent to the client and handles all BFT-related tasks such as reply authentication and voting, this solution does not incur additional network or processor usage at the client.
As shown in Figure 2, the unmodified client in a Troxy-backed system only establishes a connection to a single Troxy instance, which then handles the communication with the replicas in the system for all of its clients. If at one point a Troxy instance fails, the affected clients reestablish their connections to the service as they would do in a traditional system, for example using a location service (see Section II-A), thereby switching to different Troxies. In contrast to all other replica components, which are untrusted and may fail in arbitrary ways, Troxies are trusted and assumed to only fail by crashing. To justify this trust, we run each Troxy inside the trusted subsystem that is provided by modern processors based on technologies such as Intel SGX [9], which guarantees the integrity of the executed program code. In addition, to protect the communication of a client with the service, a Troxy supports the establishment of secure channels using TLS.
In summary, by offering clients transparent access to BFT systems, our approach greatly facilitates the migration of existing services to Byzantine fault tolerance, because legacy client implementations can be reused without modifications or additional resource overhead. At the same time Troxy requires only moderate integration effort into the underlying BFT system at specific extension points.
III. Troxy System Design
In this section, we present details on the design of a Troxy-backed BFT system in general and on the trusted proxy in particular. For clarity, we postpone the discussion of the fast-read optimization to Section IV.
A. Overview
Figure 3 shows an overview of the different components of a Troxy and illustrates how they conceptually interact with each other and with other system components outside the Troxy. When a client issues a request to the service through a secure channel, the Troxy first decrypts the message (1). For a read request, the Troxy then executes the fast path for reads (2) and in case of success immediately returns the cached reply (see Section IV). For a write request or in case of a read-cache miss, the Troxy forwards the client request to its local replication logic to invoke the BFT agreement protocol (3), thereby itself assuming the role of a BFT client. Having received the request, the BFT protocol distributes the request to the other replicas in the system and ensures that all correct replicas execute all client requests in the same order. After processing the request, each replica returns the corresponding reply to the replica the client is connected to, where the Troxy’s voting component then determines the correct result by comparing the replies of different replicas (4).
To tolerate \( f \) faults, the voter waits until having obtained \( f + 1 \) matching replies from different replicas before returning the result to the client (5) as this guarantees that at least one of the replies stems from a non-faulty replica and is therefore correct. In summary, by acting as a BFT client for the replication protocol a Troxy already assumes all the additional responsibilities necessary to access a BFT service, freeing the client from the need to perform these tasks itself.
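The voting step described above can be sketched as follows. This is a minimal illustration of the $f + 1$ matching-reply rule, not the paper's actual implementation; the function name and data layout are assumptions.

```python
from collections import Counter

def vote(replies, f):
    """Return a result once f + 1 replicas sent byte-identical replies.

    `replies` maps replica id -> reply bytes. With at most f faulty
    replicas, f + 1 matching replies guarantee that at least one of
    them stems from a correct replica. Returns None while no reply
    has reached the quorum yet.
    """
    counts = Counter(replies.values())
    for reply, n in counts.items():
        if n >= f + 1:
            return reply
    return None
```

In a Troxy-backed system this logic runs inside the trusted proxy rather than at the client, which is precisely what frees legacy clients from BFT-specific tasks.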
B. System Model
The Troxy approach relies on a hybrid fault model [13], [14], [15], [20], [21], [22], [23] in which a system is a collection of components with different resilience characteristics. All Troxies in the system are assumed to either operate correctly or to fail by crashing; in particular, this means that once a client receives a result from a Troxy over a secure channel, the client can trust the result to be correct. In later sections, we discuss how we ensure this trustworthiness for Troxies based on minimizing a Troxy’s trusted computing base (see Section III-C) and utilizing Intel SGX (see Section V).
Fig. 3. Overview of Troxy components and their interactions.
Apart from Troxies, all other replicas and network components in the system may fail in arbitrary ways. The number of servers required in a Troxy-backed system to tolerate such Byzantine faults depends on the BFT replication protocol executed among replicas: using a traditional BFT protocol [1], [4], [24], [25], a minimum of \(3f + 1\) replicas are necessary to tolerate up to \(f\) faults. If a replication protocol itself makes use of trusted components [13], [14], [15], [20], [21], [22], [23], this number can be reduced to \(2f + 1\) replicas. Replica components located outside the Troxy do not trust each other. Components of different replicas communicate by exchanging authenticated messages over the network. If a correct component receives a message it cannot verify, the component discards the message.
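The replica counts stated above can be captured in a one-line helper; this is merely a restatement of the $3f + 1$ versus $2f + 1$ bounds from the text, with an assumed function name.

```python
def min_replicas(f, trusted_components=False):
    """Minimum number of replicas to tolerate f Byzantine faults:
    3f + 1 for traditional BFT protocols, reduced to 2f + 1 when the
    replication protocol itself uses trusted components (hybrid fault
    model), as in the Troxy-backed systems considered here."""
    return 2 * f + 1 if trusted_components else 3 * f + 1
```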
C. Minimizing the Trusted Computing Base
Relying on a hybrid fault model, it is crucial to keep the trusted components as small as possible [21], because the more complex a component, the more likely it is to fail in an arbitrary way, for example, as the result of a program error. To justify the trust put in the Troxy, we therefore minimize its complexity by only performing those tasks inside the Troxy that are critical and actually need to be trusted; in contrast, all noncritical tasks are executed outside the Troxy in the untrusted part of the replica. In essence, this leads to a design where the Troxy is basically a library whose functionality is used by the untrusted replica part via method calls.
With regard to client communication, this separation of critical and noncritical tasks means that most of the network-connection handling can be performed outside the Troxy. In particular, this includes the management of connected sockets, the handling of worker threads operating on these sockets, as well as the execution of the actual send and receive operations. Overall, there are only three major critical tasks the trusted Troxy needs to perform: (1) when the client connects to a replica, the replica’s Troxy controls the establishment of the secure channel and afterwards stores the associated session key in order to prevent the untrusted part of the replica from being able to impersonate the Troxy. (2) When the client sends a request to the server over the secure channel, the untrusted part of the replica receives the request message. However, only the Troxy is able to decrypt the request using the session key. Having decrypted the message, the Troxy then checks its integrity and creates a BFT-protocol request in which it includes the client request as payload. Finally, the Troxy authenticates the BFT request using the method expected by the underlying BFT replication protocol (e.g., a keyed-hash message authentication code (HMAC) in our implementation) before handing over the request to the untrusted part of the replica. This way, by atomically decrypting the client request and creating a corresponding authenticated BFT request, the Troxy ensures that the request cannot be altered by the untrusted replica part without being detected. (3) After the request has been executed, the Troxy collects the replies provided by different replicas, verifies the authenticity of these replies, and then compares them to determine the correct result. Based on this result, in a final step, the Troxy creates a reply to the client and encrypts this message using the session key of the client’s secure channel.
The actual transmission of the reply is performed outside the Troxy in the untrusted part of the replica. However, due to the untrusted replica part not having access to the session key, it is unable to manipulate the reply without the client detecting such a modification.
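Critical task (2), verifying an incoming client request and atomically turning it into an authenticated BFT request, can be sketched as follows. The channel framing (payload plus HMAC tag) is a stand-in for real TLS record protection, and all names are assumptions rather than the paper's actual API.

```python
import hashlib
import hmac

def wrap_client_request(session_key, protocol_key, channel_msg):
    """Sketch of critical task (2): check the integrity of a request
    received over the client's secure channel, then emit a BFT request
    authenticated with the replication protocol's key (an HMAC here,
    as in the described implementation)."""
    payload, tag = channel_msg["payload"], channel_msg["tag"]
    expect = hmac.new(session_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("client request failed integrity check")
    # The client request becomes the payload of the BFT request.
    auth = hmac.new(protocol_key, payload, hashlib.sha256).digest()
    return {"payload": payload, "auth": auth}
```

Because verification and re-authentication happen in one trusted step, the untrusted replica part never handles a decrypted-but-unauthenticated request it could tamper with.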
D. Fault Handling
When a Troxy returns a reply to the client, the client can trust the reply to be correct. However, in case of faults there can be situations in which a client at first does not receive a reply to its request, for example, due to the server hosting the Troxy having crashed. To handle such scenarios where a Troxy ceases to operate, we exploit the fact that clients of user-facing services typically are already equipped with a mechanism to automatically reconnect to the service once their existing connections time out, for example, relying on an external location service to assist in the failover to another replica (see Section II-A). As soon as the client reaches a non-faulty replica, after retransmitting the request, the client will eventually receive a corresponding reply from the service.
Using the same failover mechanism, clients are also able to tolerate scenarios in which the untrusted part of a replica, which performs the actual send and receive operations on network connections (see Section III-C), fails to deliver the correct reply provided by the Troxy. Depending on the nature of the fault, in such case the client either detects a corrupted channel (if the untrusted part sends data that is not encrypted with the Troxy’s session key) or experiences a timeout (if the untrusted part sends no data at all). Either way, the client can solve the problem by reconnecting to the service.
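The client-side failover behavior described above amounts to a simple retry loop; the sketch below assumes a `send` callback standing in for (re)connecting over TLS and transmitting the request, which raises on a timeout or a corrupted channel.

```python
def invoke_with_failover(request, replicas, send):
    """Failover sketch: try Troxies one after another until a reply
    arrives over an intact secure channel. `send(replica, request)` is
    a stand-in for the client's normal connect-and-transmit path; it
    returns the reply or raises on timeout / corrupted channel."""
    last_error = None
    for replica in replicas:
        try:
            return send(replica, request)
        except (TimeoutError, ConnectionError) as e:
            last_error = e  # reconnect to the next replica
    raise RuntimeError("no replica reachable") from last_error
```

Note that this is exactly the mechanism user-facing clients already have, which is why no client modification is needed.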
In contrast to the Troxy, the untrusted part of a replica may fail in arbitrary ways. Apart from the scenarios discussed above, handling these kinds of faults is mainly the responsibility of the underlying BFT replication protocol, as is the case in traditional BFT systems. The fact that a Troxy, while acting as a BFT client, is co-located with a BFT replica has no effect on the internal fault-handling procedures of the protocol. Replay attacks are prevented by the secure channel that connects the client with the Troxy: by design, each endpoint will never accept the same chunk of encrypted data twice.
E. Introducing Byzantine Fault Tolerance Using Troxies
In the following, we illustrate the steps necessary to migrate an existing user-facing service that is implemented by a crash-tolerant system to a Troxy-backed BFT system. As an example, we consider a RESTful web service that originally relies on Paxos [26] for fault tolerance and is accessed by a wide spectrum of heterogeneous clients via HTTPS.
The first step to make such a service Byzantine fault tolerant using our approach is to select a BFT replication protocol and to integrate its server-side implementation with the Troxy. This task is greatly facilitated by the Troxy essentially being a library that needs to be invoked at a small number of well-defined locations in the replica logic in order to be able to establish secure channels, to safely translate incoming client requests into BFT requests, and to determine and encrypt the final replies (see Section III-C). On the other hand, the most complex parts of a BFT protocol implementation, such as the ordering and view-change protocols, are left unmodified.
In a second step, the server-side application logic of the web service must be ported from the original crash-tolerant protocol to the BFT protocol. For this task, it is usually possible to benefit from the fact that BFT protocols and crash-tolerant protocols such as Paxos or Raft [27] in general provide comparable interfaces and pose similar requirements on applications, for example, with regard to execution determinism or the ability to create/apply checkpoints of their state.
To enable the Troxy to communicate with clients, in a final step, the Troxy must be made aware of the message format used by the service for requests. In this context, there is no need for the Troxy to fully parse and understand incoming requests. Instead, it is sufficient for the Troxy to identify request boundaries in order to be able to properly store the incoming client request in the newly created BFT request; for replies, the Troxy usually can simply extract the payload contained in the verified BFT result and return it to the client. For many communication protocols, including HTTP, identifying message boundaries is straightforward due to messages carrying information about their own length.
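Identifying request boundaries without fully parsing the message can be done as sketched below for HTTP; this is an illustrative helper under the assumption of `Content-Length`-framed bodies (chunked encoding would need extra handling), not the Troxy's actual code.

```python
def extract_http_request(buffer):
    """Find the boundary of one HTTP request in a byte stream without
    parsing it fully: headers end at CRLFCRLF, and Content-Length (if
    present) gives the body size. Returns (request_bytes, rest), or
    None if the buffer does not yet hold a complete request."""
    head_end = buffer.find(b"\r\n\r\n")
    if head_end == -1:
        return None  # headers still incomplete
    body_len = 0
    for line in buffer[:head_end].split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            body_len = int(value.strip())
    end = head_end + 4 + body_len
    if len(buffer) < end:
        return None  # body still incomplete
    return buffer[:end], buffer[end:]
```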
The steps discussed above have shown that the migration overhead is small if a service is already resilient against crashes. However, with Troxy providing transparent access to BFT systems, even for unreplicated services that so far offer no fault tolerance at all, the changes necessary to integrate Byzantine fault tolerance are limited to the location service (i.e., to make it replication aware) as well as the server-side implementation. In contrast, there is no need to modify the potentially large number of diverse client implementations.
IV. FAST-READ CACHE
Troxy features a managed fast-read cache that not only validates cache entries when processing regular read requests, but also removes entries from the cache if a write request is about to outdate cached data. As a key benefit, by invalidating cache entries while processing write requests and before their effects are emitted to clients, Troxy is able to maintain consistency guarantees offered by the underlying BFT protocol. In the following, we present details on Troxy’s fast path for reads using the example of a BFT system that is based on a hybrid fault model and therefore can tolerate $f$ faults with $2f + 1$ replicas, as it is the case for our prototype implementation (see Section V-B).
A. Protocol
In line with previous research [3], [4], [5], our fast-read optimization assumes that read and write requests can be distinguished before executing them and that it can be determined which part of the state a request is about to access or modify. The described functionality is executed inside a Troxy instance and therefore trusted with the exception of functions that are provided by the surrounding replica.
Our fast-read cache utilizes the processing of a write request to remove an outdated entry from the cache before the effects of the write are visible to any client, that is, before the reply to the write is returned to its client. To ensure this, we make two important changes to introduce the cache: (1) We modify the voter to only take the reply of another replica into account if the reply is authenticated by the other replica’s Troxy. As a consequence, this requirement forces a replica to hand over a reply to its local Troxy in order for the reply to have an impact on the final result, thereby giving the Troxy the opportunity to learn about a write and to subsequently invalidate an outdated cache entry. To authenticate a local reply, a Troxy computes an HMAC that is based on a shared secret, which is known amongst all Troxies, and an identifier specific to each Troxy instance. (2) We extend the replies provided by local replicas to not only contain the application’s result but also (a hash of) the original request in order to allow a Troxy to identify the cache entry to invalidate. As before, a Troxy only returns a result to the client after having received $f + 1$ matching replies (which now include the request) from different replicas.
With regard to the fast-read cache, this means that when a write reply reaches this point, it is ensured that a majority of replicas in the system have invalidated the associated cache entry.
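The write-path changes (1) and (2) can be sketched together: before a local reply may count toward the vote, the Troxy invalidates the cache entry the write outdates and authenticates the extended reply. The HMAC keying and all names are assumptions for illustration.

```python
import hashlib
import hmac

def authenticate_local_reply(cache, troxy_secret, troxy_id, request, result):
    """Sketch of changes (1) and (2): the voter only accepts replies
    authenticated by a Troxy, so every write reply passes through the
    local Troxy, which (1) invalidates the outdated cache entry before
    the reply can become visible and (2) authenticates the extended
    reply (result + request hash) with the shared Troxy secret plus an
    instance-specific identifier."""
    req_hash = hashlib.sha256(request).digest()
    cache.pop(req_hash, None)  # invalidate before the reply is visible
    mac = hmac.new(troxy_secret + troxy_id, result + req_hash,
                   hashlib.sha256).digest()
    return {"result": result, "request_hash": req_hash, "mac": mac}
```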
As shown in Figure 4, if a Troxy receives a read request from a connected client it first determines if the fast-read cache can be utilized by calling check_cache, which takes the client-provided request as an input. Next, it checks if the cache contains data that answers the request. If not, the request is ordered and executed as any other request. Otherwise, a set of $f$ remote Troxies is randomly chosen and queried using get_remote_cache_entry$(r, req)$. This function generates an authenticated message for replica $r$ to query its Troxy about the currently processed request, which is handed over to the untrusted replica code for transmission. On the remote side, the receiving Troxy instances validate the message and then check if the requested data is cached (see L. 17, Figure 4). The request and associated reply, both authenticated, are returned
to the initial requesting Troxy. Next, it is validated if all $f$ request and reply pairs match the local data. If this is the case, the reply is returned to the client and a successful cache lookup has been performed. In case of a mismatch, which for example can be the result of concurrent write requests or actions performed by malicious replicas (e.g., the replay of a stale reply), the read request is ordered in the common way.
Note that a more aggressive use of hashes can reduce the amount of exchanged data. In addition, timeouts might be used to detect unresponsive replicas.
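The voting-side lookup of Figure 4 can be rendered as runnable code; this is a simplified sketch (authentication of the remote queries is abstracted into the `remote_lookup` callback, and all names are assumptions).

```python
import random

def check_cache(cache, req, remote_lookup, f, replicas):
    """Simplified rendering of Figure 4's voting-side lookup: serve a
    read from the local cache only if f randomly chosen remote Troxies
    report the same (request, reply) pair; a cache miss or any mismatch
    falls back to the ordered protocol (signalled by returning None)."""
    reply = cache.get(req)
    if reply is None:
        return None                          # cache miss
    chosen = random.sample(replicas, f)      # select f remote caches
    rc = [remote_lookup(r, req) for r in chosen]
    if all(entry == (req, reply) for entry in rc):
        return reply                         # fast read succeeded
    return None                              # mismatch amongst caches
```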
B. Ensuring Consistency and Resilience to Performance Attacks
In the scope of the implemented prototype we consider a system that relies on a hybrid fault model, requires only $2f + 1$ replicas, and offers strong consistency. The aim of Troxy and its fast-read cache is to preserve the guarantees offered by the underlying protocol. This is achieved by tightly entangling the maintenance of the fast-read cache with the protocol execution, so that an attacker cannot drive replicas and Troxies into making conflicting statements. With a total of $2f + 1$ replicas in the hybrid fault model, completing a write operation takes a quorum of $f + 1$ replicas providing authenticated replies. Since reply authentication is done by the Troxy inside the trusted subsystem, these $f + 1$ replicas must have deleted the related entry in their fast-read cache before the reply becomes visible to any client. Meanwhile, a successful fast-read operation also needs $f + 1$ identical entries, meaning that at least $f + 1$ replicas must still contain a matching entry in their caches. This is not possible, as both quorums intersect in at least one replica, whose trusted Troxy is responsible for providing the necessary response to either side. Hence, a successful fast read is ensured to reflect the state of the latest write. One option for an attacker would be to roll back the trusted subsystem by a reboot; however, in this case the cache would simply lose its entire state and queries would be returned unanswered, which results in the execution of the underlying protocol. In general, the forwarding of a reply due to a write request always results in a cache invalidation but not in a cache update.
This is necessary as the local Troxy can confirm the origin of the reply but not its correctness, thus a faulty replica should not be able to pollute the cache.
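The quorum-intersection argument underlying this consistency guarantee can be checked exhaustively for small configurations; the sketch below is an illustration of the argument, not part of the system.

```python
from itertools import combinations

def quorums_intersect(n, q):
    """Check that any two quorums of size q among n replicas share at
    least one replica. With n = 2f + 1 and q = f + 1, the common
    replica's trusted Troxy cannot both have invalidated its cache
    entry for a completed write and still confirm the stale entry to a
    fast read, so the two quorums cannot make conflicting statements."""
    replicas = range(n)
    return all(set(a) & set(b)
               for a in combinations(replicas, q)
               for b in combinations(replicas, q))
```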
V. IMPLEMENTATION
Below, we present our prototype of a Troxy-backed system, providing details on the SGX-based Troxy implementation as well as its integration with the BFT protocol Hybster [13].
A. Troxy Implementation
Our Troxy implementation is written in C/C++ and relies on Intel’s Software Guard Extensions (SGX) [9] and its SDK [28] to achieve isolation between the trusted and the untrusted parts of a replica. The Troxy runs inside a trusted execution environment provided by SGX, a so-called enclave, that is protected by the CPU via transparent memory encryption and integrity checking. The only way to enter and exit an enclave is through an enclave interface, which defines the entry points and the maximum number of concurrent threads allowed at any point in time inside the enclave. An enclave call (ecall) is needed for calling enclave functions, while an outside call (ocall) is explicitly used for calling from an enclave into the untrusted environment. An ecall leads to executing a TLB flush, switching to a trusted stack located inside the enclave, copying the parameters from untrusted memory, and calling the trusted function. Similarly, an ocall causes a TLB flush, switching back to the untrusted stack, moving parameters out of the trusted memory, and exiting the enclave. Due to their high overhead, it is best practice to minimize enclave transitions.
Troxy implements ecalls for data transfer between enclaves and the untrusted environment as well as for data processing inside enclaves. In order to keep the interface small, Troxy defines only 16 ecalls and no ocalls under a security-aware programming model. More precisely, these ecalls have been manually verified and are hardened to prevent possible attacks such as Iago attacks [29] or time-of-check-to-time-of-use attacks [30]. For example, the data transfer between the untrusted environment and enclaves requires additional copies of the message buffers. A read buffer is always directly copied into the enclave to avoid time-of-check-to-time-of-use attacks; in contrast, the copy of a write buffer can be done outside the enclave to achieve better performance.
Fig. 4. Cache lookup when processing read requests.

 1  // Cache lookup in case of voting Troxy instance
 2  upon call check_cache(req) such that req is READ do
 3      reply := cache.get(id(req))
 4      if reply is not null                    // request is cached
 5          replicas := choose_f_replicas()     // select f remote caches
 6          rc := ∅                             // set of remote cached replies
 7          // collect cache entries of f remote replicas
 8          ∀r ∈ replicas, rc.add(get_remote_cache_entry(r, req))
 9          // remote caches match local cache
10          if ∀(req′, reply′) ∈ rc, (id(req′), reply′) = (id(req), reply)
11              return reply                    // fast read succeeded
12          else return null                    // mismatch amongst caches
13      else return null                        // cache miss
14
15  // Cache lookup in case of remote Troxy instance
16  upon call get_local_cache_entry(req) do
17      reply := cache.get(id(req))
18      return (req, reply)
In BFT protocols like PBFT [1] that feature a read optimization where $2f + 1$ replicas are queried, a client can only utilize the result if all replies match. Thus, faulty replicas can return wrong results and frequently prevent a successful read optimization. In case of Troxy we are in a similar situation, as we query $f$ randomly chosen Troxies for their cache entries. However, we additionally measure the cache miss rate inside the Troxy. If the miss rate reaches a configurable system constant, the fast-read optimization is avoided in favor of a traditional protocol run. As shown in the evaluation, this also addresses the case of write contention, where a lot of cache misses occur due to conflicts.
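The miss-rate guard can be sketched as a small sliding-window counter; the class name, window size, and threshold handling are illustrative assumptions, with the paper only stating that the threshold is a configurable system constant.

```python
class FastReadGate:
    """Sketch of the miss-rate guard against performance attacks:
    track the fraction of failed fast-read attempts over a sliding
    window and skip the optimization once the miss rate reaches a
    configurable threshold, falling back to the ordered protocol."""

    def __init__(self, threshold=0.5, window=100):
        self.threshold = threshold
        self.window = window
        self.outcomes = []          # True = fast-read hit, False = miss

    def record(self, hit):
        self.outcomes.append(hit)
        self.outcomes = self.outcomes[-self.window:]

    def use_fast_read(self):
        if not self.outcomes:
            return True
        miss_rate = self.outcomes.count(False) / len(self.outcomes)
        return miss_rate < self.threshold
```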
To enforce the validity of enclaves, Intel provides a remote attestation service [9]. In a nutshell, a hash of the memory pages of the enclave is securely computed and sent to the remote attestation service so that the user can obtain a proof that the enclave has been initialized correctly. Once the enclave has been correctly attested it is possible to provision it. Any cryptographic key and secret, such as the private key used by Troxy to initialize a secure connection with the clients, can be securely sent to the enclave during the provisioning phase.
The enclave code and data is stored in the Enclave Page Cache (EPC), a specific region of memory protected from untrusted accesses. In the current implementation of Intel SGX, this memory area has a maximum size of 128MB. Accessing memory beyond the size of the EPC results in costly paging, as the pages need to be encrypted and integrity-protected before being evicted to main memory. As this operation incurs a high performance overhead [31], we limit memory allocations to keep the memory footprint as small as possible. Furthermore, to avoid additional ocalls and paging [32], the Troxy can store data in an encrypted manner outside the enclave. When it needs to be accessed, it is directly read from the untrusted memory and validated by comparing it against a hash securely stored inside the Troxy.
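The pattern of spilling data outside the enclave while keeping only a validation hash in scarce EPC memory can be sketched as follows; the class is an illustration (encryption of the spilled blob is elided), not the Troxy's actual data structure.

```python
import hashlib

class OutsideStore:
    """Sketch of storing data outside the enclave: the untrusted side
    holds the (notionally encrypted) blob, while only its hash stays
    inside the enclave; on access the blob is read back from untrusted
    memory and validated against the securely stored hash."""

    def __init__(self):
        self._trusted_hashes = {}   # conceptually inside the enclave
        self.untrusted = {}         # conceptually outside the enclave

    def put(self, key, blob):
        self._trusted_hashes[key] = hashlib.sha256(blob).digest()
        self.untrusted[key] = blob

    def get(self, key):
        blob = self.untrusted[key]
        if hashlib.sha256(blob).digest() != self._trusted_hashes[key]:
            raise ValueError("untrusted memory was tampered with")
        return blob
```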
Finally, Troxy provides bidirectional TLS authentication for all messages exchanged between clients and replicas. For this purpose, Troxy uses the TaLoS [33] library, which exposes a TLS interface to existing applications while securely executing the TLS logic inside an Intel SGX enclave. Note that we run it in a completely encapsulated manner: there are no ecalls or ocalls between TaLoS and the untrusted environment.
B. Troxy-backed Hybster
To provide fault tolerance, our prototype implementation relies on Hybster [13], a BFT replication protocol that is based on a hybrid fault model and therefore only requires $2f + 1$ replicas to tolerate $f$ Byzantine faults. Hybster is implemented in Java and uses Intel SGX to realize a trusted subsystem for message authentication. It achieves high performance via parallelization, where the performance scales well along with the number of NICs and CPU cores. The trusted subsystem of Hybster is also used by Troxy for trusted authentication upon internally exchanged messages during the ordering phase. In our implementation the interaction between the protocol running in the untrusted part of the replica and the SGX enclave is handled via the Java Native Interface (JNI).
Hybster is a leader-based BFT protocol: a special node is in charge of proposing an ordering on the requests received by the clients. Figure 5 shows the message flow in the resulting Troxy-backed system. Compared with the original Hybster (see Figure 5a), introducing the Troxy adds one message delay for a client that is connected to Hybster’s leader replica (see Figure 5b). In this extra phase, the corresponding Troxy collects and compares the replies to the client’s request in order to determine the correct result. For clients connected to servers hosting Hybster followers, an additional phase is necessary to transmit the request to the leader, as only the leader is able to initiate the agreement process for requests (see Figure 5c).
Note that for a setting in which the replicas of a system are hosted in different fault domains inside the same data center (e.g., different racks with independent power and network supply [34]), the additional messages only have a minor impact on the overall latency experienced by the client.
Apart from highlighting individual message flows, Figure 5 also illustrates another important difference between traditional BFT systems and a Troxy-backed BFT system: with the Troxy performing reply voting at the server side, the client receives only a single reply per request. In practice, this approach has several key advantages: First, in a typical setting where clients are connected to the service over a wide-area network, less data has to be sent over long-distance links, which is especially beneficial for low-bandwidth clients. Second, during periods of unstable (wide-area) network conditions it improves the response time of the service, as the latency experienced by the client no longer depends on the arrival of the slowest of the $f + 1$ (normal requests) or $2f + 1$ (read optimization) matching replies. Third, and most importantly, it makes the BFT replication system transparent to clients.
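The server-side reply voting that produces this single reply is, at its core, a quorum check: with at most $f$ faulty replicas, $f + 1$ matching replies guarantee that at least one correct replica vouches for the result. A minimal sketch, with an illustrative function name and replies represented as plain values:

```python
from collections import Counter

def vote_reply(replies, f):
    """Return a reply vouched for by at least f + 1 replicas, or None
    if no such quorum exists yet. With at most f faulty replicas, f + 1
    matching replies must include at least one from a correct replica."""
    counts = Counter(replies)
    if not counts:
        return None
    value, n = counts.most_common(1)[0]
    return value if n >= f + 1 else None
```

In Troxy this check runs inside the enclave on the server side, so the client only ever sees the already-voted result.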
VI. EVALUATION
In this section we evaluate the performance of Troxy compared to Hybster using both microbenchmarks and an HTTP service. The results show that: (1) For ordered small-payload messages in a local network, Troxy has an overhead of at most 43% due to its extra communication steps (see Figure 5) and trusted environment transitions. (2) For larger messages with network delay, Troxy improves performance over Hybster by up to 70%. (3) For read-heavy workloads with network delay, the fast-read cache optimization improves the throughput by 130% even in the presence of conflicting write requests. (4) When considering an HTTP service with network delay, Troxy can almost hide the replication cost, allowing clients to observe a latency similar to that of a non-replicated service.
### A. Experimental Setup
The measurements are conducted on a cluster of five identical machines connected via four 1 Gbps Ethernet NICs. Each machine is equipped with an SGX-capable Intel Core i7-6700 quad-core processor running at 3.4 GHz with Hyper-Threading activated as well as 24 GB of memory. Three machines are dedicated to the replicas (hence we consider $f = 1$ faults) while the two remaining ones are running as clients. All the machines are running 64-bit Ubuntu 16.04 with a Linux kernel 4.4.0, OpenJDK 1.8 and the Intel SGX SDK v1.9. We compare the performance of our Troxy-backed Hybster variant with the original Hybster protocol, denoted BL (for baseline).
### B. Security Analysis
In this section we analyse the security of Troxy.
**Performance attacks:** A malicious replica could try to return old cache entries in the case of the fast-read cache optimization. As a result the fast read would fail, slowing down the protocol. As discussed in Section IV-B, Troxy selects $f$ random replicas to reply to a fast-read query and monitors the cache miss ratio to address such attacks.
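A sketch of this countermeasure follows; the names and the miss-ratio threshold are our assumptions, as the paper does not specify the exact bookkeeping.

```python
import random

def pick_fast_read_repliers(replica_ids, f, rng=random):
    # Choosing the f extra repliers at random prevents one fixed
    # malicious replica from systematically failing fast reads.
    return rng.sample(list(replica_ids), f)

class CacheMissMonitor:
    """Sketch of the miss-ratio monitoring; the threshold is illustrative."""

    def __init__(self, threshold=0.5):
        self.hits = 0
        self.misses = 0
        self.threshold = threshold

    def record(self, hit):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def suspicious(self):
        # An abnormally high miss ratio hints at replicas returning
        # stale or mismatching cache entries.
        total = self.hits + self.misses
        return total > 0 and self.misses / total > self.threshold
```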
**Side-channel attacks:** We consider side-channel attacks out of the scope of this paper. However, Troxy can implement existing techniques to limit side-channel attacks inside an SGX enclave [35], [36], [37].
**Bypassing Troxy:** A malicious replica could bypass Troxy in order to break the safety of the system, by directly communicating with the clients. To prevent this attack the clients and Troxy initiate secure connections using the TLS protocol. The session keys are securely stored inside the Troxy, thus the malicious replica cannot forge correct messages.
**Interface attacks:** A malicious replica could attack the enclave interface in order to get access to the secrets stored inside the Troxy. As discussed in Section V, the enclave interface has been hardened to prevent such attacks.
**Denial-of-Service and flooding:** A malicious replica could mount a Denial-of-Service attack by not executing the Troxy or not following its protocol, or, conversely, flood the correct replicas or clients with invalid messages. In all these cases the goal of the malicious replica is to render the system unusable. Troxy can leverage existing techniques [38] to prevent such attacks.
### C. Microbenchmark
We created a microbenchmark to evaluate the full capacity of Troxy and to investigate the overhead of (1) relocating the traditional client-side library to the server side and (2) using the trusted subsystem to protect the Troxy. A configured number of clients constantly issue asynchronous requests, and the average throughput and latency are measured for 60 seconds. The final results are the average values of three runs. Batching is not used, as it is an orthogonal approach whose influence on the results is independent.
Secure socket connections are applied to the client-to-replica communication for both the baseline and Troxy, while the replica-to-replica communication keeps using plain sockets and HMACs for message authentication. Clients only connect to the leader in the baseline system, while Troxy allows connections to any replica. We created a simple service that accepts requests and generates a reply message of configurable size. Read and write requests can be distinguished by their operation types. We ran experiments in three different scenarios, where (1) write requests are totally ordered; (2) read optimizations are applied to handle read-only requests, and (3) concurrent write requests cause conflicting reads, which leads to the traditional ordering of conflicting read requests.
In addition to the local network configuration, we also simulate a wide-area network by adding a $100 \pm 20$ ms delay (normally distributed) to the NICs of the client machines. We consider this the typical usage scenario of Troxy, that is, data-center-hosted services accessed by remote legacy clients.
1) **Totally Ordered Requests:** In this scenario, we consider write requests of different sizes: 256 B, 1 KB, 4 KB and 8 KB. The size of the reply is always 10 B. Two implementations of Troxy in C/C++ are compared against the baseline: ctroxy, running in the untrusted environment without SGX, indicates the impact of using JNI; while etroxy, running inside an enclave, adds the overhead of utilizing the trusted subsystem.
Figure 6 shows the measurement results for handling write requests in the local network. With a small request payload size (256 B), etroxy exhibits a performance loss of about 43% due to the transitions between the trusted and untrusted environments as well as the extra steps in processing ordered requests (see Figure 5). More precisely, comparing against the performance of ctroxy (without SGX), half of the performance loss in etroxy is caused by using the trusted subsystem. When the payload size increases, ctroxy and etroxy start to provide similar performance, and etroxy reaches the baseline at 8 KB. This is due to the fact that authenticating messages with large payloads is faster in C/C++ than in Java.
We also measure the performance with a network delay in between the clients and replicas. As illustrated in Figure 7, the server-side reply voter brings a huge advantage to Troxy. In this case, for each request, the clients wait for only one reply that is affected by the delay instead of $f + 1$ replies. This advantage applies to different request payload sizes, and leads to up to 60% performance gain.
2) **Read Optimizations:** We measure the performance of the fast-read cache using read-only requests with different payload sizes: 10 B/256 B, 10 B/1 KB, 10 B/4 KB and 10 B/8 KB for request/reply messages, respectively. The baseline system implements a PBFT-like read optimization approach [1], where read requests are directly forwarded to the followers for execution without being ordered. For read-only workloads, this approach can be very effective as there are no concurrent state transitions to create conflicts in the read results.
Figure 8 shows the results of handling read-only requests in the local network. On the one hand, with small requests (10 B), the fast message authentication cannot compensate for the overhead of the server-side reply voter. The overhead with 256 B replies is as high as 115%. On the other hand, along with the increasing reply size, the effect of fast authentication becomes more visible. With 4 KB replies etroxy can already overtake the baseline, and at 8 KB we observe about 30% throughput improvement.
The result of the measurement with a network delay is shown in Figure 9. Although the server-side reply voter adds overhead to Troxy, the extra network delay has less impact on Troxy’s performance. Compared to the baseline, with 256 B replies etroxy only incurs a 33% performance degradation with network delay, compared with 115% without network delay. In addition, as the fast-read cache only needs to transfer the hash of the reply between replicas for a fast-read operation instead of a full reply, this further reduces the authentication and transmission cost. When the reply size is above 1 KB, etroxy outperforms the baseline by at least 15%.
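The hash-based fast read can be sketched as follows: the voter holds one full cached reply and accepts it once $f$ peer hashes match it, for $f + 1$ matching opinions in total. The function name and the use of SHA-256 are our assumptions.

```python
import hashlib

def fast_read_accept(cached_reply: bytes, peer_hashes, f: int) -> bool:
    """Accept the cached reply iff at least f hashes received from
    other replicas match it. Together with the local copy this yields
    f + 1 matching opinions, enough to outvote f faulty replicas.
    Peers only ship a fixed-size digest, not the full reply."""
    if len(peer_hashes) < f:
        return False
    expected = hashlib.sha256(cached_reply).digest()
    return sum(1 for h in peer_hashes if h == expected) >= f
```

Transferring a 32-byte digest instead of a multi-kilobyte reply is exactly where the authentication and transmission savings described above come from.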
3) **Concurrency Handling:** In this scenario, 1% of write requests are generated among the reads to introduce concurrent state transitions during fast-read operations. Due to the different read optimization approaches, the 1% write workload results in different read conflict rates for the baseline and Troxy (only etroxy is evaluated in this scenario). For the baseline, nearly 50% of reads return conflicting results and have to be ordered for a second round of processing, adding substantial extra overhead to the system. As for Troxy, the fast-read cache acts in a conservative way: since write requests invalidate existing cache entries, the subsequent reads of those entries are ordered to prevent conflicts. This way, the observed conflict rate goes down to 14%.

We also conducted a measurement where no optimization is applied, so that all reads are ordered, to get a reference throughput of each system for comparison. Figure 10 illustrates that the overhead of having 50% read conflicts contributes to the significant performance loss of the baseline, with the read optimization achieving only half of the reference throughput. For Troxy, the 14% read conflicts also decrease performance to a point slightly below its reference throughput. Therefore, we further optimized the approach by monitoring the conflict rate inside Troxy: once the conflict rate goes beyond a certain threshold, Troxy automatically switches to the total-order mode in which all requests are ordered (see Section IV-B). This threshold can be learned by sampling the system to determine at which conflict rate the benefits gained by fast reads disappear. This way, the optimized fast-read cache can guarantee a lower-bound performance in case of frequent conflicts.

### D. HTTP Service

In addition to the microbenchmark, we created a simple, replicated HTTP service that handles HTTP GET and POST requests and returns the queried or modified pages as responses. Its performance is measured with the HTTP benchmarking tool Apache JMeter [39]. As we are interested in evaluating the overhead of using a BFT system and the trusted subsystem in a latency-sensitive application, we ensure that JMeter is configured not to saturate the replicas, launching 100 clients to issue a total of 500 requests per second.

We measure the performance of the HTTP service in three implementations: (1) with the baseline protocol; (2) with Prophecy [5], a middlebox-based approach that mimics clients towards the BFT replicas and is tailored to improve the performance of read-heavy workloads; and (3) with Troxy. Table I summarizes the three implementations regarding their read optimization approaches and consistency level.

The baseline protocol implements a PBFT-like read optimization, which optimistically executes non-ordered read requests and accepts a result as soon as $f + 1$ identical replies are received. In case of a failed quorum due to concurrent write operations, the client has to resend the request and ask for a regular ordering to enforce linearizability. Prophecy deploys a cache in a middlebox placed between the client and the replicas. This cache stores the results of the ordered reads to reduce the execution cost of read requests with large payloads for read-heavy applications. It requires only one reply from a randomly chosen replica to be compared with the cached result. However, it trades consistency for a higher throughput: the reply of a read operation reflects the state of the latest *read*, so in the worst case it would return a stale but correct result to the client. In contrast, Troxy actively manages the fast-read cache to reflect the state changes of the latest *write*, thus guaranteeing strong consistency.

For the baseline, we run JMeter on the same machine as the client-side library, and use a local socket connection for message forwarding. As for Prophecy, JMeter is running on a separate machine, and establishes a secure socket connection to the client machine where the middlebox is located. Since Troxy provides transparent access to clients, JMeter can directly connect to the replicas without any modifications. Besides that, we also run a stand-alone version of the HTTP service using Jetty (v9.4) [40] to see its original performance.

The measurements are conducted in two scenarios: in the local network and with 100 ± 20 ms network delay. The GET and POST requests are issued with a payload size of 200 B, while the response message size ranges between 4 KB and 18 KB. The average latency to execute requests
\[
\begin{array}{|c|c|c|c|}
\hline
 & \text{Replicas} & \text{Quorum} & \text{Consistency} \\
\hline
\text{BL} & 2f+1 & f+1 \text{ replicas} & \text{Strong} \\
\text{Prophecy} & 3f+1 & 1 \text{ replica + middlebox} & \text{Weak} \\
\text{Troxy} & 2f+1 & f+1 \text{ replicas} & \text{Strong} \\
\hline
\end{array}
\]
Table I: Read optimization approaches and consistency of the three implementations.
This is the author’s version of the work. For personal use only, not for redistribution. The definitive version will be published in the proceedings of the 2018 48th Annual IEEE/IFIP Conference on Dependable Systems and Networks (DSN).
is reported in Figure 11. In both scenarios, the stand-alone implementation (Jetty) indicates the original performance of the service. In case of a local network, both the baseline and Troxy keep a low latency, with an overhead of at most 1.8 ms, while the two socket connections in Prophecy contribute to a latency almost twice as high. When the network delay is applied, the latency of the baseline implementation raises dramatically, as its reply voter is located on the client machine. The network delay between the client and the replicas significantly impacts the latency observed by the client. For Prophecy and Troxy, as their voters are close to the replicas (on the middlebox machine and in the fast-read cache on a replica, respectively), this extra round-trip impact is negligible. The results of this measurement show that in a wide-area network, using Troxy-backed BFT systems is beneficial for user-facing legacy applications.
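The conflict-rate-triggered fallback used in the concurrency-handling experiments above can be sketched as follows. The threshold and window size are illustrative; the text only states that the threshold can be learned by sampling the system.

```python
class FastReadCache:
    """Sketch of the mode switch: once the observed conflict rate over a
    sliding window exceeds a threshold, fall back to ordering every
    request (total-order mode) to guarantee lower-bound performance."""

    def __init__(self, threshold=0.3, window=100):
        self.threshold = threshold
        self.window = window
        self.outcomes = []          # True = fast read that conflicted
        self.total_order_mode = False

    def record_read(self, conflicted: bool):
        self.outcomes.append(conflicted)
        if len(self.outcomes) > self.window:
            self.outcomes.pop(0)    # keep only the most recent window
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate > self.threshold:
            self.total_order_mode = True
```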
VII. RELATED WORK
Traditional BFT state machine protocols consist of libraries attached to both client and server [1], [3], [4], [6], [41]. The client-side library is mainly responsible for service invocation, message transfer, and reply voting. In contrast, Troxy provides a transparent and secure connection between the client and the replicated service by leveraging trusted computing technology. The complexity of the replicated fault-tolerant system, in terms of protocol, exchanged messages, and interface, is therefore hidden from the clients, and legacy clients can interact with BFT services without any changes.
Troxy is not the first protocol to explore the usage of trusted subsystems in BFT systems. A2M-PBFT [15] is based on a trusted append-only log, enabling it to reduce the number of required replicas compared to traditional protocols from $3f + 1$ to $2f + 1$. TrInc [22] is a subsystem providing trusted counters that can be employed as a less complex replacement for the trusted log of A2M-PBFT. MinBFT and MinZyzzyva [14] are two protocols that directly make use of a counter-based trusted subsystem. The most recent representative of this class of protocols is Hybster [13]. Hybster is also based on trusted counters and $2f + 1$ replicas. However, it overcomes the difficulties of other hybrid protocols, such as a time-dependent memory demand, and exhibits significantly improved performance by introducing consensus-oriented parallelization [2] into the hybrid fault model. Besides using an FPGA-based trusted subsystem, CheapBFT [21] saves resources by exploiting passive replication: $f$ out of $2f + 1$ required replicas remain passive and are activated only in case a faulty behavior is suspected. Similarly, V-PR [23] employs trusted computing technology, named XMHF/TrustVisor [42], to design a fully-passive replicated system for tolerating Byzantine failures. By leveraging a trusted subsystem, all those protocols have a lower complexity, in terms of exchanged messages and number of replicas, compared to traditional BFT protocols. Nevertheless, none of the aforementioned systems are transparent from the client’s point of view.
Prophecy [5] executes a special component between the client and the server and thus does not require modifications at the client side. As in Troxy, this component needs to be trusted and acts as a proxy by receiving the client request, collecting the replies from the replicas and sending a single reply back to the client. However, compared to Troxy, Prophecy (i) requires a large trusted computing base comprised of a middlebox, operating system, and network stack; and (ii) is not able to ensure strong consistency.
SPARE [43] is transparent to the clients by locating the reply voter on the server side. SPARE executes replicas inside virtual machines, thus requiring a virtualization layer and hypervisor, and considers a specific fault model where the replicas can exhibit Byzantine behavior; the hypervisor and reply voter fail by crashing only. The practicality of SPARE is limited by its large trusted computing base, composed of an entire hypervisor, a management operating system, and the reply voter.
Thema [44] and BFT-WS [45] extend the classic approach of having a generic client-side library and a server-side library with an additional web-service library. This library collects identical request messages from the different replicas, sends the request to a non-replicated web service, and forwards the reply back to the replicas. Thus, these works address an orthogonal problem and could be combined with Troxy.
Avoine et al. [46] present a deterministic fair exchange algorithm running in untrusted hosts with security modules. The untrusted hosts are unable to forge valid protocol messages due to the security modules comprising the entire consensus-protocol implementation. In contrast, the goal of BFT protocols such as Hybster (the protocol used by Troxy) is to keep the trusted computing base as small as possible by implementing most protocol parts in the untrusted host.
There is a growing number of systems that utilize SGX to secure computing in the context of cloud computing [31], [47], perform application level secure data processing [48], and enable trusted client-side computing and offloading [49], [50], just to name a few. To our knowledge, none of these systems have used trusted execution to enable compatibility with legacy systems as proposed by Troxy.
VIII. CONCLUSION
We have presented Troxy, a system which leverages trusted execution environments to offer clients transparent access to BFT systems. In contrast to traditional BFT systems, a Troxy-backed system does not require executing a special library at the client side. Instead, it implements a substitute of the library inside each replica. In addition, it introduces a novel read optimization that features a managed fast-read cache to accelerate read-heavy operations while providing strong consistency guarantees. We implemented a prototype of Troxy in C/C++ with Intel SGX and evaluated its performance with both microbenchmarks and an HTTP service. The results indicate that (1) while Troxy is slower by up to 43% for small payloads, it outperforms a state-of-the-art hybrid BFT protocol by 130% for larger, read-heavy workloads and a realistic network delay; (2) Troxy introduces a negligible latency overhead and is transparent to legacy clients when providing Byzantine fault tolerance to an HTTP service.
Deriving test sets from partial proofs
Guillaume Lussier, Hélène Waeselynck
LAAS-CNRS
7 Avenue du Colonel Roche
31077 Toulouse Cedex 4 - France
E-mail: {glussier,waeselyn}@laas.fr
Abstract
Proof-guided testing is intended to enhance the test design with information extracted from the argument for correctness. The target application field is the verification of fault-tolerance algorithms where a complete formal proof is not available. Ideally, testing should be focused on the pending parts of the proof. The approach is experimentally assessed using the example of a group membership protocol (GMP), a complete proof of which has been developed by others in the PVS environment. In order to obtain a partial proof example, we proceed to flaw insertion into the PVS specification. Test selection criteria are then derived from the analysis of the reconstructed (now partial) proof. Their efficiency for revealing the flaw is experimentally assessed, yielding encouraging results.
1. Introduction
Functional testing approaches usually rely on coverage measures, test purposes, or selection hypotheses associated with models of behavior. Such criteria are used to select finite test sets from the models. They always involve assumptions. For example, transition coverage assumes that flaws manifest themselves as simple output or transfer errors. Test purposes represent pieces of behavior that are deemed important to be tested. Uniformity hypotheses are used to group inputs that should be equivalent in their capability of stimulating the system under test. In this paper, we investigate whether a partial formal proof can be a useful basis for deriving such assumptions.
The target application field is the verification of Fault-Tolerance (FT) algorithms. As FT mechanisms are critical components for building dependable architectures, strong evidence for correctness of the underlying algorithms is desirable. Suppose, however, that a complete formal proof could not be obtained. Then, testing can be seen as a complementary technique to gain confidence that the algorithm should be correct, or to exhibit counter-examples under the form of test scenarios. The tested artifact is possibly a prototype of the algorithm, or a specification that can (in some way) be executed. Ideally, the design of testing should take advantage of the fact that a proof has been attempted. For example, the test size can be reduced if the algorithm requirements have formally been proved to hold for a subset of the input space. An unsuccessful proof by cases might suggest test cases that would be potentially significant to the correctness of the algorithm. Intuitively, one would expect potential flaws to be somehow related to the pending parts of the proof.
While the idea of proof-guided testing seems appealing, its feasibility and efficiency have to be studied on realistic examples. We adopt an experimental approach: starting from incomplete proofs of flawed FT algorithms, we investigate whether the proof analysis does supply useful information for guiding the design of testing.
Our previous work along these lines [4, 5] addressed testing from informal proofs, that is, paper demonstrations done by usual reasoning. Our conclusion was that such proofs may carry relevant information for testing, but this depends on their degree of rigorousness. In [4], the analysis of the proof revealed major flaws of reasoning, and proof-guided testing was unsuccessful. The proof example studied in [5] was much better crafted than the previous one (but still flawed), and allowed us to identify an input subspace that yielded a high failure rate of the algorithm. In this paper, we are now considering the case of formal, but partial, proofs.
For experimental purposes, it was easy to find in the literature examples of incorrect FT algorithms “proved” by informal demonstration. However, examples of partial formal proofs for incorrect algorithms are more difficult to get, as it is only the successful proofs that are made available in the public domain. We decided to proceed as follows: obtain a successful proof, insert a flaw into the specification of the algorithm, and then use the accordingly modified – and now partial – formal proof as a case study for proof-guided testing.
Section 2 presents the background of the example studied in this paper. The FT algorithm is a Group Membership Protocol (GMP). Its formal proof [8] has been developed in the PVS [6] environment. Section 3 gives an overview of our experimental approach. Section 4 describes the GMP algorithm, its requirements, and first analysis results from a testing perspective. After a general presentation of the original proof in Section 5, we proceed to flaw insertion in Section 6. Experimental test results for the flawed algorithm are given in Section 7.
2. Background of the Case Study
In a distributed system, a group membership service allows non-faulty processors to agree on their membership and to exclude faulty ones. The studied algorithm is the membership service offered by the Time-Triggered Protocol (TTP). TTP [3] has been developed over the past twenty years at the university of Vienna, and is now commercially promoted by TTTech. It is an integrated communication protocol for time-triggered architectures, typically used for automotive functions (brake-by-wire, steer-by-wire), or avionics ones (the communication system of the Airbus A380 cabin pressure control system will be based on TTP).
The complexity of the behavior of the group membership protocol (GMP), and its tight interactions with other TTP services, make it difficult to analyze formally. Several attempts were necessary before a complete formal proof could be developed.
A related GMP algorithm, proposed in [2], was first proved by detailed but informal demonstration. The authors used model-checking of an instance of the algorithm to consolidate their paper demonstration for the generic case. Unfortunately, the protocol was found flawed after publication\(^1\). This experience led one of the authors (John Rushby) to formally rework the problem, using the PVS verification system. He eventually succeeded in doing this, but had to develop an original proof method, presented in [11]. This proof method has been later reused at the University of Ulm to prove the TTP GMP. As the proof was progressing, the protocol and its PVS formalization went through successive versions [7, 8, 10]. Our experimental study is based on the last version presented in [8], for which we could obtain the PVS source files and proof scripts.
3. Experimental Approach
Given a partial proof, the proposed approach to designing test sets involves three steps.
- **High-level analysis.** The aim of the analysis is to gain an understanding of the FT algorithm and its requirements: under certain assumptions, some key properties are to be fulfilled. The assumptions include a model of the faults to be tolerated, as well as other environmental assumptions. From their identification, a definition of the algorithm’s test input domain is derived. The key properties yield a specification of the test oracle checking acceptance or rejection of the test results. The understanding of the algorithm must be sufficient to initiate development of a prototype to be tested, in case the specification environment does not offer adequate support for submitting test sets to the formal model (we had to develop such a prototype for the GMP case study).
- **Detailed analysis.** The PVS source files are thoroughly analyzed, so as to gain deep insight into the proof structure. The aim is the identification of the pending parts of the proof, which will be used to direct testing in the next step of the approach. The proof analysis can be conducted at two levels. The first level considers a macroscopic view of the proof structure in terms of intermediate lemmas. It must be understood how pending lemmas contribute to the building of a global proof of the key properties. The second level refines the previous analysis by considering the proof trees associated with each pending lemma: analysis is then performed in terms of undischarged proof sequents in the trees. Our experiments will consider both levels of analysis. It is anticipated that analysis at the sequent level will be more difficult than at the lemma level: in the framework of the case study, it will be investigated to what extent the more difficult analysis allows us to improve the effectiveness of testing.
- **Proof-guided testing.** This step consists in exploiting the results of the previous analysis, whether at the lemma or sequent level, to guide the design of testing. The identified weaknesses of the partial proof are used to determine test selection criteria, i.e., to determine functional cases to be activated during testing. Then the generation of test sets is performed following a probabilistic approach, statistical testing [12]. Statistical testing aims to compensate for the imperfect connection of common test criteria with the flaws to be revealed: the cases identified by a criterion have to be exercised several times with different random test data. In this way, there is no need for a perfect match between identified cases and revealing inputs. In our experimental framework, we evaluate the efficiency of proof-guided testing in terms of induced failure rate of the algorithm (the higher the rate, the better the efficiency), and in terms of improvement with respect to a blind sampling profile.

\(^1\)Note that we used this knowingly flawed example to support previous investigation on testing from informal proofs [5].
Since the studied GMP has been completely proved in the PVS environment, it should be correct with respect to its key properties – provided its formal specification is accurate, which is an important problem but falls outside the scope of this paper. Hence, for this case study, there is no proof weakness toward which testing should be directed. For experimentation purposes, we propose to insert a flaw into the specification of the algorithm and then study whether the accordingly modified – and now partial – formal proof may be helpful to guide the design of testing. In practice, detailed analysis is first performed on the original PVS specification: a fine understanding of the complete proof is necessary to be able to proceed to flaw insertion (see below). After flaw insertion, detailed analysis is focused on the resulting partial proof.
The process of flaw insertion is shown in Figure 1. The inserted flaws consist of modifications of the PVS description of the algorithm. Once such a modification has been introduced, a number of lemmas become unproved, or even ill-defined. Thus, the modification has to be propagated throughout the PVS model and its proof, which involves formal reworking. The extent of formal reworking depends on the inserted flaw. Definitions and lemmas directly impacted by the algorithm's modification are first reworked. Proof tactics associated with these lemmas may also have to be adapted. Then, the modified lemmas may necessitate reworking of the general proof structure, yielding further modifications. The process ends when the reworked partial proof is deemed representative of a genuine attempt to prove the modified algorithm. Note that, for practical reasons, we did not consider flaws necessitating major changes in the proof structure.
We now present the results of this experimental approach applied to the GMP example.
4. High-level analysis
4.1. Assumptions and key properties
The studied GMP involves \( n \) processors (numbered 0, ..., \( n-1 \)) attached to a broadcast bus. Execution is synchronous, with a global time \( t \) increased by one at each step. At time \( t \), processor \( t \bmod n \) is the only one allowed to broadcast messages; time is thus divided into broadcast slots, each owned by one processor. Each processor maintains a local view of the membership set, i.e., the set of processors it considers non-faulty. Whenever its slot is reached, a processor remains silent if it is no longer contained in its own membership set (it has diagnosed itself as faulty). Otherwise, it sends a message including information on its local view of the membership. More precisely, it appends to the message a CRC checksum calculated over the message data and its membership set.
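As a minimal illustration of the slot ownership rule and of the CRC-based membership comparison, the following Python sketch may help; it is our own paraphrase (the function names and the use of `zlib.crc32` as a stand-in for TTP's CRC are arbitrary choices, not part of the studied protocol):

```python
import zlib

def slot_owner(t, n):
    """At global time t, processor t mod n owns the broadcast slot."""
    return t % n

def checksum(data, membership):
    """CRC computed over the message data and the sender's membership set."""
    payload = data + bytes(sorted(membership))
    return zlib.crc32(payload)

def views_agree(data, received_crc, own_membership):
    """A receiver compares the received CRC with one computed over the
    message data and its *own* membership view: a match means (with high
    probability) that sender and receiver hold the same membership set."""
    return checksum(data, own_membership) == received_crc

n = 4
assert slot_owner(5, n) == 1
crc = checksum(b"msg", {0, 1, 2, 3})
assert views_agree(b"msg", crc, {0, 1, 2, 3})
assert not views_agree(b"msg", crc, {0, 1, 2})
```

This also illustrates why a receiver can only probe for specific disagreements (by recomputing the CRC with selected entries of its own set changed), rather than read off the sender's set directly.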
Only two types of faults are considered:
- **Send faults.** The broadcaster either fails to produce activity on the bus, or performs an incorrect sending of the message. Since broadcasts are assumed consistent, none of the non-faulty processors receives a correct message.
- **Receive faults.** The affected processor fails to receive a broadcast.
Once a processor has become faulty (first manifestation of a fault), it may or may not succeed in sending or receiving messages in subsequent slots. Only one non-faulty processor can become faulty in any \( 2n \) consecutive steps, and there are always at least two non-faulty processors in the system.
Under these assumptions, the GMP has to fulfill three properties at any time:
- **Validity.** Non-faulty processors should have all the non-faulty processors in their membership sets, and at most one faulty processor in their sets (as it may take some time to diagnose the fault). Faulty processors should have either removed themselves from their sets, or have a subset of the non-faulty processors plus themselves in their sets.
- **Agreement.** All non-faulty processors should have the same membership sets.
- **Self-diagnosis.** A processor that becomes faulty should diagnose its fault and remove itself from its own membership set in less than \( 2n \) steps.
4.2. Presentation of the algorithm
A detailed explanation of the GMP behavior can be found in [8]. Here, we reproduce a description of the algorithm under the form of guarded commands (guard \( \rightarrow \) action), and give a general outline of it.
Figure 2 presents the 14 guarded commands defining the behavior of a processor \( p \) at slot \( t \), according to its mode at that slot (broadcaster, receiver). In receiving mode, the current broadcaster is processor \( b \). The guards are evaluated in top-down order, and processor \( p \) executes the action corresponding to the first guard that evaluates to true. The membership set of \( p \) at time \( t \) is denoted \( \text{mem}_p^t \).
In receiving mode, the arrival (or non-arrival) of a message determines the following input variables:

- **$arrives_p^t$.** A Boolean variable set to true if processor \( p \) correctly receives a message at step \( t \).

- **$null_p^t$.** A Boolean variable set to true if \( p \) did not detect any traffic on the bus at step \( t \).

- **$mem_b^t$.** The membership set sent by \( b \) (when $arrives_p^t$ is true).
Figure 1. Flaw insertion in the PVS specification and proof
**Broadcaster:**

(1) $acc_p > rej_p \land acc_p \geq 2$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \land prev_p^{t+1} = T \land acc_p^{t+1} = 1 \land rej_p^{t+1} = 0$

(2) otherwise $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{p\}$

**Receiver (current broadcaster $b$):**

(3) $p \not\in mem_p^t$ $\rightarrow$ no change

(4) $prev_p^t \land arrives_p^t \land mem_b^t = mem_p^t \cup \{p, b\}$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \land prev_p^{t+1} = F \land acc_p^{t+1} = acc_p^t + 1$

(5) $prev_p^t \land arrives_p^t \land mem_b^t = mem_p^t \cup \{b\} \setminus \{p\}$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{b\} \land prev_p^{t+1} = F \land double_p^{t+1} = T \land rej_p^{t+1} = rej_p^t + 1$

(6) $prev_p^t \land null_p^t$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{b\}$

(7) $prev_p^t$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{b\} \land rej_p^{t+1} = rej_p^t + 1$

(8) $double_p^t \land arrives_p^t \land mem_b^t = mem_p^t \cup \{p, b\} \setminus \{succ_p\}$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{succ_p\} \land double_p^{t+1} = F \land acc_p^{t+1} = acc_p^t + 1$

(9) $double_p^t \land arrives_p^t \land mem_b^t = mem_p^t \cup \{succ_p, b\} \setminus \{p\}$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{p\} \land double_p^{t+1} = F \land acc_p^{t+1} = acc_p^t + 1$

(10) $double_p^t \land null_p^t$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{b\}$

(11) $double_p^t$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{b\} \land rej_p^{t+1} = rej_p^t + 1$

(12) $arrives_p^t \land (mem_p^t = mem_b^t)$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \land acc_p^{t+1} = acc_p^t + 1$

(13) $null_p^t$ $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{b\}$

(14) otherwise $\rightarrow$ $mem_p^{t+1} = mem_p^t \setminus \{b\} \land rej_p^{t+1} = rej_p^t + 1$

Figure 2. Guarded commands of the GMP (from [8] and the PVS source code)
Note that the description of the algorithm makes the conceptual assumption that a message contains the broadcaster’s local view of the membership, while it actually contains a CRC checksum. This is legitimate because the receiver \( p \) can perform a CRC check on the received message data and its own membership view. If the two checksums are the same, \( p \) can conclude (with a certain probability) that the membership views are the same, as in guard (12). In case of mismatch, \( p \) cannot directly check the identity of processors about which \( p \) and \( b \) disagree. But \( p \) can try to reconstruct \( b \)’s membership set by performing CRC calculations with certain entries of its own membership set changed, as in guards (4), (5), (8), (9).
The commands in receiving mode can be classified into four categories, depending on the internal variables appearing in their guard:
- Command (3), guarded by \( p \) no longer belonging to its own membership set. Its state is then frozen.

- Commands (4) to (7), guarded by the $prev_p^t$ Boolean variable, capturing the fact that \( p \) considers it was the previous non-faulty broadcaster. $prev_p^t$ was set to true by command (1) in broadcasting mode, and is reset to false by commands (4) and (5).

- Commands (8) to (11), guarded by the $double_p^t$ Boolean variable, true whenever \( p \) considers that it may have suffered a send fault during its previous broadcast. $double_p^t$ is set to true by command (5). It is reset to false when \( p \) is able to conclude that it did not suffer a send fault (command (8)), or that it did suffer one (command (9)).

- Commands (12) to (14), corresponding to the standard case where \( p \) belongs to its own membership set, is not the previous broadcaster, and has no doubt about its previous broadcast.
Commands guarded by $prev_p^t$ and $double_p^t$ implement an implicit acknowledgement mechanism based on the broadcast membership information. Let us assume that \( p \) suffers a send fault at slot \( t \). At \( t + 1 \), it may correctly receive a message from processor \( b \), and observe that it is no longer included in \( b \)'s membership set (guard (5)). It then concludes that \( b \) did not receive its message, but does not know whether this comes from \( b \) having suffered a receive fault, or from itself having suffered a send fault. By default, \( b \) is excluded from \( p \)'s membership, but the \( double \) variable of \( p \) is set to true. When a second message confirms that \( p \)'s broadcast was incorrect, \( p \) removes itself from its own membership set.
However, this mechanism is not sufficient to ensure self-diagnosis in all cases. This is why \( p \) also uses two counters, \( \text{acc} \) and \( \text{rej} \), representing the number of messages it has accepted or rejected since its previous broadcast. Generally speaking, \( \text{acc} \) is increased by one if \( p \) correctly receives a message and agrees with the membership view of the broadcaster. Counter \( \text{rej} \) is increased by one if \( p \) receives an incorrect message, or receives a correct one but disagrees with the broadcaster's membership view. The values of the counters are checked periodically, at each broadcast slot of \( p \) (guards (1) and (2)). They allow \( p \) to diagnose its fault if it has rejected at least as many messages as it has accepted since its last broadcast, or if it has accepted none (\( \text{acc} \) was reset to 1 at the slot of its previous broadcast).
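The periodic counter check performed at \( p \)'s broadcast slot (commands (1) and (2)) can be sketched as follows. This is our own Python paraphrase, not the authors' C prototype; the guard is the one of command (1) in the correct version of the algorithm (Section 6 discusses its second conjunct):

```python
def broadcast_slot_check(acc, rej):
    """Commands (1)/(2): decide whether processor p keeps itself in its
    membership set at its broadcast slot.

    p stays in (broadcasts, and resets acc=1, rej=0) iff it accepted more
    messages than it rejected AND accepted at least one message since its
    previous broadcast (acc was reset to 1 there, so having accepted none
    means acc == 1, i.e. acc >= 2 fails)."""
    if acc > rej and acc >= 2:
        return True, 1, 0        # command (1): broadcast, reset counters
    return False, acc, rej       # command (2): self-diagnosis, p removes itself

ok, acc, rej = broadcast_slot_check(acc=3, rej=1)
assert ok and (acc, rej) == (1, 0)
ok, _, _ = broadcast_slot_check(acc=1, rej=0)   # accepted none since reset
assert not ok
```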
4.3. Results of the high-level analysis
At this stage of high-level analysis, we are able to define the test input domain, and the test oracle checks. We are also able to initiate the development of a prototype of the algorithm.
The detailed description of the algorithm makes it straightforward to implement a GMP prototype. The C code we developed is a quasi-literal transcription of the PVS code corresponding to Figure 2.
The test oracle is specified to check the validity, agreement and self-diagnosis properties at each step (see Section 4.1). The implementation of the checks is closely based on the PVS representation of these invariant properties. It requires that all local membership sets be observed at each step.
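The per-step oracle checks can be sketched as follows; this is our own Python paraphrase of the validity and agreement checks (the self-diagnosis check, which needs the fault occurrence times and the \( 2n \) bound, is omitted from the sketch):

```python
def check_properties(views, faulty):
    """Oracle checks applied after each step (our sketch, not the PVS oracle).
    views[p] : membership set currently held by processor p
    faulty   : set of processors that are faulty at this step"""
    nonfaulty = set(views) - faulty
    # Validity: non-faulty processors see all non-faulty ones and at most one
    # faulty one; faulty processors have removed themselves, or hold a subset
    # of the non-faulty processors plus themselves.
    for p in nonfaulty:
        if not nonfaulty <= views[p] or len(views[p] & faulty) > 1:
            return False
    for p in faulty:
        if p in views[p] and not views[p] <= nonfaulty | {p}:
            return False
    # Agreement: all non-faulty processors hold the same membership set.
    return len({frozenset(views[p]) for p in nonfaulty}) <= 1

views = {0: {0, 1, 2}, 1: {0, 1, 2}, 2: {0, 1, 2}}
assert check_properties(views, faulty=set())
assert check_properties(views, faulty={2})   # 2 not yet diagnosed: allowed
views[0] = {0, 2}                            # 0 dropped the non-faulty 1
assert not check_properties(views, faulty=set())
```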
The definition of the test input domain is less straightforward. The identification of the GMP assumptions (briefly presented in Section 4.1) required a careful analysis of all axioms extracted from the PVS specification. We found that the \( \text{null}_{p}^{t} \) inputs were under-specified, and decided to place restrictions on the situations allowed by the axioms. As an example, from the PVS axioms, nothing prevents $arrives_p^t$ and $null_p^t$ from being both true at the same time. This seems meaningless, as \( p \) cannot both receive a correct message and detect no traffic on the bus, so this situation was not allowed in our definition of the input domain. For non-faulty receivers, or for faulty ones not manifesting their fault at step \( t \), input \( \text{null}_{p}^{t} \) is true if and only if no traffic is generated on the bus. This may correspond to one of the following cases: (1) the broadcaster decides to remain silent (because it is no longer in its own membership set); (2) the broadcaster manifests a fault and fails to send anything on the bus. For faulty receivers manifesting a fault at step \( t \), \( \text{null}_{p}^{t} \) may take any value: whatever the broadcaster's behavior, the faulty receiver either receives nothing, or receives something that it cannot interpret as a correct message. Note that we do not exclude the situation where the faulty receiver wrongly detects activity on the bus while the broadcaster remains silent. This may be a debatable decision, but the situation is in any case allowed by the axioms.
\(^{2}\)The axioms ensure that it is impossible to correctly receive a message that was not correctly sent.
We chose to define a test input sequence by the number \( n \) of processors, with \( n \geq 2 \) (at least two non-faulty processors), and by a list of faults affecting the system. A fault is characterized by its occurrence time, its type and the affected processor. Following the previous discussion, Figure 3 tabulates the four fault types that may affect a processor \( p \) in our test environment. The fault type must be consistent with the occurrence slot and affected processor (e.g., a send fault affects the broadcaster at that slot). Moreover, let us recall that there are constraints on the maximum number of faults \( (n - 2) \) and the temporal dispersion of faults affecting processors non-faulty so far \( (2n \text{ slots apart}) \).
At this stage, we were able to implement a crude random profile generating valid test sequences for systems from 3 to 20 processors, a range targeted by the Time-Triggered Architecture. The crude profile was implemented not only for experimental comparison with more designed profiles, but also for another reason: it allowed us to ensure that we were able to extract, from the PVS axioms, a constructive definition of the test input domain (under the form of a random generation function).
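The crude profile can be sketched as follows. This is our own reconstruction in Python, not the authors' implementation; the 0.3 firing rate and the function names are arbitrary choices, while the enforced constraints (fault type consistent with the slot, at most \( n-2 \) faults, first manifestations \( 2n \) slots apart) come from the assumptions above:

```python
import random

SEND_FAULTS = ("no_msg", "not_no_msg")   # affect the slot's broadcaster
RECV_FAULTS = ("null", "not_null")       # affect some receiver

def crude_profile(n, length, seed=None):
    """Generate a valid random fault list (a sketch of the crude profile).
    Each fault is (occurrence time, type, affected processor)."""
    rng = random.Random(seed)
    faults, faulty, last = [], set(), -2 * n
    for t in range(length):
        # at most n-2 faults, and first manifestations at least 2n slots apart
        if len(faulty) < n - 2 and t - last >= 2 * n and rng.random() < 0.3:
            p = rng.choice([q for q in range(n) if q not in faulty])
            # a send fault must affect the broadcaster of slot t
            ftype = rng.choice(SEND_FAULTS if p == t % n else RECV_FAULTS)
            faults.append((t, ftype, p))
            faulty.add(p)
            last = t
    return faults

for s in range(10):
    fs = crude_profile(5, 100, seed=s)
    assert len(fs) <= 3                                        # n - 2 faults
    times = [t for t, _, _ in fs]
    assert all(b - a >= 10 for a, b in zip(times, times[1:]))  # 2n apart
    for t, ftype, p in fs:
        assert (ftype in SEND_FAULTS) == (p == t % 5)          # consistency
```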
5. Detailed analysis of the GMP proof
5.1. The disjunctive invariants proof method
The GMP proof aims at establishing that the validity, agreement and self-diagnosis properties hold at any time. Usually, such invariant properties are verified by an induction proof. Since the properties to be proved are generally not inductive, they first have to be strengthened by conjoining additional properties, until an inductive invariant is obtained. This classical approach proved unsuccessful when applied to a related, and much simpler, GMP algorithm [2] (see Section 2). Proof attempts were defeated by the number and complexity of the auxiliary invariants, and by case explosion.
In [11], J. Rushby proposed a new method to tackle the problem. The principle is to strengthen the property of concern into a disjunction of "configurations" that can easily be proved to be inductive. The set of configurations and the transitions among them have a diagrammatic representation that conveys insight into the operation of the algorithm. The complete proof of the GMP involves the following proof steps:
| type id | manifestation of the fault |
|---|---|
| no_msg | the other processors $q$ receive: $\neg arrives_q^t \land null_q^t$ |
| not_no_msg | the other processors $q$ receive: $\neg arrives_q^t \land \neg null_q^t$ |
| null | whatever the broadcaster's behavior, $p$ receives: $\neg arrives_p^t \land null_p^t$ |
| not_null | whatever the broadcaster's behavior, $p$ receives: $\neg arrives_p^t \land \neg null_p^t$ |

Figure 3. Fault model for the test experiments
- prove that the validity and agreement properties hold in every configuration.
- prove the transition lemmas. For each transition, it is proved that starting from the source configuration at time $t$, if the transition condition holds, then the system will be in the sink configuration at time $t+1$.
- prove that there is no other configuration the system can get into. It is first proved that the stable configuration initially holds. Then, for every configuration, it is proved that the specification of transitions is complete.
- prove the self-diagnosis property. This is done by first proving that the system remains outside the stable configuration for at most $2n$ slots.
For the GMP version presented in [8], all these proof steps were successfully discharged under the PVS environment. They correspond to 348 proof obligations.
6. Flaw insertion
For experimentation purposes, we need to insert a flaw in the algorithm and obtain a partial proof.
6.1. Choice of the flaw to be inserted
A total of 10 candidate modifications of the algorithm were considered. We first considered a few Mutation-like [1] modifications, which consist of simple syntactic changes in the algorithm. Then we identified other candidate modifications by searching for discrepancies between: 1) the PVS source code of the algorithm and its informal presentation in [8]; 2) the current version of the algorithm and previous versions presented in [7, 10]; 3) the studied GMP and a related GMP algorithm, proposed in [2] (attempts to prove the latter algorithm led the disjunctive invariants proof method to be developed, as mentioned in Sections 2 and 5.1).
For our purpose, a candidate modification should be retained only if it possesses some desired characteristics. The modification should not yield a crudely incorrect algorithm. Still, it should correspond to a flaw: we are not interested in modifications preserving correctness with respect to the three properties of validity, agreement and self-diagnosis. Also, obtaining a realistic partial proof should not necessitate major changes in the original proof, so as to keep the effort reasonable.
Determining whether a candidate modification possesses the desired characteristics is obviously a difficult problem. Crudely incorrect algorithms can be identified by test experiments under the blind random profile developed at the end of the high-level analysis (see Section 4.3). Such preliminary experiments allowed us to eliminate five candidate modifications yielding a high failure rate. For the other modifications, yielding no failure (4 modifications) or few failures (1 modification), further analysis of their characteristics had to involve formal reworking.
It turned out that we obtained a complete proof for two of the modifications for which no failure was observed. One of them corresponds to a mutation affecting command (7) of the GMP: in the action part, the incrementation of the $rej$ counter is suppressed (see the original command in Figure 2). This example illustrates the difficulty of understanding the semantic impact of a modification. The fact that the algorithm still works is far from intuitive.
We did not manage to complete the proof of the two other modifications yielding no failure. We were not able to determine whether these modifications preserve the three required properties, or correspond to subtle flaws. In one case, the partial proof we obtained after some formal reworking was not deemed representative of a genuine attempt to prove the modified algorithm. Our opinion is that a major change in the proof structure would be required to properly account for the algorithm's modification. As a result, this modification was not retained. For the remaining modification yielding no failure, as well as for the one inducing a low failure rate under the crude profile, we managed to obtain meaningful partial proofs at the expense of a reasonable effort.
For a first experimentation, we decided to retain the modification known to introduce a flaw (i.e., the one which failed under the crude profile). The flaw and the obtained partial proof are presented in the next section.
6.2. Retained modification of the GMP
The inserted flaw induces a low probability of failure under the crude random profile (0.6%). It consists in weakening the guard of Command (1) of the algorithm by suppressing its second conjunct:

$$acc_p > rej_p \land acc_p \geq 2$$
The second part of the guard ($acc_p \geq 2$) corresponds to one of the modifications introduced between the early PVS versions of the algorithm [7, 10] and the most recent one [8]. The author of the proof identified this modification as necessary to avoid a failure in a specific scenario. This scenario corresponds to a specific activation of the path $stable \rightarrow latent \rightarrow missed-rcv-x-not-ack \rightarrow stable$. The path is triggered by a receive fault on the most recent broadcaster $x$, and the specific activation occurs when $x$ fails to detect any communication at all during the next $n-1$ slots (in our terminology, it suffers $n-1$ successive $null$ receive faults).
In the correct version of the GMP, the last transition of the path is taken as $x$ becomes broadcaster again. It executes command (2), because the guard of (1) evaluates to false. In the flawed version, command (1) is executed, processor $x$ does not diagnose its fault, and the system behavior goes outside the configuration diagram. The self-diagnosis property is violated.
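The revealing scenario can be replayed on the counters alone. The sketch below is our own Python paraphrase: after $x$'s broadcast, command (1) resets $acc = 1$ and $rej = 0$; the $n-1$ subsequent $null$ receive faults make $x$ execute only commands (6), (10) or (13), which update neither counter, so $x$ reaches its next slot with $acc = 1$ and $rej = 0$:

```python
def self_diagnoses(acc, rej, flawed):
    """Evaluate the guard of command (1); if it fails, command (2) removes
    the processor from its own membership set (self-diagnosis)."""
    guard = (acc > rej) if flawed else (acc > rej and acc >= 2)
    return not guard

# Counter state of x at its next broadcast slot after the scenario above.
acc, rej = 1, 0
assert self_diagnoses(acc, rej, flawed=False)      # correct GMP: acc >= 2 fails
assert not self_diagnoses(acc, rej, flawed=True)   # flawed GMP: 1 > 0 holds
```

In the flawed version, $x$ therefore keeps itself in its membership set, and the self-diagnosis property is violated exactly as described above.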
In order to obtain a realistic partial proof of the flawed algorithm, it is not sufficient to modify the PVS description of the algorithm. The modification has to be propagated. It has to be accounted for in other parts of the PVS specification, as well as in their proofs. A first work on definitions and lemmas allowed us to reconstruct the original proof, with the exception of three pending lemmas. The three lemmas correspond to the proofs of the three following transitions:
- $missed-rcv-x-not-ack \rightarrow stable$
- $excluded-doubt-no-2nd-succ \rightarrow stable$
- $pending-selfdiag-no-1st-succ \rightarrow stable$
After analysis, we concluded that the second transition could be proved at the expense of a minor modification of the configuration diagram. Indeed, after having strengthened the predicates defining configurations excluded, excluded-doubt, excluded-doubt-no-2nd-succ, and after having reworked all transition proofs linked to these configurations, we managed to complete the proof of excluded-doubt-no-2nd-succ $\rightarrow$ stable.
At this stage, there are two pending transitions in the GMP proof for which no obvious solution can be found. This is not surprising for the missed-rcv-x-not-ack $\rightarrow$ stable transition, as the known revealing scenario is related to it. But the proof also fails for the other transition, for which we do not have any counter-example.
In our opinion, this partial proof can be seen as a realistic attempt to prove the modified algorithm. Thus, it will be used as a basis to guide the design of testing.
7. Proof-guided testing
After a brief discussion on the principle of proof-guided testing (Section 7.1), we give experimental results corresponding to the two levels of analysis of the proof: analysis at the lemma level (Section 7.2), and analysis at the sequent level (Section 7.3).
7.1. Principle
Our approach relies on the assumption that the identification of the pending parts of the proof should supply useful information for the design of testing. More precisely, the aim is to trigger a violation of the required GMP properties, and we will try to achieve this by means of a falsification of the pending parts of the proof.
Of course, falsifying the pending parts of a proof is not necessarily a practical objective for testing. Undecidability problems, as well as the introduction of auxiliary formulas as proof artifacts, may result in pending parts that are neither controllable nor observable. In the worst case, when no
constructive information can be extracted from the proof, we are in the same situation as at the end of the high level analysis, that is:
- Violation of any one of the required properties is observable (by means of the implemented oracle checks).
- Violation of the properties is not specifically controllable. However, we are able to exhibit an input generation function such that, should a violation be possible, revealing inputs would have a non-null probability of being generated (implementation of the crude random profile).
In practice, the design of testing is improved by identifying subdomains that can safely be removed from the test input domain, and by trying to define a meaningful distribution of probabilities over the remaining domain, based on the proof structure. This calls for understanding the proof structure, and for being able to establish a link between proof parts and the operational behavior of the algorithm.
The analysis can be conducted at different levels. One may simply consider the fact that two transition lemmas are pending. One may also refine the analysis and consider the details of the proof trees attempting to discharge each transition lemma.
Conducting analysis at the lemma level does not require high expertise\(^3\). It is sufficient to be able to read the PVS specification language, so as to understand the global proof structure (i.e. understand the definition of the configuration diagram). The extraction of constructive information is then facilitated by the fact that the configuration diagram provides an operational view of the algorithm's behavior: testing can be directed toward the activation of the two pending transition lemmas.
Conducting analysis at the sequent level is more demanding. It requires some expertise in the PVS prover, in order to understand the proof trees and analyze their pending sequents. Establishing a link between the sequents and pieces of operational behavior is also expected to be much more difficult than at the lemma level.
Both levels of analysis were considered for deriving test sets.
7.2. Test criterion at the lemma level
At the lemma level, the retained test criterion is the coverage of all paths stable \(\rightarrow \ldots \rightarrow\) stable that may trigger the activation of unproved transitions. Note that there are 20 feasible paths in the complete diagram; 14 of them include one of the target transitions.
\(^3\)In our case, the flaw insertion process necessitated the reworking of the proof. But in the case of a genuine partial proof, the tester would simply use the raw results of the proof in terms of pending lemmas.
We designed a sampling profile that makes the relevant paths roughly equally likely. Actually, the profile is only an approximation of an equiprobable one. This is so because, in the PVS specification, the transition conditions cannot always be easily linked to input cases: they also depend on internal variables of the model. Hence, we only have an imperfect control of path coverage. Note that a few generated sequences may fail to activate the target transitions; however, the definition of the profile ensures that no test sequence covering the paths of interest has a null probability of selection.
The adequacy of the retained criterion was assessed by testing the flawed algorithm with a large (50,000) sample of sequences generated under this profile. We obtained the following results:
- 0.9% of the generated sequences yielded a failure of the algorithm.
- As in the crude random profile, all failures corresponded to a violation of the self-diagnosis property.
Let us recall that under the crude random profile, the failure rate is 0.6%.
Two conclusions can be drawn from the analysis of the results:
- Strictly speaking, the information extracted from the proof is not irrelevant for revealing the flaw. Whatever the profile (including the crude one), all observed failures correspond to sequences that do activate the target transitions.
- While not irrelevant, the information is still quite imperfect. In particular, deterministic selection of one test sequence for each of the 14 paths of interest would yield a low probability of revealing the flaw. As regards statistical testing, the designed profile supplies only a modest improvement over the blind one. A sample of 345 sequences is required to get a 0.95 probability of revealing the flaw.
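The quoted sample size follows from the usual statistical-testing bound: with per-sequence failure probability $p$, $N$ independent sequences reveal the flaw with probability $1-(1-p)^N$, so $N \geq \ln(0.05)/\ln(1-p)$ for a 0.95 revealing probability. As a quick check (our own computation; with the rounded 0.9% rate it yields a slightly smaller value than the reported 345, which was presumably computed from the unrounded observed rate):

```python
import math

def sample_size(p, target=0.95):
    """Smallest N such that 1 - (1 - p)**N >= target: the number of
    independent random sequences needed to reveal the flaw with the
    target probability, given a per-sequence failure rate p."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

assert sample_size(0.009) == 332   # lemma-level profile, rounded 0.9% rate
assert sample_size(0.987) == 1     # sequent-level profile of Section 7.3
```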
We now consider the detailed analysis of the proof trees to refine the test criterion.
### 7.3. Test criterion at the proof tree level
Both transition proofs face a similar problem. In the proof trees, there are undischarged sequents corresponding to the proof of goal \(acc \leq rcv\) under hypothesis \(acc = 1\). The definitions of configurations missed-rcv-x-not-ack and pending-selfdiag-no-1st-success ensure that \(acc = 1\) holds. Hence, we should try to falsify the pending sequents with \(rcv = 0\).
This objective has to be put in the form of operational test sequences. We managed to do this by referring to the algorithm’s specification. We know that the \(rcv\) counter was reset to zero by command (1) at the last broadcast of $x$. It is thus required that, since its last broadcast, $x$ has never executed a command incrementing its \(rcv\) counter. Looking at the algorithm, we can conclude that:
- $x$ has never executed commands (5), (7), (11), (14) since its last broadcast.
- Since $acc = 1$ in the target configurations, it also has never executed commands (4), (8), (9), (12).
- It also has never executed command (3), because in the target configurations $x$ is specified to be included in its own membership set.
Hence, the only commands $x$ has executed are commands (6), (10) or (13), which means that $x$ has never detected activity on the bus since its last broadcast (the $null$ input is true at each step).
The selection criterion derived from the lemma analysis is then refined, by removing the paths that do not conform to this requirement, and by restricting the input subspaces of the remaining paths: they will be covered by test sequences with a suffix including only $null$ receive faults (the size of the suffix depends on the selected path).
Under this profile, a sample of 50,000 random sequences was generated. It supplies the following results:
- The failure rate is now 98.7%.
- As previously, all failures correspond to a violation of the self-diagnosis property.
- The non-revealing sequences correspond to the few sequences failing to activate the paths of interest, and result from the imperfect control we have over path coverage (as with the previous profile).
The detailed analysis at the sequent level allowed us to focus testing on revealing subdomains. The selection criterion is now perfectly connected to the flaw residing in the algorithm. Note that revealing sequences for transition $\text{missed-rcv-x-not-ack} \rightarrow \text{stable}$ are similar to the scenario already identified by H. Pfeifer (see Section 6.2). To the best of our knowledge, fault scenarios for transition $\text{pending-selfdiag-no-1st-succ} \rightarrow \text{stable}$ are new. These scenarios are longer than the previous one, and we claim that they would have been difficult to invent by hand analysis. In [8], the author of the proof mentioned that the GMP actually removes faulty processors more quickly than the proved bound ($2n - 1$ steps). He conjectured that the actual bound should roughly be one and a half rounds (a round is $n$ steps). But for some of the revealing test sequences we generated, self-diagnosis requires $2n - 2$ steps on the correct version of the algorithm.
8. Conclusion
The results show that proof-guided testing can be very effective for revealing flaws in the case of a partial proof. It can be a pragmatic approach for exhibiting counter-examples in cases where model checking would be difficult to apply (as for the GMP example; see the proof vs. model-checking discussion in [11]). In this way, the effort that was put into the proof development is not lost, and testing is directed toward revealing flaws that were not caught by the partial proof.
However, a deep analysis of the proof may be required. From our experience, detailed analysis at the sequent level represents a significant effort. We recommend that selection criteria based on lemma analysis be first tried. The sequent analysis should be performed only if large samples of random sequences fail to reveal a flaw. In this case, in order to be able to refine the test criterion, it is necessary either to have detailed documentation of the proof (as we had in [8]) or to work in close collaboration with the persons having developed the proof.
We are aware that our results need to be consolidated by further experimentation. We are currently studying other examples of modifications of the GMP algorithm (yielding flaws). But more importantly, there will be a need for experimenting with other examples of proof approaches for realistic FT algorithms.
The proof approach used for the GMP, based on a configuration diagram, turned out to be very adequate from the perspective of testing. Since the proof structure is based on an operational view of the algorithm’s behavior, it was possible to establish a link between pending parts of the proof and functional cases for the algorithm. For other proof approaches, e.g. more traditional proofs through invariant strengthening, establishing such a link might be more difficult.
Acknowledgement
We would like to thank Holger Pfeifer very much. He kindly agreed to send us his PVS source files, as well as the chapter of his thesis [8] describing his formal specification and proof of the GMP.
References
Syntactic sugar, and computing every function
“[In 1951] I had a running compiler and nobody would touch it because, they carefully told me, computers could only do arithmetic; they could not do programs.”, Grace Murray Hopper, 1986.
“Syntactic sugar causes cancer of the semicolon.”, Alan Perlis, 1982.
The computational models we considered thus far are as “bare bones” as they come. For example, our NAND-CIRC “programming language” has only the single operation $\text{foo} = \text{NAND}(\text{bar}, \text{blah})$. In this chapter we will see that these simple models are actually equivalent to more sophisticated ones. The key observation is that we can implement more complex features using our basic building blocks, and then use these new features themselves as building blocks for even more sophisticated features. This is known as “syntactic sugar” in the field of programming language design since we are not modifying the underlying programming model itself, but rather we merely implement new features by syntactically transforming a program that uses such features into one that doesn’t.
This chapter provides a “toolkit” that can be used to show that many functions can be computed by NAND-CIRC programs, and hence also by Boolean circuits. We will also use this toolkit to prove a fundamental theorem: every finite function $f : \{0, 1\}^n \rightarrow \{0, 1\}^m$ can be computed by a Boolean circuit, see Theorem 4.13 below. While the syntactic sugar toolkit is important in its own right, Theorem 4.13 can also be proven directly without using this toolkit. We present this alternative proof in Section 4.5. See Fig. 4.1 for an outline of the results of this chapter.
This chapter: A non-mathy overview
In this chapter, we will see our first major result: every finite function can be computed by some Boolean circuit (see Theorem 4.13 and Big Idea 5). This is sometimes known as
Figure 4.1: An outline of the results of this chapter. In Section 4.1 we give a toolkit of “syntactic sugar” transformations showing how to implement features such as programmer-defined functions and conditional statements in NAND-CIRC. We use these tools in Section 4.3 to give a NAND-CIRC program (or alternatively a Boolean circuit) to compute the \textsc{lookup} function. We then build on this result to show in Section 4.4 that NAND-CIRC programs (or equivalently, Boolean circuits) can compute every finite function. An alternative direct proof of the same result is given in Section 4.5.
Despite being an important result, Theorem 4.13 is actually not that hard to prove. Section 4.5 presents a relatively simple direct proof of this result. However, in Section 4.1 and Section 4.3 we derive this result using the concept of “syntactic sugar” (see Big Idea 4). This is an important concept for programming languages theory and practice. The idea behind “syntactic sugar” is that we can extend a programming language by implementing advanced features from its basic components. For example, we can take the AON-CIRC and NAND-CIRC programming languages we saw in Chapter 3, and extend them to achieve features such as user-defined functions (e.g., \texttt{def Foo(...)}), conditional statements (e.g., \texttt{if blah ...}), and more. Once we have these features, it is not that hard to show that we can take the “truth table” (table of all inputs and outputs) of any function, and use that to create an AON-CIRC or NAND-CIRC program that maps each input to its corresponding output.
We will also get our first glimpse of \textit{quantitative measures} in this chapter. While Theorem 4.13 tells us that every function can be computed by \textit{some} circuit, the number of gates in this circuit can be exponentially large. (We are not using here “exponentially” as some colloquial term for “very very big” but in a very precise mathematical sense, which also happens to coincide with being very very big.) It turns out that \textit{some functions} (for example, integer addition and multiplication) can be computed by much smaller circuits. We will explore this issue of “gate complexity” more deeply in Chapter 5 and following chapters.
### 4.1 SOME EXAMPLES OF SYNTACTIC SUGAR
We now present some examples of “syntactic sugar” transformations that we can use in constructing straightline programs or circuits. We focus on the straight-line programming language view of our computational models, and specifically (for the sake of concreteness) on the NAND-CIRC programming language. This is convenient because many of the syntactic sugar transformations we present are easiest to think about in terms of applying “search and replace” operations to the source code of a program. However, by Theorem 3.19, all of our results hold equally well for circuits, whether ones using NAND gates or Boolean circuits that use the AND, OR, and NOT operations. Enumerating the examples of such syntactic sugar transformations can be a little tedious, but we do it for two reasons:
1. To convince you that despite their seeming simplicity and limitations, simple models such as Boolean circuits or the NAND-CIRC programming language are actually quite powerful.
2. So you can realize how lucky you are to be taking a theory of computation course and not a compilers course...
#### 4.1.1 User-defined procedures
One staple of almost any programming language is the ability to define and then execute procedures or subroutines. (These are often known as functions in some programming languages, but we prefer the name procedures to avoid confusion with the function that a program computes.) The NAND-CIRC programming language does not have this mechanism built in. However, we can achieve the same effect using the time-honored technique of “copy and paste”. Specifically, we can replace code which defines a procedure such as
```python
def Proc(a, b):
    proc_code
    return c

some_code
f = Proc(d, e)
some_more_code
```
with the following code where we “paste” the code of Proc
```python
some_code
proc_code'
some_more_code
```
and where proc_code' is obtained by replacing all occurrences of a with d, b with e, and c with f. When doing that we will need to ensure that all other variables appearing in proc_code' don’t interfere with other variables. We can always do so by renaming variables to new names that were not used before. The above reasoning leads to the proof of the following theorem:
**Theorem 4.1** — Procedure definition syntactic sugar. Let NAND-CIRC-PROC be the programming language NAND-CIRC augmented with the syntax above for defining procedures. Then for every NAND-CIRC-PROC program $P$, there exists a standard (i.e., “sugar-free”) NAND-CIRC program $P'$ that computes the same function as $P$.
**Remark 4.2** — No recursive procedure. NAND-CIRC-PROC only allows non-recursive procedures. In particular, the code of a procedure Proc cannot call Proc but only use procedures that were defined before it. Without this restriction, the above “search and replace” procedure might never terminate and Theorem 4.1 would not be true.
**Theorem 4.1** can be proven using the transformation above, but since the formal proof is somewhat long and tedious, we omit it here.
**Example 4.3** — Computing Majority from NAND using syntactic sugar. Procedures allow us to express NAND-CIRC programs much more cleanly and succinctly. For example, because we can compute AND, OR, and NOT using NANDs, we can compute the *Majority* function as follows:
```python
def NOT(a):
    return NAND(a,a)

def AND(a,b):
    temp = NAND(a,b)
    return NOT(temp)

def OR(a,b):
    temp1 = NOT(a)
    temp2 = NOT(b)
    return NAND(temp1,temp2)

def MAJ(a,b,c):
    and1 = AND(a,b)
    and2 = AND(a,c)
    and3 = AND(b,c)
    or1 = OR(and1,and2)
    return OR(or1,and3)

print(MAJ(0,1,1))
# 1
```
Fig. 4.2 presents the “sugar-free” NAND-CIRC program (and the corresponding circuit) that is obtained by “expanding out” this program, replacing the calls to procedures with their definitions.
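As a quick sanity check, the expansion can be simulated directly in Python; the one-line NAND helper below is our own stand-in for the NAND-CIRC operation, and we check the resulting MAJ against the majority of the three input bits on all eight inputs:

```python
def NAND(a, b):
    """NAND on single bits (our own Python stand-in)."""
    return 1 - a * b

def NOT(a): return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b): return NAND(NOT(a), NOT(b))

def MAJ(a, b, c):
    return OR(OR(AND(a, b), AND(a, c)), AND(b, c))

# exhaustive check over all 8 inputs
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert MAJ(a, b, c) == (1 if a + b + c >= 2 else 0)
```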
Big Idea 4 Once we show that a computational model $X$ is equivalent to a model that has feature $Y$, we can assume we have $Y$ when showing that a function $f$ is computable by $X$.
Remark 4.4 — Counting lines. While we can use syntactic sugar to present NAND-CIRC programs in more readable ways, we did not change the definition of the language itself. Therefore, whenever we say that some function $f$ has an $s$-line NAND-CIRC program we mean a standard “sugar-free” NAND-CIRC program, where all syntactic sugar has been expanded out. For example, the program of Example 4.3 is a 12-line program for computing the MAJ function, even though it can be written in fewer lines using NAND-CIRC-PROC.
4.1.2 Proof by Python (optional)
We can write a Python program that implements the proof of Theorem 4.1. This is a Python program that takes a NAND-CIRC-PROC program $P$ that includes procedure definitions and uses simple “search and replace” to transform $P$ into a standard (i.e., “sugar-free”) NAND-CIRC program $P'$ that computes the same function as $P$ without using any procedures. The idea is simple: if the program $P$ contains a definition of a procedure $\text{Proc}$ of two arguments $x$ and $y$, then whenever we see a line of the form $\text{foo} = \text{Proc}(\text{bar}, \text{blah})$, we can replace this line by:
1. The body of the procedure $\text{Proc}$ (replacing all occurrences of $x$ and $y$ with $\text{bar}$ and $\text{blah}$ respectively).
2. A line $\text{foo} = \text{exp}$, where $\text{exp}$ is the expression following the return statement in the definition of the procedure $\text{Proc}$.
To make this more robust we add a prefix to the internal variables used by $\text{Proc}$ to ensure they don’t conflict with the variables of $P$; for simplicity we ignore this issue in the code below though it can be easily added.
The code of the Python function desugar below achieves such a transformation.
Fig. 4.2 shows the result of applying desugar to the program of Example 4.3 that uses syntactic sugar to compute the Majority function. Specifically, we first apply desugar to remove usage of the OR function, then apply it to remove usage of the AND function, and finally apply it a third time to remove usage of the NOT function.
Remark 4.5 — Parsing function definitions (optional). The function desugar in Fig. 4.3 assumes that it is given the procedure already split up into its name, arguments, and body. It is not crucial for our purposes to describe precisely how to scan a definition and split it up into these components, but in case you are curious, it can be achieved in Python via the following code:
```python
import re

def parse_func(code):
    """Parse a function definition into name, arguments and body"""
    lines = [l.strip() for l in code.split('\n')]
    regexp = r'def\s+([a-zA-Z_0-9]+)\(([a-zA-Z0-9_,\s]+)\)\s*:\s*'
    m = re.match(regexp, lines[0])
    return m.group(1), m.group(2).split(','), '\n'.join(lines[1:])
```
Figure 4.3: Python code for transforming NAND-CIRC-PROC programs into standard sugar-free NAND-CIRC programs.
```python
import re

def desugar(code, func_name, func_args, func_body):
    """
    Replaces all occurrences of
        foo = func_name(func_args)
    with
        func_body[x->a,y->b]
        foo = [result returned in func_body]
    """
    # Uses Python regular expressions to simplify the search and replace,
    # see https://docs.python.org/3/library/re.html and Chapter 9 of the book

    # regular expression for capturing a list of variable names separated by commas
    arglist = ",".join([r"([a-zA-Z0-9_\[\]]+)" for i in range(len(func_args))])
    # regular expression for capturing a statement of the form
    # "variable = func_name(arguments)"
    regexp = fr'([a-zA-Z0-9_\[\]]+)\s*=\s*{func_name}\({arglist}\)\s*$'
    while True:
        m = re.search(regexp, code, re.MULTILINE)
        if not m: break
        newcode = func_body
        # replace function arguments by the variables from the function invocation
        for i in range(len(func_args)):
            newcode = newcode.replace(func_args[i], m.group(i+2))
        # Splice the new code inside
        newcode = newcode.replace('return', m.group(1) + " = ")
        code = code[:m.start()] + newcode + code[m.end()+1:]
    return code
```
4.1.3 Conditional statements
Another sorely missing feature in NAND-CIRC is a conditional statement such as the if/then constructs that are found in many programming languages. However, using procedures, we can obtain an ersatz if/then construct. First we can compute the function $\text{IF} : \{0, 1\}^3 \rightarrow \{0, 1\}$ such that $\text{IF}(a, b, c)$ equals $b$ if $a = 1$ and $c$ if $a = 0$.
Before reading onward, try to see how you could compute the IF function using NANDs. Once you do that, see how you can use that to emulate if/then types of constructs.
The IF function can be implemented from NANDs as follows (see Exercise 4.2):
```python
def IF(cond, a, b):
    notcond = NAND(cond, cond)
    temp = NAND(b, notcond)
    temp1 = NAND(a, cond)
    return NAND(temp, temp1)
```
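This implementation can be checked exhaustively over all eight inputs; the NAND helper below is our own Python stand-in for the NAND-CIRC operation:

```python
def NAND(a, b):
    return 1 - a * b  # NAND on single bits

def IF(cond, a, b):
    notcond = NAND(cond, cond)
    temp = NAND(b, notcond)
    temp1 = NAND(a, cond)
    return NAND(temp, temp1)

# IF(cond, a, b) should equal a when cond == 1 and b when cond == 0
for cond in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert IF(cond, a, b) == (a if cond == 1 else b)
```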
The IF function is also known as a multiplexing function, since $\text{cond}$ can be thought of as a switch that controls whether the output is connected to $a$ or $b$. Once we have a procedure for computing the IF function, we can implement conditionals in NAND. The idea is that we replace code of the form
```python
if (condition): assign blah to variable foo
```
with code of the form
```python
foo = IF(condition, blah, foo)
```
that assigns to $foo$ its old value when $condition$ equals $0$, and assigns to $foo$ the value of $blah$ otherwise. More generally we can replace code of the form
```python
if (cond):
    a = ...
    b = ...
    c = ...
```
with code of the form
```python
temp_a = ...
temp_b = ...
temp_c = ...
a = IF(cond, temp_a, a)
b = IF(cond, temp_b, b)
c = IF(cond, temp_c, c)
```
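To see that this transformation preserves meaning, we can compare a sugared conditional with its desugared form on all inputs. This is our own sketch: the constant values 1, 0, 1 below stand in for the elided `...` assignments in the text.

```python
def NAND(a, b): return 1 - a * b

def IF(cond, a, b):
    notcond = NAND(cond, cond)
    return NAND(NAND(b, notcond), NAND(a, cond))

def sugared(cond, a, b, c):
    # original style: if (cond): a, b, c get new values
    if cond == 1:
        a, b, c = 1, 0, 1
    return a, b, c

def desugared(cond, a, b, c):
    # desugared style: unconditionally compute, then select with IF
    temp_a, temp_b, temp_c = 1, 0, 1
    a = IF(cond, temp_a, a)
    b = IF(cond, temp_b, b)
    c = IF(cond, temp_c, c)
    return a, b, c

# the two versions agree on every input
for cond in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert sugared(cond, a, b, c) == desugared(cond, a, b, c)
```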
Using such transformations, we can prove the following theorem. Once again we omit the (not too insightful) full formal proof, though see Section 4.1.2 for some hints on how to obtain it.
**Theorem 4.6 — Conditional statements syntactic sugar.** Let NAND-CIRC-IF be the programming language NAND-CIRC augmented with `if/then/else` statements for allowing code to be conditionally executed based on whether a variable is equal to 0 or 1.
Then for every NAND-CIRC-IF program \( P \), there exists a standard (i.e., “sugar-free”) NAND-CIRC program \( P' \) that computes the same function as \( P \).
### 4.2 EXTENDED EXAMPLE: ADDITION AND MULTIPLICATION (OPTIONAL)
Using “syntactic sugar”, we can write the integer addition function as follows:
```python
# Add two n-bit integers
# Use LSB first notation for simplicity
def ADD(A, B):
    Result = [0] * (n+1)
    Carry = [0] * (n+1)
    Carry[0] = zero(A[0])
    for i in range(n):
        Result[i] = XOR(Carry[i], XOR(A[i], B[i]))
        Carry[i+1] = MAJ(Carry[i], A[i], B[i])
    Result[n] = Carry[n]
    return Result

ADD([1, 1, 1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0])
```
where `zero` is the constant zero function, and `MAJ` and `XOR` correspond to the majority and XOR functions respectively. While we use Python syntax for convenience, in this example \( n \) is some fixed integer and so for every such \( n \), `ADD` is a finite function that takes as input \( 2n \)
bits and outputs \( n + 1 \) bits. In particular for every \( n \) we can remove the loop construct `for i in range(n)` by simply repeating the code \( n \) times, replacing the value of `i` with 0, 1, 2, \ldots, \( n - 1 \). By expanding out all the features, for every value of \( n \) we can translate the above program into a standard (“sugar-free”) NAND-CIRC program. Fig. 4.4 depicts what we get for \( n = 2 \).
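The program can be made fully runnable by supplying NAND-based stand-ins for the helpers it assumes; the `zero`, `XOR`, and `MAJ` definitions below are our own sketches of those helpers (using \(XOR(a,b) = AND(OR(a,b), NAND(a,b))\)), and `to_int` converts LSB-first bit lists back to integers for checking:

```python
def NAND(a, b): return 1 - a * b
def NOT(a): return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b): return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))
def MAJ(a, b, c): return OR(OR(AND(a, b), AND(a, c)), AND(b, c))
def zero(a): return AND(a, NOT(a))  # constant 0, regardless of a

def ADD(A, B):
    n = len(A)
    Result = [0] * (n + 1)
    Carry = [0] * (n + 1)
    Carry[0] = zero(A[0])
    for i in range(n):
        Result[i] = XOR(Carry[i], XOR(A[i], B[i]))
        Carry[i + 1] = MAJ(Carry[i], A[i], B[i])
    Result[n] = Carry[n]
    return Result

def to_int(bits):
    # LSB-first bit list to integer
    return sum(b << i for i, b in enumerate(bits))

# 7 + 1 = 8 in LSB-first notation
assert to_int(ADD([1,1,1,0,0,0,0,0], [1,0,0,0,0,0,0,0])) == 8
```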
By going through the above program carefully and accounting for the number of gates, we can see that it yields a proof of the following theorem (see also Fig. 4.5):
**Theorem 4.7 — Addition using NAND-CIRC programs.** For every \( n \in \mathbb{N} \), let \( ADD_n : \{0, 1\}^{2n} \to \{0, 1\}^{n+1} \) be the function that, given \( x, x' \in \{0, 1\}^n \) computes the representation of the sum of the numbers that \( x \) and \( x' \) represent. Then there is a constant \( c \leq 30 \) such that for every \( n \) there is a NAND-CIRC program of at most \( cn \) lines computing \( ADD_n \).
Once we have addition, we can use the grade-school algorithm to obtain multiplication as well, thus obtaining the following theorem:
**Theorem 4.8 — Multiplication using NAND-CIRC programs.** For every \( n \), let \( MULT_n : \{0, 1\}^{2n} \to \{0, 1\}^{2n} \) be the function that, given \( x, x' \in \{0, 1\}^n \) computes the representation of the product of the numbers that \( x \) and \( x' \) represent. Then there is a constant \( c \) such that for every \( n \), there is a NAND-CIRC program of at most \( cn^2 \) lines that computes the function \( MULT_n \).
We omit the proof, though in Exercise 4.7 we ask you to supply a “constructive proof” in the form of a program (in your favorite
---
1 The value of \( c \) can be improved to 9, see Exercise 4.5.
programming language) that on input a number \( n \), outputs the code of a NAND-CIRC program of at most \( 1000n^2 \) lines that computes the \( \text{MULT}_n \) function. In fact, we can use Karatsuba’s algorithm to show that there is a NAND-CIRC program of \( O(n^{\log_2 3}) \) lines to compute \( \text{MULT}_n \) (and we can get even further asymptotic improvements using better algorithms).
### 4.3 THE LOOKUP FUNCTION
The \( \text{LOOKUP} \) function will play an important role in this chapter and later. It is defined as follows:
**Definition 4.9 — Lookup function.** For every \( k \), the lookup function of order \( k \), \( \text{LOOKUP}_k : \{0, 1\}^{2^k+k} \to \{0, 1\} \) is defined as follows: For every \( x \in \{0, 1\}^{2^k} \) and \( i \in \{0, 1\}^k \),
\[
\text{LOOKUP}_k(x, i) = x_i
\]
where \( x_i \) denotes the \( i \)th entry of \( x \), using the binary representation to identify \( i \) with a number in \( \{0, \ldots, 2^k - 1\} \).
See Fig. 4.6 for an illustration of the \( \text{LOOKUP} \) function. It turns out that for every \( k \), we can compute \( \text{LOOKUP}_k \) using a NAND-CIRC program:
**Theorem 4.10 — Lookup function.** For every \( k > 0 \), there is a NAND-CIRC program that computes the function \( \text{LOOKUP}_k : \{0, 1\}^{2^k+k} \to \{0, 1\} \). Moreover, the number of lines in this program is at most \( 4 \cdot 2^k \).
An immediate corollary of Theorem 4.10 is that for every \( k > 0 \), \( \text{LOOKUP}_k \) can be computed by a Boolean circuit (with AND, OR and NOT gates) of at most \( 8 \cdot 2^k \) gates.
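The semantics of \( \text{LOOKUP}_k \) is easy to state directly in code; the sketch below computes the function itself (taking index bits most-significant-first), not the circuit that Theorem 4.10 constructs:

```python
def LOOKUP(x, i):
    """LOOKUP_k: x has 2**k entries, i is a list of k index bits (MSB first)."""
    k = len(i)
    assert len(x) == 2 ** k
    idx = 0
    for bit in i:
        idx = 2 * idx + bit
    return x[idx]

# LOOKUP_2 on x = (0, 1, 1, 0) with index i = (1, 0), i.e. the number 2
assert LOOKUP([0, 1, 1, 0], [1, 0]) == 1
```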
#### 4.3.1 Constructing a NAND-CIRC program for \( \text{LOOKUP} \)
We prove Theorem 4.10 by induction. For the case \( k = 1 \), \( \text{LOOKUP}_1 \) maps \( (x_0, x_1, i) \in \{0, 1\}^3 \) to \( x_i \). In other words, if \( i = 0 \) then it outputs \( x_0 \)
and otherwise it outputs \(x_1\), which (up to reordering variables) is the same as the \textit{IF} function presented in Section 4.1.3, which can be computed by a 4-line NAND-CIRC program.
As a warm-up for the case of general \(k\), let us consider the case of \(k = 2\). Given input \(x = (x_0, x_1, x_2, x_3)\) for \textit{LOOKUP}_2 and an index \(i = (i_0, i_1)\), if the most significant bit \(i_0\) of the index is 0 then \textit{LOOKUP}_2\((x, i)\) will equal \(x_0\) if \(i_1 = 0\) and equal \(x_1\) if \(i_1 = 1\). Similarly, if the most significant bit \(i_0\) is 1 then \textit{LOOKUP}_2\((x, i)\) will equal \(x_2\) if \(i_1 = 0\) and will equal \(x_3\) if \(i_1 = 1\). Another way to say this is that we can write \textit{LOOKUP}_2\((x, i)\) as follows:
```python
def LOOKUP2(X[0], X[1], X[2], X[3], i[0], i[1]):
    if i[0] == 1:
        return LOOKUP1(X[2], X[3], i[1])
    else:
        return LOOKUP1(X[0], X[1], i[1])
```
or in other words,
```python
def LOOKUP2(X[0], X[1], X[2], X[3], i[0], i[1]):
    a = LOOKUP1(X[2], X[3], i[1])
    b = LOOKUP1(X[0], X[1], i[1])
    return IF(i[0], a, b)
```
More generally, as shown in the following lemma, we can compute \textit{LOOKUP}_k using two invocations of \textit{LOOKUP}_{k-1} and one invocation of \textit{IF}:
**Lemma 4.11 — Lookup recursion.** For every \(k \geq 2\), \(\textit{LOOKUP}_k(x_0, \ldots, x_{2^k-1}, i_0, \ldots, i_{k-1})\) is equal to
\[
\text{IF}(i_0, \text{LOOKUP}_{k-1}(x_{2^{k-1}}, \ldots, x_{2^k-1}, i_1, \ldots, i_{k-1}), \text{LOOKUP}_{k-1}(x_0, \ldots, x_{2^{k-1}-1}, i_1, \ldots, i_{k-1}))
\]
**Proof.** If the most significant bit \(i_0\) of \(i\) is zero, then the index \(i\) is in \(\{0, \ldots, 2^{k-1} - 1\}\) and hence we can perform the lookup on the “first half” of \(x\) and the result of \textit{LOOKUP}_k\((x, i)\) will be the same as \(a = \text{LOOKUP}_{k-1}(x_0, \ldots, x_{2^{k-1}-1}, i_1, \ldots, i_{k-1})\). On the other hand, if this most significant bit \(i_0\) is equal to 1, then the index is in \(\{2^{k-1}, \ldots, 2^k - 1\}\), in which case the result of \textit{LOOKUP}_k\((x, i)\) is the same as \(b = \text{LOOKUP}_{k-1}(x_{2^{k-1}}, \ldots, x_{2^k-1}, i_1, \ldots, i_{k-1})\). Thus we can compute \textit{LOOKUP}_k\((x, i)\) by first computing \(a\) and \(b\) and then outputting \(\text{IF}(i_0, b, a)\).
**Proof of Theorem 4.10 from Lemma 4.11.** Now that we have Lemma 4.11, we can complete the proof of Theorem 4.10. We will prove by induction on \(k\) that there is a NAND-CIRC program of at most \(4 \cdot (2^k - 1)\)
lines for $LOOKUP_k$. For $k = 1$ this follows from the four-line program for $IF$ we’ve seen before. For $k > 1$, we use the following pseudocode:
```
a = LOOKUP_{k-1}(X[0], \ldots, X[2^{k-1}-1], i[1], \ldots, i[k-1])
b = LOOKUP_{k-1}(X[2^{k-1}], \ldots, X[2^k-1], i[1], \ldots, i[k-1])
return IF(i[0], b, a)
```
If we let $L(k)$ be the number of lines required for $LOOKUP_k$, then the above pseudo-code shows that
$$L(k) \leq 2L(k-1) + 4.$$ \hfill (4.1)
Since under our induction hypothesis $L(k-1) \leq 4(2^{k-1} - 1)$, we get that $L(k) \leq 2 \cdot 4(2^{k-1} - 1) + 4 = 4(2^k - 1)$ which is what we wanted to prove. See Fig. 4.7 for a plot of the actual number of lines in our implementation of $LOOKUP_k$.
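The recursion, and the resulting line count \( L(k) = 4(2^k - 1) \), can both be checked in code; this is our own sketch, using the four-line NAND implementation of IF from Section 4.1.3:

```python
from itertools import product

def NAND(a, b): return 1 - a * b

def IF(cond, a, b):
    notcond = NAND(cond, cond)
    return NAND(NAND(b, notcond), NAND(a, cond))

def LOOKUP(x, i):
    # recursive construction from Lemma 4.11 (index bits MSB first)
    if len(i) == 1:
        return IF(i[0], x[1], x[0])
    half = len(x) // 2
    a = LOOKUP(x[:half], i[1:])   # lookup on the first half
    b = LOOKUP(x[half:], i[1:])   # lookup on the second half
    return IF(i[0], b, a)

# check against direct indexing for k = 3
for x in product((0, 1), repeat=8):
    for i in product((0, 1), repeat=3):
        idx = (i[0] << 2) | (i[1] << 1) | i[2]
        assert LOOKUP(list(x), list(i)) == x[idx]

# the recurrence L(k) = 2 L(k-1) + 4 with L(1) = 4 gives 4 (2^k - 1)
def L(k):
    return 4 if k == 1 else 2 * L(k - 1) + 4

assert all(L(k) == 4 * (2 ** k - 1) for k in range(1, 10))
```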
### 4.4 Computing Every Function
At this point we know the following facts about NAND-CIRC programs (and so equivalently about Boolean circuits and our other equivalent models):
1. They can compute at least some non-trivial functions.
2. Coming up with NAND-CIRC programs for various functions is a very tedious task.
Thus I would not blame the reader if they were not particularly looking forward to a long sequence of examples of functions that can be computed by NAND-CIRC programs. However, it turns out we are not going to need this, as we can show in one fell swoop that NAND-CIRC programs can compute every finite function:
**Theorem 4.12 — Universality of NAND.** There exists some constant $c > 0$ such that for every $n, m > 0$ and function $f : \{0, 1\}^n \to \{0, 1\}^m$, there is a NAND-CIRC program with at most $c \cdot m2^n$ lines that computes the function $f$.
By Theorem 3.19, the models of NAND circuits, NAND-CIRC programs, AON-CIRC programs, and Boolean circuits, are all equivalent to one another, and hence Theorem 4.12 holds for all these models. In particular, the following theorem is equivalent to Theorem 4.12:
**Theorem 4.13 — Universality of Boolean circuits.** There exists some constant \( c > 0 \) such that for every \( n, m > 0 \) and function
\[
f : \{0, 1\}^n \to \{0, 1\}^m,
\]
there is a Boolean circuit with at most \( c \cdot m2^n \) gates that computes the function \( f \).

(In case you are curious, the example function \( G \) of Section 4.4.1 is the function that, on input \( i \in \{0, 1\}^4 \), which we interpret as a number in \( [16] \), outputs the \( i \)-th digit of \( \pi \) in the binary basis.)
**Big Idea 5** Every finite function can be computed by a large enough Boolean circuit.
**Improved bounds.** Though it will not be of great importance to us, it is possible to improve on the proof of Theorem 4.12 and shave an extra factor of \( n \), as well as optimize the constant \( c \), and so prove that for every \( \epsilon > 0 \), \( m \in \mathbb{N} \) and sufficiently large \( n \), if \( f : \{0, 1\}^n \to \{0, 1\}^m \) then \( f \) can be computed by a NAND circuit of at most \( (1 + \epsilon)\frac{m \cdot 2^n}{n} \) gates. The proof of this result is beyond the scope of this book, but we do discuss how to obtain a bound of the form \( O(\frac{m \cdot 2^n}{n}) \) in Section 4.4.2; see also the biographical notes.
### 4.4.1 Proof of NAND’s Universality
To prove Theorem 4.12, we need to give a NAND circuit, or equivalently a NAND-CIRC program, for every possible function. We will restrict our attention to the case of Boolean functions (i.e., \( m = 1 \)). Exercise 4.9 asks you to extend the proof for all values of \( m \). A function \( F : \{0, 1\}^n \to \{0, 1\} \) can be specified by a table of its values for each one of the \( 2^n \) inputs. For example, the table below describes one particular function \( G : \{0, 1\}^4 \to \{0, 1\} \).
**Table 4.1:** An example of a function \( G : \{0, 1\}^4 \to \{0, 1\} \).
<table>
<thead>
<tr>
<th>Input ((x))</th>
<th>Output ((G(x)))</th>
</tr>
</thead>
<tbody>
<tr>
<td>0000</td>
<td>1</td>
</tr>
<tr>
<td>0001</td>
<td>1</td>
</tr>
<tr>
<td>0010</td>
<td>0</td>
</tr>
<tr>
<td>0011</td>
<td>0</td>
</tr>
<tr>
<td>0100</td>
<td>1</td>
</tr>
<tr>
<td>0101</td>
<td>0</td>
</tr>
<tr>
<td>0110</td>
<td>0</td>
</tr>
<tr>
<td>0111</td>
<td>1</td>
</tr>
<tr>
<td>1000</td>
<td>0</td>
</tr>
<tr>
<td>1001</td>
<td>0</td>
</tr>
<tr>
<td>1010</td>
<td>0</td>
</tr>
<tr>
<td>1011</td>
<td>0</td>
</tr>
<tr>
<td>1100</td>
<td>1</td>
</tr>
<tr>
<td>1101</td>
<td>1</td>
</tr>
<tr>
<td>1110</td>
<td>1</td>
</tr>
<tr>
<td>1111</td>
<td>1</td>
</tr>
</tbody>
</table>
\(^2\) In case you are curious, this is the function on input \( i \in \{0, 1\}^4 \) (which we interpret as a number in \([16]\)), that outputs the \( i \)-th digit of \( \pi \) in the binary basis.
For every $x \in \{0, 1\}^4$, $G(x) = \text{LOOKUP}_4(1100100100001111, x)$, and so the following is NAND-CIRC “pseudocode” to compute $G$ using syntactic sugar for the \text{LOOKUP}_4 procedure.
\[
\begin{align*}
G0000 &= 1 \\
G1000 &= 1 \\
G0100 &= 0 \\
&\ldots \\
G0111 &= 1 \\
G1111 &= 1 \\
Y[0] &= \text{LOOKUP}_4(G0000, G1000, \ldots, G1111, X[0], X[1], X[2], X[3])
\end{align*}
\]
We can translate this pseudocode into an actual NAND-CIRC program by adding three lines to define variables zero and one that are initialized to 0 and 1 respectively, and then replacing a statement such as $Gxxx = 0$ with $Gxxx = \text{NAND}(\text{one}, \text{one})$ and a statement such as $Gxxx = 1$ with $Gxxx = \text{NAND}(\text{zero}, \text{zero})$. The call to \text{LOOKUP}_4 will be replaced by the NAND-CIRC program that computes $\text{LOOKUP}_4$, plugging in the appropriate inputs.
There was nothing about the above reasoning that was particular to the function $G$ above. Given every function $F : \{0, 1\}^n \rightarrow \{0, 1\}$, we can write a NAND-CIRC program that does the following:
1. Initialize $2^n$ variables of the form $F000\ldots0$ till $F11\ldots1$ so that for every $z \in \{0, 1\}^n$, the variable corresponding to $z$ is assigned the value $F(z)$.
2. Compute $\text{LOOKUP}_n$ on the $2^n$ variables initialized in the previous step, with the index variables being the input variables $X[0], \ldots, X[n-1]$. That is, just like in the pseudocode for $G$ above, we use $Y[0] = \text{LOOKUP}_n(F0\ldots0, \ldots, F1\ldots1, X[0], \ldots, X[n-1])$.
The total number of lines in the resulting program is $3 + 2^n$ lines for initializing the variables plus the $4 \cdot 2^n$ lines that we pay for computing $\text{LOOKUP}_n$. This completes the proof of Theorem 4.12.
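The whole construction can be mirrored in a short Python sketch (hypothetical names, not the book's code): step 1 tabulates $F$ on all $2^n$ inputs, and step 2 reduces "computing" $F(x)$ to a single table lookup, which is exactly the role $\text{LOOKUP}_n$ plays in the program above.

```python
def compile_to_table(f, n):
    # Step 1: initialize 2**n "variables", one per input z, holding f(z).
    table = [f(tuple((z >> (n - 1 - j)) & 1 for j in range(n)))
             for z in range(2 ** n)]
    # Step 2: the compiled "program" just runs LOOKUP_n on the table.
    def circuit(x):  # x is a tuple of n bits, most significant first
        index = int("".join(map(str, x)), 2)
        return table[index]
    return circuit
```

Applied to the function $G$ of Table 4.1 (truth table `1100100100001111`), the compiled circuit agrees with $G$ on all sixteen inputs.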
**Remark — Result in perspective.** While Theorem 4.12 seems striking at first, in retrospect, it is perhaps not that surprising that every finite function can be computed with a NAND-CIRC program. After all, a finite function $F : \{0, 1\}^n \rightarrow \{0, 1\}^m$ can be represented by simply the list of its outputs for each one of the $2^n$ input values. So it makes sense that we could write a NAND-CIRC program of similar size to compute it. What is more interesting is that some functions, such as addition and multiplication, can be computed by programs far smaller than this exponential bound — polynomial in $n$, as we have seen earlier in this chapter.
### 4.4.2 Improving by a factor of $n$ (optional)
By being a little more careful, we can improve the bound of Theorem 4.12 and show that every function $F : \{0, 1\}^n \to \{0, 1\}^m$ can be computed by a NAND-CIRC program of at most $O(m2^n/n)$ lines. In other words, we can prove the following improved version:
**Theorem 4.15 — Universality of NAND circuits, improved bound.** There exists a constant $c > 0$ such that for every $n, m > 0$ and function $f : \{0, 1\}^n \to \{0, 1\}^m$, there is a NAND-CIRC program with at most $c \cdot m2^n/n$ lines that computes the function $f$. ³
Proof. As before, it is enough to prove the case that $m = 1$. Hence we let $f : \{0, 1\}^n \to \{0, 1\}$, and our goal is to prove that there exists a NAND-CIRC program of $O(2^n/n)$ lines (or equivalently a Boolean circuit of $O(2^n/n)$ gates) that computes $f$.
We let $k = \log(n - 2 \log n)$ (the reasoning behind this choice will become clear later on). We define the function $g : \{0, 1\}^k \to \{0, 1\}^{2^{n-k}}$ as follows:
$$g(a) = f(a0^{n-k})f(a0^{n-k-1}1) \cdots f(a1^{n-k}).$$
In other words, if we use the usual binary representation to identify the numbers $\{0, \ldots, 2^{n-k} - 1\}$ with the strings $\{0, 1\}^{n-k}$, then for every $a \in \{0, 1\}^k$ and $b \in \{0, 1\}^{n-k}$
$$g(a)_b = f(ab). \quad (4.2)$$
(4.2) means that for every $x \in \{0, 1\}^n$, if we write $x = ab$ with $a \in \{0, 1\}^k$ and $b \in \{0, 1\}^{n-k}$ then we can compute $f(x)$ by first
---
³ The constant $c$ in this theorem is at most 10 and in fact can be arbitrarily close to 1, see Section 4.8.
computing the string $T = g(a)$ of length $2^{n-k}$, and then computing $\text{LOOKUP}_{n-k}(T, b)$ to retrieve the element of $T$ at the position corresponding to $b$ (see Fig. 4.8). The cost to compute the $\text{LOOKUP}_{n-k}$ is $O(2^{n-k})$ lines/gates and the cost in NAND-CIRC lines (or Boolean gates) to compute $f$ is at most
$$\text{cost}(g) + O(2^{n-k}), \quad (4.3)$$
where $\text{cost}(g)$ is the number of operations (i.e., lines of NAND-CIRC programs or gates in a circuit) needed to compute $g$.
To complete the proof we need to give a bound on $\text{cost}(g)$. Since $g$ is a function mapping $\{0, 1\}^k$ to $\{0, 1\}^{2^{n-k}}$, we can also think of it as a collection of $2^{n-k}$ functions $g_0, \ldots, g_{2^{n-k}-1} : \{0, 1\}^k \to \{0, 1\}$, where $g_i(a) = g(a)_i$ for every $a \in \{0, 1\}^k$ and $i \in [2^{n-k}]$. (That is, $g_i(a)$ is the $i$-th bit of $g(a)$.) Naively, we could use Theorem 4.12 to compute each $g_i$ in $O(2^k)$ lines, but then the total cost is $O(2^{n-k} \cdot 2^k) = O(2^n)$ which does not save us anything. However, the crucial observation is that there are only $2^{2^k}$ distinct functions mapping $\{0, 1\}^k$ to $\{0, 1\}$.
For example, if $g_{17}$ is an identical function to $g_{67}$ that means that if we already computed $g_{17}(a)$ then we can compute $g_{67}(a)$ using only a constant number of operations: simply copy the same value! In general, if you have a collection of $N$ functions $g_0, \ldots, g_{N-1}$ mapping $\{0, 1\}^k$ to $\{0, 1\}$, of which at most $S$ are distinct then for every value $a \in \{0, 1\}^k$ we can compute the $N$ values $g_0(a), \ldots, g_{N-1}(a)$ using at most $O(S \cdot 2^k + N)$ operations (see Fig. 4.9).
In our case, because there are at most $2^{2^k}$ distinct functions mapping $\{0, 1\}^k$ to $\{0, 1\}$, we can compute the function $g$ (and hence by (4.2) also $f$) using at most
$$O(2^{2^k} \cdot 2^k + 2^{n-k}) \quad (4.4)$$
operations. Now all that is left is to plug into (4.4) our choice of $k = \log(n - 2 \log n)$. By definition, $2^k = n - 2 \log n$, which means that (4.4) can be bounded by
$$O \left( 2^{n-2 \log n} \cdot (n - 2 \log n) + 2^{n-\log(n-2 \log n)} \right) \leq$$
$$O \left( \frac{2^n}{n^2} \cdot n + \frac{2^n}{n-2 \log n} \right) \leq O \left( \frac{2^n}{n} + \frac{2^n}{0.5n} \right) = O \left( \frac{2^n}{n} \right),$$
which is what we wanted to prove. (We used above the fact that $n - 2 \log n \geq 0.5n$ for sufficiently large $n$.)
$$\blacksquare$$
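The pigeonhole observation at the heart of this proof is easy to check empirically. In the hypothetical Python sketch below we split $x = ab$, tabulate each function $g_b(a) = f(ab)$, and count how many distinct tables occur — never more than $2^{2^k}$, no matter how large $n$ is:

```python
def count_distinct_subfunctions(f, n, k):
    # f takes (a, b) with a in [2**k] and b in [2**(n-k)]; the
    # subfunction g_b is defined by g_b(a) = f(ab), tabulated below.
    tables = set()
    for b in range(2 ** (n - k)):
        tables.add(tuple(f(a, b) for a in range(2 ** k)))
    return len(tables)
```

Whatever `f` we plug in, the count is capped by the number of distinct Boolean functions on $k$ bits, so the shared sub-circuits can be computed once and reused.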
Using the connection between NAND-CIRC programs and Boolean circuits, an immediate corollary of Theorem 4.15 is the following improvement to Theorem 4.13:
Theorem 4.16 — Universality of Boolean circuits, improved bound. There exists some constant $c > 0$ such that for every $n, m > 0$ and function $f : \{0, 1\}^n \rightarrow \{0, 1\}^m$, there is a Boolean circuit with at most $c \cdot m 2^n / n$ gates that computes the function $f$.
### 4.5 COMPUTING EVERY FUNCTION: AN ALTERNATIVE PROOF
Theorem 4.13 is a fundamental result in the theory (and practice!) of computation. In this section, we present an alternative proof of this basic fact that Boolean circuits can compute every finite function. This alternative proof gives a somewhat worse quantitative bound on the number of gates but it has the advantage of being simpler, working directly with circuits and avoiding the usage of all the syntactic sugar machinery. (However, that machinery is useful in its own right, and will find other applications later on.)
Theorem 4.17 — Universality of Boolean circuits (alternative phrasing). There exists some constant $c > 0$ such that for every $n, m > 0$ and function $f : \{0, 1\}^n \rightarrow \{0, 1\}^m$, there is a Boolean circuit with at most $c \cdot m \cdot 2^n$ gates that computes the function $f$.
Proof Idea:
The idea of the proof is illustrated in Fig. 4.10. As before, it is enough to focus on the case that $m = 1$ (the function $f$ has a single output), since we can always extend this to the case of $m > 1$ by looking at the composition of $m$ circuits each computing a different output bit of the function $f$. We start by showing that for every $\alpha \in \{0, 1\}^n$, there is an $O(n)$-sized circuit that computes the function $\delta_\alpha : \{0, 1\}^n \rightarrow \{0, 1\}$ defined as follows: $\delta_\alpha(x) = 1$ iff $x = \alpha$ (that is, $\delta_\alpha$ outputs 0 on all inputs except the input $\alpha$). We can then write any function $f : \{0, 1\}^n \rightarrow \{0, 1\}$ as the OR of at most $2^n$ functions $\delta_\alpha$ for the $\alpha$’s on which $f(\alpha) = 1$.
$\star$
Proof of Theorem 4.17. We prove the theorem for the case $m = 1$. The result can be extended for $m > 1$ as before (see also Exercise 4.9). Let $f : \{0, 1\}^n \rightarrow \{0, 1\}$. We will prove that there is an $O(n \cdot 2^n)$-sized Boolean circuit to compute $f$ in the following steps:
1. We show that for every $\alpha \in \{0, 1\}^n$, there is an $O(n)$-sized circuit that computes the function $\delta_\alpha : \{0, 1\}^n \rightarrow \{0, 1\}$, where $\delta_\alpha(x) = 1$ iff $x = \alpha$.
2. We then show that this implies the existence of an $O(n \cdot 2^n)$-sized circuit that computes $f$, by writing $f(x)$ as the OR of $\delta_\alpha(x)$ for all $\alpha \in \{0, 1\}^n$.
For every string \( \alpha \in \{0, 1\}^n \), there is a Boolean circuit of \( O(n) \) gates to compute the function \( \delta_\alpha : \{0, 1\}^n \to \{0, 1\} \) such that \( \delta_\alpha(x) = 1 \) if and only if \( x = \alpha \). The circuit is very simple. Given input \( x_0, \ldots, x_{n-1} \) we compute the AND of \( z_0, \ldots, z_{n-1} \) where \( z_i = x_i \) if \( \alpha_i = 1 \) and \( z_i = \text{NOT}(x_i) \) if \( \alpha_i = 0 \).
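Both steps can be rendered in a hypothetical Python sketch (illustrative names, not the book's code): `delta(alpha)` is the AND-of-literals circuit just described, and `or_of_deltas` writes $f$ as the OR of the deltas over $f$'s satisfying inputs.

```python
def all_inputs(n):
    # Enumerate {0,1}^n as bit tuples, most significant bit first.
    return [tuple((z >> (n - 1 - j)) & 1 for j in range(n))
            for z in range(2 ** n)]

def delta(alpha):
    # delta_alpha(x) = AND of z_0..z_{n-1}, where z_i = x_i if alpha_i = 1
    # and z_i = NOT(x_i) if alpha_i = 0; equals 1 iff x == alpha.
    return lambda x: int(all((xi if ai else 1 - xi)
                             for xi, ai in zip(x, alpha)))

def or_of_deltas(f, n):
    # f(x) = OR over all alpha with f(alpha) = 1 of delta_alpha(x):
    # at most 2**n deltas of O(n) gates each, so O(n * 2**n) gates total.
    ones = [a for a in all_inputs(n) if f(a)]
    return lambda x: int(any(delta(a)(x) for a in ones))
```

For instance, rewriting the 3-bit majority function this way reproduces it on all eight inputs.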
### 4.6 THE CLASS \( \text{SIZE}_{n,m}(s) \)
We have seen that every function \( f : \{0, 1\}^n \to \{0, 1\}^m \) can be computed by a circuit of size \( O(m \cdot 2^n) \), and some functions (such as addition and multiplication) can be computed by much smaller circuits.
We define \( \text{SIZE}_{n,m}(s) \) to be the set of functions mapping \( n \) bits to \( m \) bits that can be computed by NAND circuits of at most \( s \) gates (or equivalently, by NAND-CIRC programs of at most \( s \) lines). Formally, the definition is as follows:
\[ \text{SIZE}_{n,m}(s) = \{ f : \{0, 1\}^n \to \{0, 1\}^m \mid \exists \text{ NAND circuit of at most } s \text{ gates computing } f \} \]
Definition 4.18 — Size class of functions. For all natural numbers \( n, m, s \), let \( \text{SIZE}_{n,m}(s) \) denote the set of all functions \( f : \{0, 1\}^n \rightarrow \{0, 1\}^m \) such that there exists a NAND circuit of at most \( s \) gates computing \( f \). We denote by \( \text{SIZE}_n(s) \) the set \( \text{SIZE}_{n,1}(s) \). For every integer \( s \geq 1 \), we let \( \text{SIZE}(s) = \bigcup_{n,m} \text{SIZE}_{n,m}(s) \) be the set of all functions \( f \) for which there exists a NAND circuit of at most \( s \) gates that compute \( f \).
Fig. 4.12 depicts the set \( \text{SIZE}_{n,1}(s) \). Note that \( \text{SIZE}_{n,m}(s) \) is a set of functions, not of programs! Asking if a program or a circuit is a member of \( \text{SIZE}_{n,m}(s) \) is a category error, in the sense of Fig. 4.13. As we discussed in Section 3.7.2 (and Section 2.6.1), the distinction between programs and functions is absolutely crucial. You should always remember that while a program computes a function, it is not equal to a function. In particular, as we’ve seen, there can be more than one program to compute the same function.
Figure 4.12: There are \( 2^{2^n} \) functions mapping \( \{0, 1\}^n \) to \( \{0, 1\} \), and an infinite number of circuits with \( n \) bit inputs and a single bit of output. Every circuit computes one function, but every function can be computed by many circuits. We say that \( f \in \text{SIZE}_{n,1}(s) \) if the smallest circuit that computes \( f \) has \( s \) or fewer gates. For example \( \text{XOR}_n \in \text{SIZE}_{n,1}(4n) \). Theorem 4.15 shows that every function \( g \) is computable by some circuit of at most \( c \cdot 2^n / n \) gates, and hence \( \text{SIZE}_{n,1}(c \cdot 2^n / n) \) corresponds to the set of all functions from \( \{0, 1\}^n \) to \( \{0, 1\} \).
While we defined \( \text{SIZE}_n(s) \) with respect to NAND gates, we would get essentially the same class if we defined it with respect to AND/OR/NOT gates:
Lemma 4.19 Let \( \text{SIZE}^{\text{AON}}_{n,m}(s) \) denote the set of all functions \( f : \{0, 1\}^n \rightarrow \{0, 1\}^m \) that can be computed by an AND/OR/NOT Boolean circuit of at most \( s \) gates. Then,
\[
\text{SIZE}_{n,m}(s/2) \subseteq \text{SIZE}^{\text{AON}}_{n,m}(s) \subseteq \text{SIZE}_{n,m}(3s)
\]
Proof. If \( f \) can be computed by a NAND circuit of at most \( s/2 \) gates, then by replacing each NAND with the two gates NOT and AND, we can obtain an AND/OR/NOT Boolean circuit of at most \( s \) gates that
computes \( f \). On the other hand, if \( f \) can be computed by a Boolean AND/OR/NOT circuit of at most \( s \) gates, then by Theorem 3.12 it can be computed by a NAND circuit of at most \( 3s \) gates.
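The two gate-for-gate translations used in this proof can be written out directly. In the sketch below (hypothetical Python, with gate counts noted in comments), the first direction replaces each NAND by NOT-of-AND, and the second builds AND, OR, and NOT from at most three NAND gates each:

```python
def NAND(a, b):
    return 1 - a * b

# NAND in terms of AND/OR/NOT: two gates per NAND, which is why s/2
# NAND gates become at most s AND/OR/NOT gates.
def nand_via_aon(a, b):
    return 1 - (a & b)                      # NOT(AND(a, b))

# AND/OR/NOT in terms of NAND: at most three NAND gates each, which is
# why s AND/OR/NOT gates become at most 3s NAND gates.
def NOT(a):    return NAND(a, a)                     # 1 NAND gate
def AND(a, b): return NAND(NAND(a, b), NAND(a, b))   # 2 NAND gates
def OR(a, b):  return NAND(NAND(a, a), NAND(b, b))   # 3 NAND gates
```

Checking all four input pairs confirms that both translations preserve the computed function.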
The results we have seen in this chapter can be phrased as showing that \( \text{ADD}_n \in \text{SIZE}_{2n,n+1}(100n) \) and \( \text{MULT}_n \in \text{SIZE}_{2n,2n}(10000n^{\log_2 3}) \). Theorem 4.12 shows that for some constant \( c \), \( \text{SIZE}_{n,m}(cm2^n) \) is equal to the set of all functions from \( \{0, 1\}^n \) to \( \{0, 1\}^m \).
**Remark 4.20 — Finite vs infinite functions.** Unlike programming languages such as Python, C or JavaScript, the NAND-CIRC and AON-CIRC programming languages do not have arrays. A NAND-CIRC program \( P \) has some fixed numbers \( n \) and \( m \) of input and output variables. Hence, for example, there is no single NAND-CIRC program that can compute the increment function \( \text{INC} : \{0, 1\}^* \to \{0, 1\}^* \) that maps a string \( x \) (which we identify with a number via the binary representation) to the string that represents \( x + 1 \). Rather, for every \( n > 0 \), there is a NAND-CIRC program \( P_n \) that computes the restriction \( \text{INC}_n \) of the function \( \text{INC} \) to inputs of length \( n \). Since it can be shown that for every \( n > 0 \) such a program \( P_n \) exists with at most \( 10n \) lines, \( \text{INC}_n \in \text{SIZE}_{n,n+1}(10n) \) for every \( n > 0 \).
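To make the restriction concrete, here is a hypothetical Python sketch of \( \text{INC}_n \): one function per input length \( n \), mapping \( n \) bits to \( n + 1 \) bits by rippling a carry. (This sketch assumes a least-significant-bit-first encoding; each loop iteration uses a constant number of bit operations, in line with the \( O(n) \)-line bound mentioned above.)

```python
def inc_n(bits):
    # Restriction of INC to inputs of length n = len(bits), given
    # least-significant bit first; output has n + 1 bits for the carry.
    out, carry = [], 1
    for b in bits:
        out.append(b ^ carry)   # sum bit: XOR of input bit and carry
        carry = b & carry       # next carry: AND of input bit and carry
    out.append(carry)
    return out
```

For example, `inc_n([1, 1, 1])` (the number 7 on three bits) yields the four bits encoding 8.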
For the time being, our focus will be on finite functions, but we will discuss how to extend the definition of size complexity to functions with unbounded input lengths later on in Section 13.6.
**Solved Exercise 4.1 — \( \text{SIZE} \) closed under complement.** In this exercise we prove a certain “closure property” of the class \( \text{SIZE}_n(s) \). That is, we show that if \( f \) is in this class then (up to some small additive term) so is the complement of \( f \), which is the function \( g(x) = 1 - f(x) \).
Prove that there is a constant \( c \) such that for every \( f : \{0, 1\}^n \to \{0, 1\} \) and \( s \in \mathbb{N} \), if \( f \in \text{SIZE}_n(s) \) then \( 1 - f \in \text{SIZE}_n(s + c) \).
**Solution:**
If \( f \in \text{SIZE}_n(s) \) then there is an \( s \)-line NAND-CIRC program \( P \) that computes \( f \). We can rename the variable \( Y[0] \) in \( P \) to a variable \( \text{temp} \) and add the line
\[
Y[0] = \text{NAND}(\text{temp}, \text{temp})
\]
at the very end to obtain a program \( P' \) that computes \( 1 - f \).
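The same trick, in a hypothetical Python sketch: given any program computing \( f \), one extra NAND of the old output with itself computes \( 1 - f \).

```python
def NAND(a, b):
    return 1 - a * b

def complement(program):
    # Rename the old output to "temp", then add Y[0] = NAND(temp, temp):
    # the new program computes 1 - f at the cost of one extra line.
    def negated(x):
        temp = program(x)
        return NAND(temp, temp)
    return negated
```

Since `NAND(t, t) = 1 - t` for a bit `t`, the wrapper flips the output on every input.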
Chapter Recap
- We can define the notion of computing a function via a simplified “programming language”, where computing a function $F$ in $T$ steps would correspond to having a $T$-line NAND-CIRC program that computes $F$.
- While the NAND-CIRC programming language only has one operation, other features such as procedures and conditional execution can be implemented using it.
- Every function $f : \{0, 1\}^n \to \{0, 1\}^m$ can be computed by a circuit of at most $O(m2^n)$ gates (and in fact at most $O(m2^n/n)$ gates).
- Sometimes (or maybe always?) we can translate an efficient algorithm to compute $f$ into a circuit that computes $f$ with a number of gates comparable to the number of steps in this algorithm.
### 4.7 EXERCISES
Exercise 4.1 — Pairing. This exercise asks you to give a one-to-one map from $\mathbb{N}^2$ to $\mathbb{N}$. This can be useful to implement two-dimensional arrays as “syntactic sugar” in programming languages that only have one-dimensional arrays.
1. Prove that the map $F(x, y) = 2^x 3^y$ is a one-to-one map from $\mathbb{N}^2$ to $\mathbb{N}$.
2. Show that there is a one-to-one map $F : \mathbb{N}^2 \to \mathbb{N}$ such that for every $x, y$, $F(x, y) \leq 100 \cdot \max\{x, y\}^2 + 100$.
3. For every $k$, show that there is a one-to-one map $F : \mathbb{N}^k \to \mathbb{N}$ such that for every $x_0, \ldots, x_{k-1} \in \mathbb{N}$, $F(x_0, \ldots, x_{k-1}) \leq 100 \cdot (x_0 + x_1 + \ldots + x_{k-1} + 100k)^k$.
Exercise 4.2 — Computing MUX. Prove that the NAND-CIRC program below computes the function $MUX$ (or $LOOKUP_1$) where $MUX(a, b, c)$ equals $a$ if $c = 0$ and equals $b$ if $c = 1$:
$$
\begin{align*}
t &= \text{NAND}(X[2], X[2]) \\
u &= \text{NAND}(X[0], t) \\
v &= \text{NAND}(X[1], X[2]) \\
Y[0] &= \text{NAND}(u, v)
\end{align*}
$$
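One way to carry out the proof is sheer exhaustion: the program has only eight possible inputs, so we can transcribe it (hypothetically, in Python) and check every input against the definition of \( MUX \).

```python
def NAND(a, b):
    return 1 - a * b

def mux_program(x):
    # Direct transcription of the 4-line NAND-CIRC program above,
    # with x = (X[0], X[1], X[2]) = (a, b, c).
    t = NAND(x[2], x[2])
    u = NAND(x[0], t)
    v = NAND(x[1], x[2])
    return NAND(u, v)
```

Enumerating all eight triples confirms the program outputs \( a \) when \( c = 0 \) and \( b \) when \( c = 1 \).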
Exercise 4.3 — At least two / Majority. Give a NAND-CIRC program of at most 6 lines to compute the function \( \text{MAJ} : \{0, 1\}^3 \rightarrow \{0, 1\} \) where \( \text{MAJ}(a, b, c) = 1 \) iff \( a + b + c \geq 2 \).
Exercise 4.4 — Conditional statements. In this exercise we will explore Theorem 4.6: transforming NAND-CIRC-IF programs that use code such as if .. then .. else .. to standard NAND-CIRC programs.
1. Give a “proof by code” of Theorem 4.6: a program in a programming language of your choice that transforms a NAND-CIRC-IF program \( P \) into a “sugar-free” NAND-CIRC program \( P' \) that computes the same function. See footnote for hint.
2. Prove the following statement, which is the heart of Theorem 4.6: suppose that there exists an \( s \)-line NAND-CIRC program to compute \( f : \{0, 1\}^n \rightarrow \{0, 1\} \) and an \( s' \)-line NAND-CIRC program to compute \( g : \{0, 1\}^n \rightarrow \{0, 1\} \). Prove that there exist a NAND-CIRC program of at most \( s + s' + 10 \) lines to compute the function \( h : \{0, 1\}^{n+1} \rightarrow \{0, 1\} \) where \( h(x_0, \ldots, x_{n-1}, x_n) \) equals \( f(x_0, \ldots, x_{n-1}) \) if \( x_n = 0 \) and equals \( g(x_0, \ldots, x_{n-1}) \) otherwise. (All programs in this item are standard “sugar-free” NAND-CIRC programs.)
Exercise 4.5 — Half and full adders. 1. A half adder is the function \( \text{HA} : \{0, 1\}^2 \rightarrow \{0, 1\}^2 \) that corresponds to adding two binary bits. That is, for every \( a, b \in \{0, 1\} \), \( \text{HA}(a, b) = (e, f) \) where \( 2e + f = a + b \). Prove that there is a NAND circuit of at most five NAND gates that computes \( \text{HA} \).
2. A full adder is the function \( \text{FA} : \{0, 1\}^3 \rightarrow \{0, 1\}^2 \) that takes in two bits and a “carry” bit and outputs their sum. That is, for every \( a, b, c \in \{0, 1\} \), \( \text{FA}(a, b, c) = (e, f) \) such that \( 2e + f = a + b + c \). Prove that there is a NAND circuit of at most nine NAND gates that computes \( \text{FA} \).
3. Prove that if there is a NAND circuit of \( c \) gates that computes \( \text{FA} \), then there is a circuit of \( cn \) gates that computes \( \text{ADD}_n \), where (as in Theorem 4.7) \( \text{ADD}_n : \{0, 1\}^{2n} \rightarrow \{0, 1\}^{n+1} \) is the function that outputs the addition of two input \( n \)-bit numbers. See footnote for hint.
4. Show that for every \( n \) there is a NAND-CIRC program to compute \( \text{ADD}_n \) with at most \( 9n \) lines.
\(^{4}\) You can start by transforming \( P \) into a NAND-CIRC-PROC program that uses procedure statements, and then use the code of Fig. 4.3 to transform the latter into a “sugar-free” NAND-CIRC program.
\(^{5}\) Use a “cascade” of adding the bits one after the other, starting with the least significant digit, just like in the elementary-school algorithm.
Exercise 4.6 — Addition. Write a program using your favorite programming language that on input of an integer $n$, outputs a NAND-CIRC program that computes $ADD_n$. Can you ensure that the program it outputs for $ADD_n$ has fewer than $10n$ lines?
Exercise 4.7 — Multiplication. Write a program using your favorite programming language that on input of an integer $n$, outputs a NAND-CIRC program that computes $MULT_n$. Can you ensure that the program it outputs for $MULT_n$ has fewer than $1000 \cdot n^2$ lines?
Exercise 4.8 — Efficient multiplication (challenge). Write a program using your favorite programming language that on input of an integer $n$, outputs a NAND-CIRC program that computes $MULT_n$ and has at most $10000n^{1.9}$ lines. What is the smallest number of lines you can use to multiply two 2048 bit numbers?
Exercise 4.9 — Multibit function. In the text Theorem 4.12 is only proven for the case $m = 1$. In this exercise you will extend the proof for every $m$.
Prove that
1. If there is an $s$-line NAND-CIRC program to compute $f : \{0, 1\}^n \rightarrow \{0, 1\}$ and an $s'$-line NAND-CIRC program to compute $f' : \{0, 1\}^n \rightarrow \{0, 1\}$ then there is an $s + s'$-line program to compute the function $g : \{0, 1\}^n \rightarrow \{0, 1\}^2$ such that $g(x) = (f(x), f'(x))$.
2. For every function $f : \{0, 1\}^n \rightarrow \{0, 1\}^m$, there is a NAND-CIRC program of at most $10m \cdot 2^n$ lines that computes $f$. (You can use the $m = 1$ case of Theorem 4.12, as well as Item 1.)
Exercise 4.10 — Simplifying using syntactic sugar. Let $P$ be the following NAND-CIRC program:
\begin{verbatim}
Temp[0] = NAND(X[0],X[0])
Temp[1] = NAND(X[1],X[1])
Temp[2] = NAND(Temp[0],Temp[1])
Temp[3] = NAND(X[2],X[2])
Temp[4] = NAND(X[3],X[3])
Temp[5] = NAND(Temp[3],Temp[4])
Temp[6] = NAND(Temp[2],Temp[2])
Temp[7] = NAND(Temp[5],Temp[5])
Y[0] = NAND(Temp[6],Temp[7])
\end{verbatim}
1. Write a program $P'$ with at most three lines of code that uses both NAND as well as the syntactic sugar OR that computes the same function as $P$.
2. Draw a circuit that computes the same function as $P$ and uses only AND and NOT gates.
In the following exercises you are asked to compare the power of pairs of programming languages. By “comparing the power” of two programming languages $X$ and $Y$ we mean determining the relation between the set of functions that are computable using programs in $X$ and $Y$ respectively. That is, to answer such a question you need to do both of the following:
1. Either prove that for every program $P$ in $X$ there is a program $P'$ in $Y$ that computes the same function as $P$, or give an example for a function that is computable by an $X$-program but not computable by a $Y$-program.
and
2. Either prove that for every program $P$ in $Y$ there is a program $P'$ in $X$ that computes the same function as $P$, or give an example for a function that is computable by a $Y$-program but not computable by an $X$-program.
When you give an example as above of a function that is computable in one programming language but not the other, you need to prove that the function you showed is (1) computable in the first programming language and (2) not computable in the second programming language.
**Exercise 4.11 — Compare IF and NAND.** Let IF-CIRC be the programming language where we have the following operations $\text{foo} = 0$, $\text{foo} = 1$, $\text{foo} = \text{IF}(\text{cond}, \text{yes}, \text{no})$ (that is, we can use the constants 0 and 1, and the $\text{IF} : \{0, 1\}^3 \to \{0, 1\}$ function such that $\text{IF}(a, b, c)$ equals $b$ if $a = 1$ and equals $c$ if $a = 0$). Compare the power of the NAND-CIRC programming language and the IF-CIRC programming language.
**Exercise 4.12 — Compare XOR and NAND.** Let XOR-CIRC be the programming language where we have the following operations $\text{foo} = \text{XOR}(\text{bar}, \text{blah})$, $\text{foo} = 1$ and $\text{bar} = 0$ (that is, we can use the constants 0, 1 and the XOR function that maps $a, b \in \{0, 1\}$ to $a + b \mod 2$). Compare the power of the NAND-CIRC programming language and the XOR-CIRC programming language. See footnote for hint.
**Exercise 4.13 — Circuits for majority.** Prove that there is some constant $c$ such that for every $n > 1$, $\text{MAJ}_n \in \text{SIZE}_n(cn)$ where $\text{MAJ}_n : \{0, 1\}^n \rightarrow \{0, 1\}$ is the majority function on $n$ input bits. That is $\text{MAJ}_n(x) = 1$ iff $\sum_{i=0}^{n-1} x_i > n/2$. See footnote for hint.\(^8\)
---
**Exercise 4.14 — Circuits for threshold.** Prove that there is some constant $c$ such that for every $n > 1$, and integers $a_0, ..., a_{n-1}, b \in \{-2^n, -2^n + 1, ..., -1, 0, +1, ..., 2^n\}$, there is a NAND circuit with at most $n^c$ gates that computes the threshold function $f_{a_0, ..., a_{n-1}, b} : \{0, 1\}^n \rightarrow \{0, 1\}$ that on input $x \in \{0, 1\}^n$ outputs 1 if and only if $\sum_{i=0}^{n-1} a_i x_i > b$.
---
### 4.8 BIBLIOGRAPHICAL NOTES
See Jukna’s and Wegener’s books [Juk12; Weg87] for much more extensive discussion on circuits. Shannon showed that every Boolean function can be computed by a circuit of exponential size [Sha38]. The improved bound of $c \cdot 2^n / n$ (with the optimal value of $c$ for many bases) is due to Lupanov [Lup58]. An exposition of this for the case of NAND (where $c = 1$) is given in Chapter 4 of his book [Lup84]. (Thanks to Sasha Golovnev for tracking down this reference!)
The concept of “syntactic sugar” is also known as “macros” or “meta-programming” and is sometimes implemented via a preprocessor or macro language in a programming language or a text editor. One modern example is the Babel JavaScript syntax transformer, that converts JavaScript programs written using the latest features into a format that older Browsers can accept. It even has a plug-in architecture, that allows users to add their own syntactic sugar to the language.
---
\(^8\) One approach to solve this is using recursion and the so-called Master Theorem.
---
Static Detection of Dynamic Memory Errors
David Evans
evs@larch.lcs.mit.edu
MIT Laboratory for Computer Science
Abstract
Many important classes of bugs result from invalid assumptions about the results of functions and the values of parameters and global variables. Using traditional methods, these bugs cannot be detected efficiently at compile-time, since detailed cross-procedural analyses would be required to determine the relevant assumptions. In this work, we introduce annotations to make certain assumptions explicit at interface points. An efficient static checking tool that exploits these annotations can detect a broad class of errors including misuses of null pointers, uses of dead storage, memory leaks, and dangerous aliasing. This technique has been used successfully to fix memory management problems in a large program.
1 Introduction
The LCLint checking tool [4, 2] has been used effectively in both industry and academia to detect errors in programs, facilitate enhancements to legacy code, and support a programming methodology based on abstract types and explicit interfaces in C. In this work, we extend LCLint to detect a broad class of important errors including misuses of null pointers, failures to allocate or deallocate memory, uses of undefined or deallocated storage, and dangerous or unexpected aliasing. These errors are particularly difficult to detect and correct through testing, since their symptoms are often platform dependent and may be far-removed from the actual problem. Since these errors typically involve violations of non-local constraints, they cannot be detected efficiently at compile-time by traditional methods.
Consider the sample code fragment in Figure 1. The function setName assigns the formal parameter pname to the global variable gname. This code may be a correct implementation of some function, but it depends on many assumptions not apparent from the implementation:
- before the call, gname must not be the sole reference to allocated storage. Otherwise, the assignment statement on line 4 loses the last reference to this storage and it can never be deallocated.
- after the call, the actual parameter and the global gname are aliased. The caller must not deallocate the storage pointed to by the parameter if any code executed later depends on gname (and vice versa).
- after the call, gname may not be dereferenced if the parameter was a null pointer. Further, gname may not be dereferenced as an rvalue if the parameter did not point to defined storage.
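Figure 1 itself is not reproduced in this text; a minimal sketch consistent with the description above (using the global and parameter names given in the text) is:

```c
#include <assert.h>

char *gname;   /* the global variable from the paper's example */

/* Figure 1 (as described in the text): assign the parameter to the
   global. The assignment on line 4 of the fragment silently drops any
   previous value of gname and makes gname an alias of the caller's
   argument -- the source of all three assumptions listed above. */
void setName (char *pname)
{
   gname = pname;
}
```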
As is, we cannot determine if a call to setName will cause the program to crash or leak memory without careful analysis of the entire program. This analysis would be infeasible for all but the most trivial programs.
To enable local reasoning, we need more information about the code. We extend the LCL interface specification language [5, 9] to provide ways of expressing assumptions about memory allocation, initialization and sharing, and introduce annotations to make it convenient to express these assumptions using qualifiers on declarations in C programs.
There have been many academic and commercial projects aimed at producing tools that detect these kinds of errors at run-time (dmalloc [10], mprof [11], and Purify [Pure, Inc.]). These tools can be effective in localizing the symptom of a bug — where a null pointer is dereferenced or where leaking memory is being allocated. In some cases, this is enough to discover the actual bug in the code. In others, however, it may only be the beginning of the search. Run-time checking also suffers from the flaw that its effectiveness depends entirely on running the right test cases to reveal the problems. This is especially problematic since these tools are expensive and intrusive enough that they are often not used when the code is run in production.
In our work, annotations are used to make assumptions about function interfaces, variables and types explicit. Constraints necessary to satisfy these assumptions are checked at compile-time. Places where the constraints are violated are anomalies in the code, which typically indicate bugs in the program or undocumented or incorrect assumptions. Section 2 describes how checking works at a high level, and Section 5 describes the analysis in more detail. Section 3 describes the storage model and what kinds of uses of storage are irregular. Section 4 describes some of the annotations that can be added to programs to make certain assumptions explicit, and the checking associated with each annotation. Section 6 illustrates the process of adding annotations and detecting errors using a small example program. Section 7 relates experience using this approach to fix memory management problems and replace garbage collection with explicit deallocation in a large program.
2 Analysis Overview
Since LCLint is run frequently and on large programs, it is essential that the checking be efficient and scale approximately linearly with the size of the program. Hence, full interprocedural analysis is too expensive to be practical. Instead, each procedure is checked independently, but using more detailed interface information than is normally available. This information may include constraints on the aliases that may be introduced by a called function, constraints on how storage for a parameter or global variable must be defined before a call and how it will be defined after a call, whether parameters and return values may be null or may share storage with other references, and other constraints on what may be modified or used by a called function and how the result of a function call relates to the values of its parameters. This information is available from annotations added to the program.
When a function body is checked, annotations on its parameters and the global variables it uses are assumed to be true when the function is entered. The function body is checked using these assumptions. At all return points, the function must satisfy the constraints implied by the annotations on its return value, parameters, and the global variables it uses.
When a function call site is encountered, LCLint checks that the arguments and global variables used by the function satisfy the assumptions made by the implementation of the called function. The result of the function and the states of parameters and global variables after the call are assumed to satisfy the constraints implied by the function declaration.
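As a concrete illustration of call-site checking (the function names here are invented for the example, not taken from the paper), a caller must satisfy the declared assumptions of the callee before it may use the result:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical callee: its declaration promises only a possibly-null
   result, so callers must check before dereferencing. */
static /*@null@*/ const char *lookup (int key)
{
   return (key == 0) ? "zero" : NULL;
}

/* At this call site, the possibly-null result is checked before it is
   dereferenced; checking in the style described above would flag any
   path that reaches s[0] without the NULL test. */
static int firstChar (int key)
{
   const char *s = lookup (key);
   if (s == NULL) return -1;
   return s[0];
}
```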
By exploiting extra interface information in checking, a wide range of errors can be detected through fairly simple procedural analyses. Dataflow values keep track of extra information for variables, as well as references derived from variables (e.g., a field in a structure pointed to by a variable) when appropriate. This information includes whether or not the reference is defined or may be null, what other storage it may alias or be aliased by, and what other references might share its storage. This information may be different on different program paths. Rules are used to combine values at confluence points. In cases where values cannot be combined sensibly, an anomaly is reported.
Certain simplifying assumptions are used to make compile-time analysis feasible and efficient. The key assumptions are: any predicate expression may be true or false, the effects of any while or for loop are identical to those for executing the loop zero or one times, compile-time unknown index values are treated as either the same element of the array or independent elements (depending on an LCLint flag that may be set locally).
LCLint may produce messages for correct code (e.g., a use-before-definition error in a branch that would only be taken if an earlier branch initialized the variable). The alternative would be not reporting many anomalies that are likely errors. Since spurious messages can be suppressed locally by placing stylized comments around the code that produces the message, this unsoundness has rarely been a serious problem in practice.
LCLint may also fail to produce messages for certain kinds of incorrect code in some contexts. For example, if an alias is not detected because it would be produced only after the second iteration of a loop, LCLint will fail to detect an error involving the use of released storage that is only apparent if the alias is detected. It is harder to estimate the costs of undetected errors, since there is no way of knowing how many undetected errors remain.
Since our goal is to detect as many real bugs as possible efficiently and with no programmer interaction, we are willing to accept an analysis that is neither sound nor complete. Instead of using worst-case assumptions, LCLint uses approximations that follow from likely-case assumptions. Clearly, this would be unacceptable in a compiler optimizer or a theorem prover. However, for a static checking tool it allows many more ambitious checks to be done and more errors to be detected with only the occasionally annoying spurious message.
3 Storage Model
This section describes execution-time concepts for describing the state of storage. Some of these concepts correspond to analysis properties used by LCLint. Certain uses of storage are likely to indicate program bugs, and are reported as anomalies.
LCL assumes a CLU-like object storage model. An object is a typed region of storage. Some objects use a fixed amount of storage that is allocated and deallocated automatically by the compiler. Other objects use dynamic storage that must be managed by the program.
Storage is undefined if it has not been assigned a value, and defined after it has been assigned a value. An object is completely defined if all storage that may be reached from it is defined. What storage is reachable from an object depends on the type and value of the object. For example, if o is a pointer to a structure, o is completely defined if the value of o is NULL, or if every field of the structure o points to is completely defined.
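These storage states can be seen in a small sketch (struct and function names invented for illustration):

```c
#include <assert.h>

struct point { int x; int y; };

/* Inside makePoint, p starts out as undefined storage. Each assignment
   uses a field as an lvalue, which is legal on undefined storage; the
   struct is completely defined only once both fields are assigned. */
struct point makePoint (int x, int y)
{
   struct point p;   /* p.x and p.y: undefined storage */
   p.x = x;          /* lvalue use: allowed even while undefined */
   p.y = y;          /* now p is completely defined */
   return p;
}
```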
When an expression is used as the left side of an assignment expression we say it is used as an lvalue. Its location in memory is used, but not its value. Undefined storage may be used as an lvalue since only its location is needed. When storage is used in any other way, such as on the right side of an assignment, as an operand to a primitive operator (including the indirection operator, *), or as a function parameter, we say it is used as an rvalue. It is an anomaly to use undefined storage as an rvalue.
A pointer is a typed memory address. A pointer is either live or dead. A live pointer is either NULL or an address within allocated storage. A pointer that points to an object is an object pointer. A pointer that points inside an object (e.g., to the third element of an allocated block) is an offset pointer. A pointer that points to allocated storage that is not defined is an allocated pointer. The result of dereferencing an allocated pointer is undefined storage. Hence, it is an anomaly to use it as an rvalue. A dead (or "dangling") pointer does not point to allocated storage. A pointer becomes dead if the storage it points to is deallocated (e.g., the pointer is passed to the free library function.) It is an anomaly to use a dead pointer as an rvalue.
There is a special object NULL corresponding to the NULL pointer in a C program. A pointer that may have the value NULL is a possibly-null pointer. It is an anomaly to use a possibly-null pointer where a non-null pointer is expected (e.g., certain function arguments or the indirection operator).

---

1 This is similar to the LISP storage model, except that objects are typed.

2 Except sizeof, which does not need the value of its argument.
To allow descriptions of memory constraints, we view each object as having an associated owners set. The owners set indicates which external references may legitimately refer to an object. A reference is a variable or a location derived from a variable (e.g., a field of a structure). Different references may share the same storage. For example, if s and t are char pointers, and s is assigned to t, then the references *s and *t are different ways of referring to the same storage. The owners set for the storage *s includes both *s and *t. In a function implementation, an external reference is any reference that is visible in the environment of the caller (i.e., a reference to any storage that can be reached from the parameters, global variables, or return value).
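The char-pointer sharing mentioned above can be demonstrated directly (function name invented for illustration):

```c
#include <assert.h>

/* After t = s, the references *s and *t denote the same storage, so
   the owners set of that storage contains both. A write through one
   alias is visible through the other. */
char writeThroughAlias (char *buf)
{
   char *s = buf;
   char *t = s;      /* *s and *t now share storage */
   t[0] = 'x';
   return s[0];      /* sees the write made through t */
}
```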
The size of the owners set is less than or equal to the traditional reference count since it includes only external references and references that it is valid to dereference (constraints on memory usage may make it invalid to dereference some references, such as those that have been deallocated). It is an anomaly if the owners set for an explicitly allocated object is empty, since this means there are no valid references and the storage associated with the object cannot be released.
Failures to free storage are relevant only when memory is explicitly deallocated by the programmer, and could be avoided by using a garbage collector [1]. If LCLint is used to check programs designed for use with a garbage collector, flags can be used to adjust checking so only those errors relevant in a garbage-collected environment are reported.
4 Annotations
Annotations provide a convenient way of expressing interface assumptions. Although many of the same assumptions are expressible in LCL function specifications, annotations are easier to write and have the important advantage that they can be used to determine appropriate static checking in a straightforward way. We can use annotations in LCL specifications, or directly in the source code as syntactic comments /*@annotation@*/. For example, null in an LCL specification or /*@null*/ in a C source file may be used in a variable declaration to indicate the variable is a possibly-null pointer (i.e., it may have the value NULL).
Annotations may be used in a type declaration to constrain all instances of a type, in function parameter or return value declarations to constrain the value and type of parameters and results, and in global and static variable declarations to constrain the value and use of the variable.
Annotations are syntactically similar to C type qualifiers. More than one annotation may be used with a given declaration, although certain combinations of annotations are incompatible and will produce static errors. An annotation applies only to the outer level of a declaration (e.g., null char **name means that name, of type char **, is a possibly-null pointer, but the char * referenced by *name is unqualified.) A type definition can be used to apply annotations to non-outer level declarations.
The idea of keeping additional state information on variables is similar to that used by the NIL compiler. The NIL compiler [8] extends type checking to also check typestates. Each type has a set of typestates defined by the programming language that can be determined by the compiler at any point in the code. An object can be in only one typestate at a given point in the code, but may change typestates during execution. A subset of all operations of a type are permitted on an object in a particular typestate and operations may be declared to change the typestate of an object. The NIL compiler detects execution sequences that violate typestate constraints at compile time. Some of the memory annotations used by LCLint could be emulated using typestates.
Annotations used by LCLint are simple since our main focus is detecting errors at interface points. ADDS [6] presents an approach for dealing with recursive data structures by constraining possible aliasing relationships within datatypes. Better checking of internal aliasing would improve LCLint checking, but since our focus here is on detecting errors at interface boundaries, the annotations we use are sufficient to detect a wide range of errors.
The remainder of this section describes some of the annotations and associated checking done by LCLint. A complete list of the annotations related to memory checking is found in Appendix B.
Null Pointers
A common cause of program failures is when a null pointer is dereferenced. LCLint detects these errors by distinguishing possibly-null pointers at interface boundaries, and checking that a possibly-null pointer is not dereferenced or used where a non-null pointer is required.
In Figure 2, the null annotation is used to indicate that a possibly-null pointer may be passed as the parameter pname. LCLint will report an error if there is a path leading to a dereference of the pointer along which there is no check to ensure the pointer is not null. Code can check that a possibly-null pointer is not null by using a simple comparison (e.g., x != NULL) or a function call. To indicate that a function returns true when its argument is null the truenull annotation is used on the return value; falsenull is used to indicate that a function returns true only if the argument is not null.
Running LCLint on the version of sample.c in Figure 2 produces the message:
```
sample.c:6: Function returns with non-null global gname referencing null storage
sample.c:5: Storage gname may become null
```
The error is reported at the exit point. It would not be an anomaly to assign gname to NULL in the body of setName, as long as it is re-assigned to a non-null value before the function returns or another function using the global gname is called.
The error can be fixed by removing the null annotation on the parameter (which would produce messages elsewhere if setName is called with a possibly null value) or adding a null annotation to the declaration of gname (which would produce messages if gname is dereferenced without first checking it is not null). Another fix is shown in Figure 3. Here, a truenull function is called to test
```c
1 extern char *gname;
2
3 void setName (/*@null@*/ char *pname)
4 {
5    gname = pname;
6 }
```
Figure 2: sample.c with null annotation.
```c
extern char *gname;
extern /*@truenull@*/ bool isNull (/*@null@*/ char *x);

void setName (/*@null@*/ char *pname)
{
   if (!isNull (pname)) { gname = pname; }
}
```
Figure 3: Fixing sample.c by calling a truenull function.
whether pname is null, and the assignment is only done for non-null values.
A variable of a pointer type with no annotation is interpreted as non-null, unless the type was declared using null. In these cases, the type’s null annotation may be overridden for specific declarations of the type using the notnull annotation. This is particularly useful for parameters to hidden (static) operations of abstract types where the null test has already been done before the function is called, and for return values that are never null.
An additional annotation, relnull may be used to relax null checking. A relnull pointer is assumed to be non-null when it is used, but no error is reported if a possibly null value is assigned to it. This is generally used for structure fields that may or may not be null depending on some other constraint. It is up to the programmer to ensure that this constraint is satisfied before the pointer is dereferenced.
Definition
There is an implicit constraint that all function parameters and global variables used by a function are completely defined before a call, and that the return value is completely defined after the call. For example, LCLint will report an error if a pointer actual parameter is allocated but the storage it points to is not defined, or if a field in a structure pointed to by the return value is not defined. Function implementations are checked assuming all parameters and global variables are completely defined at entry to the function.
Occasionally, it is desirable to have parameters or return values that reference undefined or partially defined storage. For example, a pointer may be passed as an argument that is intended as an address to store a result, or a memory allocator may return allocated but undefined storage. The out qualifier can be used to denote storage that may not be completely defined.
An actual parameter that corresponds to a formal parameter with an out annotation must be defined but need not be completely defined. That is, the actual parameter is used as an rvalue so it must be defined, but storage reachable from the actual parameter need not be defined. LCLint does not report an error when allocated storage is passed as an out parameter. After the call, storage that was passed as an out parameter is assumed to be completely defined.
Within the implementation of a function, LCLint will assume that an out formal parameter is allocated but that storage reachable from the parameter is undefined. Hence, an error is reported if storage derived from it is used as an rvalue before it is defined. An error is reported if the implementation does not define all storage reachable from an out parameter before returning.
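A small sketch of an out parameter (function name invented): the caller may pass allocated but undefined storage, and the implementation must define everything reachable from the parameters on every return path:

```c
#include <assert.h>

/* The out annotations say the storage pointed to by xp and yp need not
   be defined before the call; an error would be reported if any return
   path left *xp or *yp undefined. */
void getOrigin (/*@out@*/ int *xp, /*@out@*/ int *yp)
{
   *xp = 0;
   *yp = 0;
}
```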
An analogous annotation, undef, may be used on a global variable in the globals list for a function to indicate that the global variable may be undefined when the function is called.
The partial qualifier can be used to relax checking of structure fields. A structure qualified with partial may have undefined fields. LCLint reports no errors when these fields are used. Similar to relnull, the reldef qualifier is provided to relax definition checking, and is sometimes useful in field declarations.
Allocation
There are two kinds of deallocation errors with which we are concerned: deallocating storage when there are other live references to the same storage, or failing to deallocate storage before the last reference to it is lost. To handle these deallocation errors, we introduce a concept of an obligation to release storage. Every time storage is allocated, it creates an obligation to release the storage. This obligation is attached to the reference to which the storage is assigned. Before the scope of the reference is exited or it is assigned to a new value, the storage to which it points must be released. Annotations can be used to indicate that this obligation is transferred through a return value, function parameter or assignment to an external reference.
The only annotation is used to indicate that a reference is the only pointer to the object it points to. We can view the reference as having an obligation to release this storage. This obligation is satisfied by transferring it to some other reference in one of three ways:
1. pass it as an actual parameter corresponding to a formal parameter declared with an only annotation
2. assign it to an external reference declared with an only annotation
3. return it as a result declared with an only annotation
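The third transfer mode can be sketched with a malloc-style helper (the function name is invented for illustration):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* The only annotation on the result transfers the obligation to
   release the storage to the caller, who must eventually free it or
   pass it along to another only reference. */
static /*@only@*/ char *copyString (const char *s)
{
   char *r = (char *) malloc (strlen (s) + 1);
   if (r == NULL) exit (EXIT_FAILURE);
   strcpy (r, s);
   return r;
}
```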
After the release obligation is transferred, the original reference is a dead pointer and the storage it points to may not be used. All obligations to release storage stem from allocation routines (e.g., malloc), and are ultimately satisfied by calls to deallocation routines (e.g., free). The standard library provides some allocation and deallocation routines. The basic allocator, malloc, is specified as,
```
null out only void *malloc (size_t size);
```
It returns a possibly-null pointer (it returns NULL when the requested memory cannot be allocated) that is not completely defined and is not referenced by any reference other than the function return value. The deallocator, free, is specified as
```
void free (null out only void *ptr);
```
The argument to free is a possibly-null, not necessarily completely defined, pointer to unshared storage. Since the parameter is declared using only, the caller may not use the referenced object after the call, and may not pass in a reference to a shared object. There is nothing special about malloc and free — their behavior can be described entirely in terms of the provided annotations.
Other annotations can be used to express different assumptions about memory management. The temp annotation is used on a formal parameter to indicate that the called function may not deallocate the storage the parameter refers to or create new external references to this storage. At a call site where a reference is passed as a temp parameter, the aliases to the storage it references are the same before and after the call.
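A temp parameter can be sketched as follows (function name invented): the callee may read the storage during the call, but may not release it or create a new external alias, so the caller's aliases are unchanged afterwards:

```c
#include <assert.h>
#include <string.h>

/* temp: name may be used during the call, but not freed or captured
   in storage visible after the call returns. */
static size_t nameLength (/*@temp@*/ const char *name)
{
   return strlen (name);
}
```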
---

4 The ANSI Standard allows a null pointer to be passed to free. Many older C implementations do not support this, so it may be desirable to use an alternative specification with no null annotation.

5 To check that allocated objects are completely destroyed (e.g., all unshared objects inside a structure are deallocated before the structure is deallocated), LCLint checks that any parameter passed as an out only void * does not contain references to live, unshared objects. This makes sense, since such a parameter could not be used sensibly in any way other than deallocating its storage.
5 Analysis
The annotations and type definitions determine the initial dataflow values of variables and constrain the acceptable values for parameters, global variables, and function results at interface points. Three values are associated with each reference: the definition state (defined, partially defined, allocated, etc.), the null state (definitely null, possibly null, not null, etc.), and the “allocation” state (corresponding to the allocation annotation, e.g., only, temp). These values may change when assignments or function calls occur in the program. An anomaly is reported if values are inconsistent at an interface point.
Figure 5 shows a buggy program to add a node at the end of a linked list. There are two problems: the case where the parameter l is null is not handled correctly and the next field of the new node allocated on line 21 is never defined. Figure 6 shows the control flow graph that corresponds to list_addh. The circled numbers are used to refer to execution points.
Point 1 is the function entry point. Here, the dataflow values are set according to the annotations and type definitions. For parameter l, the type definition for list has a null annotation so its null state is possibly-null. It has no definition annotation, so it is completely-defined. Because of the temp annotation, its allocation state is temp. Similarly, the parameter e is characterized as completely-defined, not-null, and only.
Since the function parameter may be assigned to a new value in the function implementation, we need a way of distinguishing a reference that corresponds to the actual parameter from the parameter inside the function body. We introduce a local variable to represent the parameter in the function body. In this discussion, we use l to refer to the local variable and arg1 to refer to the externally visible parameter. At the function entrance, l aliases arg1.
At point 2, the null state of l is not null. Because of the if statement in line 14, we know at compile-time that l is non-null if point 2 is reached. Conversely, at point 3 we know that l is null.
The while loop is treated identically to an if statement — there is no back edge to represent normal loop execution. This means the analysis can be done efficiently without any need to do iteration. This results in a less accurate approximation for the actual program execution than would be achieved using an iterative dataflow analysis, but it is good enough for the kinds of analyses we do here.
The body of the while loop assigns l->next to l. At point 6, l may alias arg1->next. At point 7, the branches merge. The only difference is that on the true branch l aliases arg1->next and on the false branch l aliases arg1. The set of possible aliases at a confluence point is the union of the possible aliases on each branch. So, at point 7, l may alias arg1 or arg1->next. In reality, l may alias \( \texttt{arg1->next}^{i} \) for any \( i \geq 0 \) (i.e., the loop may be executed any number of times). Since LCLint does not model executions over the loop back edge, the only aliases of l that are detected are arg1 and arg1->next.
At line 21, the result of a call to smalloc is assigned to l->next. The return value of smalloc is annotated out and only, so after the assignment (point 8) l->next is characterized as allocated, non-null, and only. Since l->next may alias arg1->next (and arg1->next->next), the state of arg1->next is also allocated, non-null, and only.
The change in definition state propagates to its base reference, l (and arg1, because of aliasing). Before the assignment, l was completely defined. Now, we have assigned storage derivable from l to a value that is incompletely defined, so l is now characterized as partially-defined.
Line 23 assigns e to l->next->this. Before the assignment, e is defined, not-null, and only. The assignment transfers the obligation to release storage, since the this field of the list type is annotated with only. So, the allocation state of e becomes kept. This means its obligation to release storage has been satisfied, but it can still be safely used. (If it had been passed as an only parameter instead, its allocation state would become dead to indicate that it may not be used.) Since e aliases arg2, the allocation state of arg2 is also set to kept, and the obligation to release storage implied by the only annotation on the parameter e has been satisfied on this path. After the assignment in line 23, l->next->this is defined. As before, this definition propagates to its base storage, and l->next and l (which is already partially-defined) are marked partially-defined.
At point 10, the two branches merge. On the true branch, the allocation state of e is kept. On the false branch, it is only. This is a confluence error since there is no sensible way to combine the allocation states — one means the storage must be released, and the other means it must not be released. LCLint reports this as a program anomaly. To prevent further errors, the allocation state of e is set to a special error marker.
Also at point 10, we need to merge the dataflow values associated with l and arg1. On the true branch from point 9, l and l->next are partially-defined, l->next->this is defined, and l->next->next is undefined. On the false branch, l is completely defined. Definition states are combined using the weakest assumption. Hence, at point 10, l and l->next are partially-defined, and l->next->next is undefined. The definition states for arg1 and its derived storage are handled similarly.
Point 11 is the function exit. LCLint checks that the function implementation satisfies the external constraints. One implicit constraint is that arg1 must be completely defined when the call returns. Since the definition state of arg1 is partially-defined, LCLint checks that all storage derivable from arg1 is defined. Since arg1->next->next is undefined, LCLint produces an error reporting an incomplete definition anomaly.
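Figures 5 and 6 are not reproduced in this text, so the declarations below are a guess at the shape they describe (names approximate). A corrected sketch of list_addh fixes both anomalies the analysis reports: the null-list case is handled, and the new node's next field is defined before the function returns:

```c
#include <assert.h>
#include <stdlib.h>

/* Approximate declarations -- the exact types from Figure 5 are not
   shown in the text. */
typedef struct _node {
   int this;
   /*@null@*/ struct _node *next;
} node;
typedef /*@null@*/ node *list;

list list_addh (list l, int e)
{
   node *n = (node *) malloc (sizeof (*n));
   if (n == NULL) exit (EXIT_FAILURE);
   n->this = e;
   n->next = NULL;            /* fix 2: define all storage reachable from n */
   if (l == NULL) return n;   /* fix 1: handle the null list */
   node *p = l;
   while (p->next != NULL) p = p->next;
   p->next = n;
   return l;
}
```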
### 6 Example
This section demonstrates how annotations can be added to an existing program, thereby improving its documentation and maintainability, and detecting errors in the process. For this example, we use the toy employee database program (1000 lines of source code and 300 lines of interface specifications) described in [5]. In [2], we described how LCLint without dynamic memory checking was used on the original database program. Here, we start with the database program after correcting the errors described there. (For information on obtaining the complete code used in this example, see Appendix A.)
We start with a program with no annotations. LCLint’s interpretations of declarations with no annotations are chosen to make it possible to begin finding errors in an existing program without having to spend a lot of time adding annotations or being overwhelmed by messages. The default interpretations can be controlled by flags, to better suit a particular program.
The interpretation of a declaration with no null pointer or definition annotation is chosen so that the interpretations when annotations are missing place the strictest constraints on actual parameters and return values — they may not be null, and must be completely defined. LCLint checking will alert the programmer to places where this is not the case. These may be errors in the code or places where a null or out annotation should be added.
An unqualified formal parameter is assumed to be temp storage. This places the weakest constraints on actual arguments, but constrains how the parameter may be used in the function implementation.
typedef struct _elem {
eref val; struct _elem *next;
} *ercElem;
typedef struct {
ercElem *vals; int size;
} *erc;
erc erc_create (void) {
erc c = (erc) malloc (sizeof (*c));
if (c == NULL) {
error ("malloc returned null");
exit (EXIT_FAILURE);
}
c->vals = NULL;
c->size = 0;
return c;
}
Figure 7: erc_create from erc.c
Implicit only annotations can also be applied to return values, structure fields, and global variables. For this example, we have not used any of the implicit only annotations, so we will see how the checking leads us to make these annotations explicit.
Adding annotations is an iterative process. With each iteration, LCLint detects some anomalies, annotations are added or discovered bugs are fixed, and LCLint is run again to propagate the new annotations up the call chain. The rest of this section will show how different types of checking lead us to add annotations and make changes to the code. Only a few annotations are necessary to get useful checking, to detect a few real problems in the code, and to enhance the interface documentation.
Null Pointers
One anomaly involving null pointers is reported for the function erc_create (shown in Figure 7):
erc.c:26: Null storage c->vals derivable from return value: c
The vals field of c was assigned to NULL on line 24. In this case, the code is correct and the reported anomaly suggests that a null annotation is needed on the vals field in the type definition for erc:
typedef struct {
/*@null*/ ercElem *vals; int size;
} *erc;
Running LCLint after this change detects three new anomalies. One is in the macro definition of erc_choose for the parameter c of type erc:
erc.h:14: Arrow access from possibly null pointer c->vals:
(c->vals)->val
Since we have added the null annotation to the vals field of erc, c->vals may be a null pointer. So, LCLint detects an anomaly when it is dereferenced by the arrow operator. The specification for erc_choose includes a requires clause\(^6\) constraining the size of the collection to be greater than 0. From this it follows that the value of c->vals is not null. An assertion is added to the code to check that c->vals is not null.
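A minimal sketch of the added assertion (the type and function body are reconstructed for illustration, not taken verbatim from the example program):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified reconstruction of the types (not the program's exact
 * declarations). */
typedef struct elem { int val; struct elem *next; } *ercElem;
typedef struct { /*@null*/ ercElem vals; int size; } erc_s;

/* The requires clause (size > 0) implies vals is non-null; the added
 * assertion turns that external constraint into a checked one. */
int erc_choose(erc_s *c) {
    assert(c->vals != NULL);   /* defensive check suggested by the anomaly */
    return c->vals->val;
}
```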
The other two anomalies involve similar problems in other functions. While none of these indicate a bug in the code because of the requires clauses, they do draw our attention to places where there are dependencies on external constraints and the added assertions may be helpful in debugging clients that do not satisfy the requires clauses. The checking has directed us to places where adding assertion checks would be good defensive programming practice. Further, the null annotation on the vals field of the type definition serves as useful documentation.
Allocation
Next, we look for errors involving deallocation. We are starting with a program with no allocation annotations, but using a standard library with annotated versions of malloc and free. For expository purposes, we run LCLint with a command line flag (-allimply) that turns off the implicit only annotations on return values, global variables, and structure fields. Hence, LCLint will produce a message everywhere newly allocated storage is returned or external storage is deallocated. (It would be impractical to check a real program without using implicit annotations.) Seven anomalies are detected by LCLint, all resulting from missing only annotations.
Two messages concern the return statements in erc_create and erc_print. Both functions return a pointer that was the result of a call to malloc. Since the function result has no only annotation, the obligation to release this storage is not transferred to the caller and a memory leak is suspected. Hence, only annotations are added to the function return value declarations.
Four messages concern assignment of allocated storage to fields of a static variable (eref.pool in eref.c). These are fixed by adding only annotations to two fields of the type declaration.
The remaining message concerns the call to free in erc_final:
erc.c:49: Implicitly temp storage c passed as only param: free (c)
Since c is an external parameter with no only qualifier, an anomaly is detected when it is passed to free since it matches a formal parameter declared with an only annotation. The only annotation needs to be added to the parameter declaration for erc_final.
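The resulting annotations can be sketched as follows (simplified, hypothetical declarations; the real erc type and functions are in the example program):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { int size; } *erc;   /* simplified stand-in type */

/* only on the return value: the caller receives the obligation to
 * release the result, so returning freshly malloced storage is not
 * reported as a leak. */
/*@only*/ erc erc_create(void) {
    erc c = malloc(sizeof *c);
    assert(c != NULL);
    c->size = 0;
    return c;
}

/* only on the parameter: erc_final consumes its argument, so passing
 * it on to free (whose parameter is also only) is legal. */
void erc_final(/*@only*/ erc c) {
    free(c);
}
```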
After the changes, LCLint detects six new anomalies. They result from the only annotations that were added to erc propagating to calling functions. They are similar to those we have already seen and can be fixed by adding only annotations to function declarations.
As before, the new annotations propagate up the call chain to produce more messages. Six memory leaks are detected in the test driver code where variables referencing allocated storage are assigned to new values before the old storage is released. After these are fixed by adding calls to free, no allocation anomalies are detected by LCLint. If we had not used the flag to disable the implicit annotations, these six errors would have been found directly. The only annotations that would be needed are the annotations on the parameters.
Aliasing
One aliasing anomaly is reported in employee_setName (shown in Figure 8):
employee.c:13: Parameter 1 (e->name) to function strcpy is declared unique but may be aliased externally by parameter 2 (s)
4 bool
5 employee_setName (employee *e, char *s)
6 {
7 ... (checks size of s)
8 strcpy (e->name, s);
9 return TRUE;
10 }
Figure 8: employee_setName from employee.c
The specification of strcpy in the standard library is:
```
char *strcpy
(out returned unique char *s1, char *s2);
```
The unique qualifier indicates that s1 must refer to storage that is not shared by any other parameter or accessible global (in this case, the parameter s2). This is necessary since the behavior of strcpy is undefined if the arguments share storage space. Since the arguments to employee_setName are not qualified, it is possible that e->name and s refer to the same storage. We add a unique qualifier to the parameter declaration for s to document that the parameter must not reference any external storage reachable from this function. Since there are no global variables, this means the parameters e and s must not share any storage. Now, if a client calls employee_setName with dependent parameters, LCLint will report an anomaly.
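The contract behind unique can also be checked defensively at run time. The following wrapper is hypothetical (not part of LCLint or the example program); the overlap test casts to uintptr_t because ordered comparison of pointers into unrelated objects is not portable C:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative overlap test: returns nonzero if the two byte ranges
 * share any storage. */
static int regions_overlap(const char *a, size_t alen,
                           const char *b, size_t blen) {
    uintptr_t ai = (uintptr_t)a, bi = (uintptr_t)b;
    return ai < bi + blen && bi < ai + alen;
}

/* Hypothetical wrapper enforcing strcpy's unique contract at run time. */
char *checked_strcpy(char *dst, const char *src) {
    size_t n = strlen(src) + 1;
    assert(!regions_overlap(dst, n, src, n));  /* arguments must be disjoint */
    return strcpy(dst, src);
}
```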
### Summary
A total of 15 annotations were needed to resolve all detected anomalies — one null annotation on a structure field, one out annotation on a parameter (that was detected through complete definition checking), and 13 only annotations. Of the 13 only annotations, only 2 would have been necessary if we had set command-line flags to use implicit annotations. With minimal effort in adding annotations, a few errors in the code were found and the documentation was improved considerably.
### 7 Experience
Part of the motivation for this work was my own troubles dealing with memory management in the implementation of LCLint. LCLint is over 100 000 lines of source code and incorporates code from at least three different authors employing different memory management styles. The original implementation did not attempt to deallocate memory completely, and a garbage collector was used to reclaim storage. Although this was satisfactory as a research vehicle, it had practical disadvantages and limited the number of platforms to which LCLint could be easily ported. Several earlier attempts to fix LCLint’s memory management by myself and others had failed. One frustrated person who attempted to port LCLint wrote...
...its implementation with regard to memory management is horrible. Memory is allocated willy-nilly without any way to track it or recover it. Malloced pointers are passed and assigned in a labyrinth of complex internal data structures. It becomes impossible to find...
We used the annotations and associated checking described in this paper to make substantial improvements to LCLint. Garbage collection was replaced by explicit memory deallocation, producing a more portable system with improved performance. Numerous bugs relating to null pointer dereferences, incomplete definition (usually forgetting to initialize a structure field), and aliasing were detected. Memory annotations also enabled certain efficiency improvements (such as sharing storage or using NULL to represent the empty string) that were considered too risky to attempt without them. Further, the resulting system is clearly documented with checked memory annotations. This allows maintenance changes to be made easily, and their external effects to be detected quickly.
Annotations were added in an iterative process, similar to that described in Section 6. Running LCLint on the code with no annotations produced on the order of a thousand messages. Nearly all of these messages, however, were quickly eliminated by adding an annotation or making a small change to the code (usually adding a missing free to fix a storage leak). Often, adding a single annotation on a type declaration or parameter would eliminate dozens of messages.
Since LCLint was run repeatedly on the code after changing annotations, it was important that the checking was efficient. It takes less than four minutes (on a DEC 3000/500) to check the entire program. During the later phases, checking became more modular as I focused on subtle problems in a single file. By using libraries to store interface information, a representative 5000 line module is checked in under 10 seconds.
It took a few days (split over several weeks) to add all the annotations and fix the detected problems. This compares favorably to more than a week spent previously trying to fix these problems unsuccessfully using run-time methods. For the most part, adding annotations is a fairly methodical process, and I hope future work will make it possible to automate a large portion of it.
In the course of checking, the need for the relaxed checking annotations (relnull, partial, and reldef) became apparent. There were situations where simple annotations were not expressive enough to describe constraints, so checking needed to be relaxed to eliminate spurious messages. This eliminates many messages without much effort, but it also means less checking is done and more errors may be undetected.
Some of the reported messages were considered spurious. There were 75 places where stylized comments were used to suppress messages relating to checks described in this paper. The most common problem was where different branches of an if statement used storage inconsistently. Many of these were places where the code was attempting to recover from a failed assertion or handling an error condition (e.g., a new object denoting an error is returned from a function that does not normally return only storage), so LCLint was correct in reporting an anomaly but it was not considered a bug that needed to be fixed. The remaining spurious messages resulted from places where either LCLint’s alias analysis is not good enough to handle the code correctly, LCLint’s execution flow analysis is not good enough to determine that a particular path through the code will never be taken, or where the code violates constraints imposed by the annotations in a way that I believed to be safe because of external constraints. The dangers of suppressing messages became clear when testing revealed that one of these suppressed messages indicated a real bug.
After checking was complete, I tested the program with explicit deallocation. As expected, not all memory management bugs had been detected statically. There were a few errors involving incorrectly freeing storage resulting from pointer arithmetic, two errors resulting from freeing static storage,\(^8\) two errors resulting from missing annotations in the standard library specification, and one error involving a complex dependency on a global variable. Then, run-time tools were used to look for remaining memory leaks. Several were detected, relating to storage reachable from global and static variables that was not deallocated. Since LCLint does not do interprocedural program flow analysis, it cannot detect failures to free global storage before execution terminates.\(^9\)

\(^7\) LCLint does many checks not described in this paper (and not related to dynamic memory management). Approximately 7000 lines of code are directly related to the checks described here.
### 8 Conclusion
In this paper, we have seen how annotations can be added to make assumptions about memory management explicit at interface points. The annotations improve program documentation, and can be used by a static checker to detect anomalies that typically indicate bugs or incorrect annotations. We were able to use this approach to fix memory allocation problems in a large program where \textit{ad hoc} and run-time checking methods had failed. Annotations and static checking made it possible to fix memory management problems in a systematic, goal-directed manner. The memory annotations were a great help in maintaining and developing code. It is easy to see the effect of a change in memory sharing by changing an annotation and running LCLint.
Static checking cannot detect all errors, and certainly does not eliminate the need for run-time checking and extensive testing. However, a combination of static checking using annotations and run-time checking and testing can help produce reliable code with less effort than traditional methods.
We do not yet have experience using this approach as a new program is developed. I suspect adding annotations while a new program is being developed would not pose a major overhead. Programmers should consider their assumptions about external constraints, and the annotations provide a convenient and precise way of documenting some of these assumptions.
Acknowledgements
I thank John Guttag for help with this research and writing this paper, Thomas Reps from the program committee for constructive comments well beyond the call of duty, Raymee Stata for reviewing a draft of this paper, and Sheryl Risacher for help with the abstract.
References
A Availability
The web home page for LCLint is
LCLint can be downloaded from
or obtained via anonymous ftp from
ftp://larch.lcs.mit.edu/pub/Larch/lclint/
Several UNIX platforms are supported and source code is available.
LCLint can also be run remotely using a form at
The example described in Section 6 is found at
http://larch-www.lcs.mit.edu/8001/larch/lclint/examples/db/
To receive announcements of new releases, send a (human-readable) message to lclint-request@larch.lcs.mit.edu.
B Memory Management Annotations
All annotations may be used in either an LCL specification or in a C source or header file, where they are written inside stylized comments (e.g., \texttt{/*@null*/}). Unless excluded explicitly, annotations can be applied to a type definition, variable declaration, parameter declaration, or function return value. At most one annotation in any category can be used on a given declaration.
Null Pointers
\texttt{null} May have the value NULL.
\texttt{notnull} Not permitted to have the value NULL. This is implied if there is no annotation, but may be necessary for some declarations to override \texttt{null} in a type definition.
\texttt{relnull} Relax null checking. Value assumed to be non-NULL when it is used, but may be assigned to NULL.
**Definition**
out Referenced storage need not be defined. For parameters, this means passed memory must be allocated but not necessarily defined. For return values, it means the result is allocated but not necessarily defined.
in Referenced storage is completely defined. (Normally, this is assumed if no other definition annotation is used. Flags can be used to allow the out annotation to be assumed for unannotated parameters where it would prevent a message.)
partial Referenced storage is partially defined. No errors are reported when incompletely defined storage is transferred as a partial, and no error is reported when storage derived from a partial is used.
reldef Relax definition checking. Value assumed to be defined when it is used, but need not be assigned to defined storage.
**Allocation**
only Refers to unshared storage; confers obligation to release this storage or transfer the obligation.
keep Like only, except that the caller may safely use the reference after the call. (Function parameters only.)
temp Temporary storage. Function may not deallocate or add new external references to storage. (Function parameters only.)
owned Refers to storage that may be shared by dependent references. This reference is responsible for releasing the storage.
dependent Refers to storage that may be shared by an owned reference. This reference may not release the storage.
shared Refers to arbitrarily shared storage; may not be deallocated. (For use with garbage collectors.)
**Parameter Aliasing**
unique May not share storage with any other function parameter or accessible global. (Function parameters only.)
**Returned References**
returned A reference to the parameter may be returned. (Function parameters only.)
**Exposure**
observer Returned storage must not be modified by the caller (this also disallows deallocation). (Return values only.)
exposed Mutable returned storage from internal abstract type or passed mutable storage assigned to field of abstract type. May be modified but not deallocated. (Return values and function parameters only.)
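As a rough illustration of how several of these annotations combine on one interface (all names here are hypothetical, not from the paper's example program):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    /*@null*/ char *name;     /* may be NULL */
} record;

/* out: *r need only be allocated, not defined, on entry. */
void record_init(/*@out*/ record *r) {
    r->name = NULL;
}

/* only: record_free consumes r and releases its storage. */
void record_free(/*@only*/ record *r) {
    free(r->name);            /* free(NULL) is a no-op, so the null field is safe */
    free(r);
}

/* observer: the caller may read the result but not modify or free it. */
/*@observer*/ const char *record_name(record *r) {
    return r->name == NULL ? "" : r->name;
}
```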
Understanding and Detecting Concurrency Attacks
Rui Gu*, Bo Gan*, Jason Zhao*, Yi Ning†, Heming Cui*, and Junfeng Yang*
*Columbia University †The University of Hong Kong
Abstract
Just like bugs in single-threaded programs can lead to vulnerabilities, bugs in multithreaded programs can also lead to concurrency attacks. Unfortunately, there is little quantitative data on how well existing tools can detect these attacks. This paper presents the first quantitative study on concurrency attacks and their implications on tools. Our study on 10 widely used programs reveals 26 concurrency attacks with broad threats (e.g., OS privilege escalation), and we built scripts to successfully exploit 10 attacks. Our study further reveals that, only extremely small portions of inputs and thread interleavings (or schedules) can trigger these attacks, and existing concurrency bug detectors work poorly because they lack help to identify the vulnerable inputs and schedules.
Our key insight is that the reports in existing detectors have implied moderate hints on what inputs and schedules will likely lead to attacks and what will not (e.g., benign bug reports). With this insight, this paper presents a new directed concurrency attack detection approach and its implementation, OWL. It extracts hints from the reports with static analysis, augments existing detectors by pruning out the benign inputs and schedules, and then directs detectors and its own runtime vulnerability verifiers to work on the remaining, likely vulnerable inputs and schedules.
Evaluation shows that OWL reduced 94.3% of the reports caused by benign inputs or schedules and detected 7 known concurrency attacks. OWL also detected 3 previously unknown concurrency attacks: a use-after-free attack in SSDB confirmed as CVE-2016-1000324, an integer overflow and HTML integrity violation in Apache, and three new MySQL data races confirmed with bug IDs 84064, 84122, and 84241. All OWL source code, exploit scripts, and results are available at https://github.com/ruigulala/ConAnalysis.
1 Introduction
Multithreaded programs are already prevalent. However, despite much effort, these programs are still notoriously difficult to get right. Concurrency bugs (i.e., shared memory accesses without proper synchronization among threads) in these programs have led to severe consequences, including memory corruption, wrong outputs, and program crashes [52].
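As a minimal illustration of such a bug, the following sketch (hypothetical code, not from any of the studied programs) has two threads incrementing a shared counter. With the mutex the result is deterministic; deleting the lock/unlock calls turns the increment into a classic lost-update data race of the kind detectors such as TSAN report.

```c
#include <assert.h>
#include <pthread.h>

enum { ITERS = 100000 };
static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;              /* removing the lock calls exposes the race */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long run_two_threads(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```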
Worse, a prior study [85] shows that many real-world concurrency bugs can lead to concurrency attacks: once a concurrency bug is triggered, attackers can leverage the memory corrupted by this bug to construct various violations, including privilege escalations [7, 10], malicious code injections [8], and bypassing security authentications [4, 3, 5]. For instance, a data race [7] in the Linux kernel corrupted the kernel's memory management subsystem and led to a root privilege escalation. This study also elaborates that, because concurrency attacks are caused by concurrent, miscellaneous memory accesses, they can weaken most traditional defense techniques (e.g., TOCTOU [74, 79, 73]). However, this study did not provide exploit scripts to trigger these attacks, nor did it quantitatively evaluate existing tools on these attacks.
This paper presents the first quantitative study on the severity of concurrency attacks. We studied 10 widely used programs, including 3 server programs, 2 browsers, 1 library, and 4 kernel distributions, in CVE and their own bug databases. Our study reveals 26 concurrency attacks that consist of three more types of violations than the prior study [85], including HTML integrity violations (§8.4), buffer overflows (§4.3), and DoS attacks (§8.4). We built scripts to successfully exploit 10 attacks, and we quantitatively studied these attacks with their input patterns, bug patterns in code (if available), and the efficacy of existing detection tools.
Our study makes four new findings. First, concurrency attacks are much more severe than concurrency bugs. Specifically, once concurrency attacks succeed, fixing only concurrency bugs in code won’t help, because attackers may have broken in [6, 7, 10]. This finding suggests that analyzing the known, fixed concurrency bugs is still crucial, because they may have led to concurrency attacks that remain latent.
Second, unlike previous observations in consequence analysis tools [88, 47, 37] that software bugs are often close to their failure/error sites (e.g., bugs and failures are within the same function), our study shows that 12 out of 27 concurrency attacks are widely spread across different functions from their
bugs. Therefore, these consequence analysis tools may be insufficient to detect such concurrency attacks.
Third, although concurrency attacks can cause miscellaneous consequences, these consequences are triggered by several explicit types of vulnerable sites (e.g., setuid()). Moreover, although concurrency bugs and their attack sites spread across different functions, at runtime, the bugs and their attacks often share similar call stack prefixes (§3.2). This finding reveals an opportunity to build a precise, scalable static analysis tool for tracking the bug-to-attack propagation.
Fourth, concurrency bugs and their attacks can often be easily triggered with different, subtle program inputs. Consider only the inputs to trigger concurrency bugs, 8 out of the 10 triggered attacks required less than 20 repetitive executions via subtle inputs. This finding not only contradicts traditional understanding (e.g., [60, 26]) that concurrency bugs are difficult to trigger in native executions and require tremendous retrials, but it also implies that these attacks can easily bypass existing anomaly detection tools [65, 27, 39].
Moreover, triggering concurrency bugs and their attacks often need different inputs. In a Linux root privilege escalation [7], although triggering the data race only required calling the uselib() and mmap() system calls, other system calls were also needed to get the root shell. This finding poses a significant issue to existing concurrency tools, including model checking tools (e.g., Chess [54]), because they are designed only to catch the race on one input and have no clue on its security consequence that needs another input.
To precisely quantify the efficacy of existing concurrency detection tools on concurrency attacks, we selected two popular dynamic data race detectors, TSAN [69] and SKI [32], and ran 6 of the studied programs with them. We found that most of the tools' reports were benign races, and all the concurrency bugs that can lead to the 10 concurrency attacks we found were deeply buried in these tools' reports. Our evaluation found 94.3% of race reports benign (§8.2).
Two main reasons make these race detectors ineffective. First, only an extremely small portion of program inputs can lead to concurrency bugs and their attacks. Because race detectors are clueless on even which input will lead to a harmful bug (vulnerable bugs are just a subset of harmful bugs), these detectors can only blindly flag bug reports driven by testing workloads and search “in the dark”.
Second, even if a bug-triggering input is identified, a program may still run into too many thread interleavings (or schedules), depending on runtime effects (e.g., hardware timings) and synchronization implementations (e.g., adhoc synchronizations [83]). Only a very small portion of schedules will trigger the bug, while the rest may generate excessive, benign reports. For instance, we ran MySQL with TSAN and repeatedly generated the same bug-triggering SQL query [10]. We got 202 race reports, but after our manual inspection, only two reports will lead to attacks (§3); most benign reports were caused by MySQL’s adhoc synchronizations or benign schedules. In sum, the excessive inputs and schedules caused excessive reports and buried real bugs and attacks.
It’s challenging for existing analysis techniques to accurately pinpoint the potentially vulnerable inputs and schedules. One common technique to detect software bugs is static analysis because it can thoroughly analyze program code and identify what branch statements may be controlled by inputs and may lead to bugs. However, because it lacks runtime effects such as which inputs may take which side of a branch statement, static analysis will typically generate many more false reports than the two dynamic race detectors we ran.
Our key insight is that the reports from existing detectors have implied moderate hints on what inputs and schedules will likely lead to attacks and what will not (e.g., benign bugs). We identify two types of hints. The first hint is benign schedules. For instance, the benign reports caused by adhoc synchronizations have already implied how these synchronizations act and how they work out schedules. Therefore, we can use static analysis to extract these synchronizations from the reports, automatically annotate these synchronizations in a program, then we can greatly prune out these benign schedules and their reports. Our analysis automatically identified 22 unique static adhoc synchronizations (§8.2).
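A minimal sketch of such an adhoc synchronization (hypothetical code, not from the studied programs): the consumer busy-waits on a flag instead of using a lock or condition variable. Real-world adhoc versions often use plain variables, which race detectors report as a (benign) race on `data`; here the flag uses release/acquire atomics so the sketch itself is well defined.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static int data;
static atomic_int ready;

static void *producer(void *arg) {
    (void)arg;
    data = 42;                                            /* plain write */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

int consume(void) {
    pthread_t t;
    data = 0;
    atomic_store(&ready, 0);
    pthread_create(&t, NULL, producer, NULL);
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                                 /* busy-wait: the adhoc part */
    int v = data;   /* ordered after the release store, so well defined */
    pthread_join(t, NULL);
    return v;
}
```

Recognizing the flag as a synchronization is exactly the kind of hint that lets an analysis prune the resulting benign reports.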
The second hint is the bug-to-attack propagations, which imply vulnerable inputs. Our study found that most vulnerable races are already included in the race detectors’ reports (§3.1), and concurrency attacks sites are often explicit in program code (§3.2). Therefore, we can perform static analysis on only the data and control flow propagations between the bug reports and the potential attack sites, then we can collect relevant call stacks and branch statements as the potentially vulnerable input hints.
We did not extend this vulnerable input hint to automatically generate concrete inputs (which could be done via symbolic execution [19, 63]), because we found that the call stacks and branches in the hints are already expressive enough for us to manually infer vulnerable inputs (§4.3). This input hint helped us identify subtle inputs to trigger both known and unknown attacks (§8.4).
In sum, by directing concurrency bug detectors to focus on the potentially vulnerable inputs and schedules, we can greatly augment existing detectors to approach and detect concurrency attacks. We implemented this directed concurrency attack detection approach in OWL. It first runs concurrency bug detectors on a program to generate reports, and it extracts benign schedule hints (e.g., adhoc synchronizations) and vulnerable input hints from the reports with static analysis. It then automatically annotates the benign schedule hints in a program’s code, greatly reducing benign schedules and thus their reports. Finally, it directs detectors and its own runtime vulnerability verifiers (§4.2) to work on the remaining, likely vulnerable inputs and schedules.
We evaluated OWL on 6 diverse, widely used programs, including Apache, Chrome, Libsafe, the Linux kernel, MySQL, and SSDB (§8).
This paper makes two major contributions:
1. A first quantitative study on concurrency attacks and their implications on detection tools. This study will motivate and guide researchers to develop new tools to detect and defend against concurrency attacks (§3).
2. A new directed concurrency attack detection approach and its implementation, Owl. Owl can be applied in existing security tools to bridge the gap between concurrency bugs and their security consequences (§7.2).
The rest of this paper is structured as follows. §2 introduces concurrency attack background, and §3 presents our quantitative study. §4 gives an overview of our Owl framework. §5 describes Owl’s schedule reduction and §6 its input reduction techniques. §7 discusses Owl’s limitations and broad applications. §8 presents our evaluation results for Owl, §9 discusses related work, and §10 concludes.
2 Background
A prior study [85] examined the bug databases of 46 real-world concurrency bugs and made three major findings about concurrency attacks. First, concurrency attacks are severe threats: 35 of the bugs can corrupt critical memory and cause three types of violations, including privilege escalations [7, 10], malicious code injections [8], and bypassed security authentications [4, 3, 5].
Second, concurrency bugs and attacks can often be easily triggered via subtle program inputs. For instance, attackers can use inputs to control the physical timings of disk I/O and program loops. They can manipulate concurrency bugs with a small number of re-executions. Third, compared to traditional TOCTOU attacks, which stem from corrupted file accesses, handling concurrency attacks is much more difficult because they stem from corrupted, miscellaneous memory accesses.
These three findings reveal that concurrency attacks can largely weaken or even bypass most existing security defense tools, because these tools are mainly designed for sequential attacks. For instance, consider taint tracking tools: concurrency attacks can corrupt the metadata fields in these tools and completely bypass taint tracking. Anomaly detection tools, which rely on inferring adversarial program behaviors (e.g., excessive re-executions), become ineffective, because concurrency attacks can easily manifest via subtle inputs.
This prior study raises an open research question: what would be an effective tool for detecting concurrency attacks? Specifically, can existing concurrency bug detection tools effectively detect these bugs and their attacks? The answer is probably "NO", because the literature has overlooked these attacks.
3 Quantitative Concurrency Attack Study
We studied concurrency attacks in 10 widely used programs, including 3 servers, 2 browsers, 1 library, and 4 kernel distributions. We started from the shared-memory concurrency bugs in the prior study [85] and searched for "concurrency bug vulnerability" in CVE and these programs' bug databases. We manually inspected the bug reports, removed those that were false reports or lacked a clear description, and conservatively kept the vulnerable ones caused by multithreading.
Unlike the prior study [85], which counted the number of security consequences in bug reports as the number of concurrency attacks, we counted only each bug's first security consequence. In total we collected 26 concurrency attacks with three more types of violations than the prior study [85], including HTML integrity violations (§8.4), buffer overflows (§4.3), and DoS attacks (§8.4). We built scripts that successfully exploit 10 attacks in the 6 programs for which we had source code.
To quantitatively analyze why concurrency attacks are overlooked, we considered data race detectors because they have effectively found concurrency bugs. We selected two popular tools: TSAN [69] for applications and SKI [32] for OS kernels. We ran the two tools on 6 programs that support these tools. We used the programs’ common performance benchmarks as workloads. Table 1 shows a study summary.
<table>
<thead>
<tr>
<th>Name</th>
<th>LoC</th>
<th># Concurrency attacks</th>
<th># Race reports</th>
</tr>
</thead>
<tbody>
<tr>
<td>Apache</td>
<td>290K</td>
<td>4</td>
<td>715</td>
</tr>
<tr>
<td>MySQL</td>
<td>1.5M</td>
<td>2</td>
<td>1123</td>
</tr>
<tr>
<td>SSDB</td>
<td>67K</td>
<td>1</td>
<td>12</td>
</tr>
<tr>
<td>Chrome</td>
<td>3.4M</td>
<td>3</td>
<td>1715</td>
</tr>
<tr>
<td>IE</td>
<td>N/A</td>
<td>1</td>
<td>N/A</td>
</tr>
<tr>
<td>Libsafe</td>
<td>3.4K</td>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>Linux</td>
<td>2.8M</td>
<td>8</td>
<td>24641</td>
</tr>
<tr>
<td>Darwin</td>
<td>N/A</td>
<td>3</td>
<td>N/A</td>
</tr>
<tr>
<td>FreeBSD</td>
<td>680K</td>
<td>2</td>
<td>N/A</td>
</tr>
<tr>
<td>Windows</td>
<td>680K</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td>Total</td>
<td>8.0M</td>
<td>26</td>
<td>28209</td>
</tr>
</tbody>
</table>
Table 1: Concurrency attack study results. This table contains both known and previously unknown concurrency attacks we detected. We were able to run race detectors on 6 of the 10 programs. We built exploit scripts for 10 concurrency attacks in these 6 programs.
3.1 Challenging Findings
I: Concurrency attacks are much more severe than concurrency bugs. Every studied program has concurrency attacks. Figure 1 shows a concurrency attack that bypassed stack overflow checks in the Libsafe [48] library and injected malicious code. Figure 2 shows a concurrency attack in the Linux uselib() system call. Attackers have leveraged this bug to trigger a NULL pointer dereference in the kernel and execute arbitrary code from user space.
One key difference between concurrency attacks and concurrency bugs is that fixing the buggy code is not sufficient to fix the vulnerabilities. For instance, once attackers have gained OS root privileges [6, 7], they may stay in the system forever.
Therefore, it’s still crucial to study whether existing known concurrency bugs may lead to concurrency attacks.
Figure 1: A concurrency attack in the Libsafe security library. Dotted arrows mark the bug-triggering thread interleaving. When Thread 2 detects a stack overflow attack, it sets the dying variable to 1 and kills the current process shortly after. However, access to dying is not protected by a mutex, so Thread 1 can read this variable, bypass the security check in stack_check() (called at line 164), and run into a stack overflow in strcpy() (at line 165).
Figure 2: A concurrency attack in the Linux uselib() and msync() system calls. Dotted arrows mean the bug-triggering thread interleaving. A data race on the f_op struct causes the Linux kernel to trigger a NULL pointer dereference and enables arbitrary code execution.
II: Concurrency bugs and their attacks are widely spread in program code. Among the 10 attacks for which we had source code and constructed exploit scripts, 7 have their bugs and vulnerability sites in different functions. Moreover, bugs often affect vulnerability sites not only through data flows but also through control flows (e.g., the Libsafe attack in Figure 1).
This finding suggests that a concurrency attack detection tool should incorporate both inter-procedural and control-flow analyses. Unfortunately, to scale to large programs, existing bug consequence analysis tools (e.g., [88, 84, 49]) lack either inter-procedural or control-flow analysis.
III: Concurrency bugs and their attacks are often triggered by separate, subtle program inputs. Consider the inputs that trigger concurrency bugs: contrary to the previous understanding [60, 26] that triggering concurrency bugs requires intensive repeated executions, 8 out of the 10 reproduced concurrency attacks in our study can be triggered with fewer than 20 repeated executions on our evaluation machines, given carefully chosen program inputs. For instance, in a MySQL privilege escalation [10], we used the "flush privileges;" query to trigger a data race and corrupted another MySQL user's privilege table with only 18 repeated executions.
In addition to input values, carefully crafted input timings can also expand the vulnerable window [85], which increases the chance of running into the bug-triggering schedules. For instance, consider Figure 2: since the if statement and the file->f_op->fsync() statement in msync_interval() have an IO operation (not shown) in between, attackers can craft inputs with subtle timings for this IO operation and thus enlarge the time window between these two statements. Attackers can then easily trigger the buggy schedule in Figure 2.
In addition to the inputs for triggering concurrency bugs, triggering the attacks of these bugs often requires other subtle program inputs. A main reason is that the bugs and their attacks are widely spread in program code, so they may easily be affected by different inputs. In a Linux uselib() data race [7], we needed to carefully construct kernel swap IO operations to trigger the race, and we needed to call extra system calls to get a root shell out of this race. By constructing subtle inputs for both the bug and its attack, we needed only tens of repeated executions to get this root shell on our evaluation machines.
This uselib() attack reveals two issues. First, a small number of repeated executions indicates that attackers can easily bypass anomaly detection tools [65, 27, 39] with subtle inputs. Second, existing data race detectors are ineffective at revealing this attack because they will stop after they run a bug-triggering input and flag this race report. Such a one-shot detection will overlook a concurrency attack as it often requires extra inputs to trigger the attack. Therefore, extra analysis is required to identify the bug-to-attack propagation.
IV: Most concurrency bugs that triggered concurrency attacks can be detected by race detection tools. There are several types of concurrency bugs, including data race, atomicity violation, and order violation [52]. Although some types of concurrency bugs are difficult to detect (e.g., order violation), we found that all concurrency bugs we studied were data races and these bugs can readily be detected by TSAN or SKI. This finding suggests that a race detector is a necessary component for detecting concurrency attacks.
V: Concurrency attacks are overlooked mainly due to the excessive reports from race detectors. We identified two major reasons for this finding. First, existing race detectors generate too many bug reports which deeply bury the vulnerable ones. For instance, we ran MySQL with TSAN and repeatedly generated the same bug-triggering SQL query [10]. We got 202 race reports, but after our manual inspection, only two reports will lead to attacks. Table 1 shows more programs with even more reports. These excessive reports make finding concurrency attacks from the reports just like “finding needles in a haystack.”
Second, even if a developer luckily opens a true bug report that can actually lead to an attack, she still has no clue what attacks the bug may lead to, because the report only shows the bug itself (e.g., the corrupted variable), but not its security consequences. Therefore, it's crucial to have an analysis tool that can accurately identify the bug-to-attack propagation for bug reports.
3.2 Optimistic Findings
To assist the construction of a practical concurrency attack detection tool, we identified two common patterns for concurrency attacks. First, although the consequences of concurrency attacks are miscellaneous, these consequences are triggered by five explicit types of vulnerable sites: memory operations (e.g., strcpy()), NULL pointer dereferences, privilege operations (e.g., setuid()), file operations (e.g., access()), and process-forking operations (e.g., eval() in shell scripts). Our study found that the consequences of these vulnerable sites are independent of each other, so more types can easily be added.
Second, concurrency bugs and their attacks often share similar call stack prefixes. Of the 10 concurrency attacks with source code, 7 have the vulnerability site in the callees (i.e., the call stack of the bug is a prefix of the call stack of the vulnerability site). For the rest, the vulnerability site is just one or two levels up from the bug's call stack. These two patterns reveal an opportunity to build a precise, scalable static analysis tool for tracking the bug-to-attack propagation.
4 OWL Overview
This section first presents a major challenge on realizing the directed concurrency attack detection approach (§4.1), gives an overview of OWL’s architecture with main components and workflow (§4.2), and then gives an example to show how it works (§4.3).
4.1 Challenge: Accuracy vs. Scalability
A crucial component of OWL is a good bug-to-attack analysis, but it is technically challenging to make this analysis both accurate (report few false reports and miss few real bugs) and scalable (work with large programs). As mentioned in §1, static analyses scale easily because they can see what a compiler can see, but for lack of runtime effects (e.g., which functions are executed and which branches are taken), they suffer from excessive false reports (e.g., 84% of the reports in RELAY [75] were false reports).
To better identify runtime effects, symbolic execution [19] systematically explores branches and leverages the program paths to identify buggy inputs. However, this technique is notoriously difficult to scale to large programs (e.g., Apache and Linux kernel) because these programs typically have too many functions and program paths [25].
Alternatively, dynamic analyses can accurately capture runtime effects (e.g., [60, 13]), but they can analyze only the executed program path with the exact input and schedule. If a concurrency bug’s attack requires another input to trigger in another program path, dynamic analyses may miss the attack.
Fortunately, this challenge can be mitigated via the two optimistic patterns (§3.2) in our study: concurrency attack sites are explicit, and bugs and their attacks often share similar call stack prefixes. These patterns imply that a concurrency bug only affects a small portion of functions and program paths (and thus a small portion of inputs). Thus, we can combine the attack sites (static effects) and the call stacks in reports (dynamic effects); our static analysis can then skip analyzing the many functions and program paths that do not comply with these effects, making OWL reasonably accurate and scalable.
4.2 OWL Architecture
Figure 3 presents OWL's architecture with five key components: the concurrency error detector, the static adhoc synchronization detector, the dynamic race verifier, the static vulnerability detector, and the dynamic vulnerability verifier.
OWL works as follows. (1) A concurrency bug detector first detects bugs for the given program inputs. (2) Based on the detected results, OWL's adhoc synchronization detector analyzes the reports and the LLVM bitcode to find adhoc synchronizations. After obtaining the adhoc synchronization locations, OWL automatically annotates the program source code with TSAN markups and re-runs the detector. (3) OWL then passes the re-generated bug reports to its race verifier to check whether the bugs actually occur. (4) OWL's static vulnerability analyzer conducts a forward data and control flow analysis to identify potential bug-to-attack propagations as vulnerable input hints. (5) Eventually, OWL's vulnerability verifier re-runs the program and checks whether an attack can actually be realized.
4.3 Example
Figure 1 shows a concurrency attack in Libsafe, a user-level security library which dynamically intercepts all the Libc memory functions to detect buffer overflows. When Libsafe detects a buffer overflow, it sets a global variable dying to 1 to indicate that current process will be killed shortly, and then it kills the program. If this variable is set, Libsafe will stop performing security checks on memory functions. Unfortunately, there is a data race on dying because access to this variable is not protected by mutex. Therefore, between the moment dying is set and the moment the entire process is killed, a thread calling memory functions in this process may leverage the race on dying to bypass buffer overflow checks.
We have constructed a C program with Libsafe to trigger this race, bypassed a stack overflow check for a vulnerability site, strcpy(), and gotten a shell by injecting our own malicious code. Note that in this attack, the race and the vulnerability site are in different functions, and the race affects the vulnerability site through an if control-dependency at line 164. Existing consequence analysis tools [88, 84, 49] are inadequate to detect this attack because they lack either interprocedural or control-flow analysis.
OWL started from the detection of the data race on the dying variable; the race report is shown in Figure 4.
Our vulnerability analyzer starts from the “read” side call stack of the race shown in Figure 4 and conducts an inter-procedural static analysis to detect which vulnerability site may be affected by this race by tracking data and control flows.
As shown in Figure 5, OWL reported one memory operation, at line 165, as a vulnerable operation. Our vulnerability report includes the reasoning (a dangerous function becomes control-dependent on the corrupted branch statement at line 164) and the branches that must be taken to reach the vulnerable operation. The report indicates that strcpy() will be called with the original parameters, without the actual security checks in stack_check(). Finally, our vulnerability verifier confirms the vulnerability by re-running the program and satisfying the branches to eventually trigger it. To satisfy the branches, the verifier requires user intervention to decide the execution order of the racing instructions and to provide the program input.
5 Reducing Benign Schedules
This section presents OWL's benign schedule reduction component, which automatically annotates adhoc synchronizations (§5.1) and prunes benign schedules (§5.2). In total, this component pruned 94.3% of the reports (see §8.2).
5.1 Annotating Adhoc Synchronization
Developers use semaphore-like adhoc synchronizations, where one thread busy-waits on a shared variable until another thread sets this variable to "true". This type of adhoc synchronization cannot be recognized by TSAN or SKI and causes many false positives.
OWL uses static analysis to detect these synchronizations in three steps. First, taking the race reports from the detectors, it checks whether the "read" instruction is in a loop. Second, it conducts an intra-procedural forward data and control dependency analysis to find the propagation of the corrupted variable; if OWL encounters a branch instruction in the propagation chain, it checks whether this branch instruction can break out of the loop. Last, it checks whether the "write" instruction of the report assigns a constant to the variable. If all checks pass, OWL tags the report as an "adhoc sync".
Compared to the prior static adhoc sync identification method SyncFinder [83], which finds the matching "read" and "write" instructions by statically searching program code, our approach leverages the actual runtime information from the race reports, so it is much simpler and more precise.
5.2 Verifying Real Data Races
OWL’s dynamic race verifier checks whether the reduced race reports are indeed real races. It also generates security hints for the following analysis. The verifier is lightweight because it is built on top of the LLDB debugger. We find that a good way to trigger a data race is to catch it “in the racing moment”. The verifier sets thread specific breakpoints indicated by TSAN race reports. “Thread specific” means when the breakpoint is triggered, we only halt that specific thread instead of the whole program. The rest of the threads are still able to run. In this way, we can actually catch the race when both of the racing instructions are reached by different threads and are accessing the same address.
For each run, OWL's dynamic filter verifies one race. Once a data race is verified, the verifier goes one step further: it prints the following dynamic information as security hints: the racing instructions from the source code, the values they are about to read and write, and the type of the variable these instructions are about to read or write. These hints show whether a NULL pointer dereference can be triggered or uninitialized data can be read because of the race.
It is possible that due to the suspension of threads, the program goes into a livelock state before verifying any data races. We resolve this livelock state by temporarily releasing one of the currently triggered breakpoints.
Previous works [66, 58, 36] adopt the same core idea of thread-specific breakpoints and data race verification. OWL's dynamic race verifier provides a lightweight, general, easy-to-use way (integrated with an existing debugger) of verifying potentially harmful data races and their consequences. Compared with RaceFuzzer [66], OWL's verifier achieves the goal without requiring heavyweight Java instrumentation. Compared with ConcurrentBreakpoint [58] and ConcurrentPredicate [36], it requires no code annotations or imported libraries.
Overall, OWL's dynamic filter makes developers less dependent on the particular front-end race detector: no matter how many false positives the front-end detector generates, this verifier ensures that the end result is accurate.
There are two cases that could cause OWL’s race verifier to miss real races. First, if the race detector doesn’t detect the race upfront, the verifier won’t report the race either. Second, depending on runtime effects (e.g., schedules), some races can’t be reliably reproduced with 100% success rate [36].
6 Computing Vulnerable Input Hints
This section presents the algorithm of OWL’s static vulnerability analysis (§6.1) and dynamic verifier (§6.2). Since the input of the static analysis is the reports from concurrency bug detectors, this section then describes how OWL integrates this analysis with two existing race detectors (§6.3).
6.1 Analysis Algorithm
Algorithm 1 shows OWL’s vulnerability analyzer’s algorithm. It takes a program’s LLVM bitcode in SSA form, an LLVM load instruction that reads from the corrupted memory of a bug report, and the call stack of this instruction. The algorithm then does inter-procedural static analysis to see whether corrupted memory may propagate to any vulnerable site (§3.2) through data or control flows. If so, the algorithm outputs the propagation chain in LLVM IR format as the vulnerable input hint for developers.
The algorithm works as follows. It first adds the corrupted read instruction to a global corrupted instruction set. It then traverses all following instructions in the current function; if any instruction is affected by this corrupted set ("affected" meaning that some operand of the instruction is in the set), it adds that instruction to the set as well. The algorithm looks into all successors of branch instructions, as well as callees, to propagate the set. It reports a potential concurrency attack when a vulnerable site (§3.2) is affected by the set.
To achieve reasonable accuracy and scalability, we made three design decisions. First, based on our finding that bugs and attacks often share similar call stack prefixes, the algorithm traverses the bug's call stack (§4.1). If the algorithm does not find a vulnerability on the current call stack and its callees, it pops the latest caller in the current call stack and checks the propagation through the return value of this call, until the call stack becomes empty and the traversal of the current function finishes. This targeted traversal makes the algorithm scale to large programs with greatly reduced false reports (Table 3).
Second, the algorithm tracks propagation through LLVM virtual registers [50]. Similar to relevant systems [88, 43], our design did not incorporate pointer analysis [81, 46] because one main issue of such analysis is that it typically reports too many false positives on shared memory access in large programs (§7.1).
Our analyzer compensates for the lack of pointer analysis by: (1) tracking read instructions in the detectors at runtime (§6.3), and (2) leveraging the call stacks to precisely resolve the actually invoked function pointers (another main issue in pointer analysis).
Third, some detectors do not have read instructions in the reports (e.g., write-write races), and we modified the detectors to add the first load instruction for these reports during the detection runs (§6.3).
All five types of vulnerability sites we found (§3.2) have been incorporated in this algorithm. The generated vulnerability-reaching branches from this algorithm serve as vulnerable input hints and helped us identify subtle inputs to detect 7 known attacks and 3 previously unknown ones (§8.4).

```
Algorithm 1: Vulnerable input hint analysis
Input:  program prog, start instruction si, si call stack cs
Global: corrupted instruction set crptIns, vulnerability set vuls

DetectAttack(prog, si, cs):
    crptIns.add(si)
    while cs is not empty do
        function ← cs.pop()
        ctrlDep ← false
        DoDetect(prog, si, function, ctrlDep)

DoDetect(prog, si, function, ctrlDep):
    set localCrptBrs ← empty
    foreach succeeding instruction i do
        bool ctrlDepFlag ← false
        foreach branch instruction cbr in localCrptBrs do
            if i is control dependent on cbr then
                ctrlDepFlag ← true
        if ctrlDep or ctrlDepFlag then
            if i.type() ∈ vuls then
                ReportExploit(i, CTRL_DEP)
        if i.isCall() then
            foreach actual argument arg in i do
                if arg ∈ crptIns then
                    crptIns.add(i)
                    if i.type() ∈ vuls then
                        ReportExploit(i, DATA_DEP)
                    if f.isInternal() then
                        cs.push(f)
                        DoDetect(prog, f.first(), f, ctrlDepFlag)
                        cs.pop()
        else
            foreach operand op in i do
                if op ∈ crptIns then
                    if i.type() ∈ vuls then
                        ReportExploit(i, DATA_DEP)
                    crptIns.add(i)
                    if i.isBranch() then
                        localCrptBrs.add(i)

ReportExploit(i, type):
    if i is never reported on type then
        ReportToDeveloper()
```
6.2 Dynamic Vulnerability Verifier
Owl’s dynamic vulnerability verifier is built on LLDB so it is lightweight. It takes the input from its static vulnerability analysis, including the vulnerability site and the associated branches. It re-runs the program again and prints out whether one could reach the vulnerability site and trigger the attack. If the site cannot be reached, it prints out the diverged branches as further input hints.
6.3 Integration with Concurrency Bug Detectors
Owl has integrated two popular race detectors: Ski for Linux kernels and TSAN for application programs. To integrate Owl’s algorithm (§6.1) with concurrency bug detectors, two elements are necessary from the detectors: the load instruction that reads the bug’s corrupted memory and the instruction’s call stack.
Ski's default detection policy is inadequate for our tool because it only reports the pair of instructions at the moment the race happens. This policy causes two issues for our integration. First, both instructions in the pair could be write instructions, which does not match the algorithm's input format. Second, it is essential to provide the algorithm with as detailed a call stack as possible for the read of the corrupted racy variable.
We modified Ski's race detection policy as follows. After a race happens, the physical memory address of the variable is added to a Ski watch list, marking the variable as corrupted. The call stacks of all subsequent reads of the watched variable are printed. If a write to a watched variable occurs, the write sanitizes the corrupted value and removes the variable from the watch list. In this way, we can catch the call stacks of all potentially problematic uses of racy variables. The final race report shows all the stacks of the reading threads.
Another issue for Owl when working with kernels is that Ski lacks call stack information. We configured the Linux kernel with the CONFIG_FRAME_POINTER option enabled. Given a dump of the kernel stack and the values of the program counter and frame pointer, we were able to iterate over the stack frames and construct call stacks.
7 Discussions
This section discusses Owl’s limitations (§7.1) and its broad applications (§7.2).
7.1 Limitations
Owl's main design goal is to achieve reasonable accuracy and scalability, so it trades off soundness (i.e., missing no bugs), although Owl did not miss any of the evaluated attacks (§8.3). A typical way to ensure soundness is to plug in a sound alias analysis tool (e.g., [81, 46]) to identify all LLVM load and store instructions that may access the same memory. However, typical alias analyses are known to be inaccurate (e.g., they report too many false positives).
Owl currently handles five types of regular vulnerability operations (§3.2), and it requires these operations to exist in the LLVM bitcode. These five types of operations are sufficient to cover all 10 concurrency attacks we have reproduced, and more types can be added. If developers are concerned about some library code that may contain vulnerabilities, they should compile this code into the bitcode for Owl.
Owl’s consequence analysis tool integrates the call stack of a concurrency bug to direct static analysis toward vulnerable program paths, but Owl’s vulnerable input hints (§6.1) may contain false reports (e.g., the outcomes in Owl’s collected bug-to-attack propagation branches may have conflicts). Developers can inspect the propagation chains and refine their program inputs to validate the outcomes. In our evaluation, we found these input hints expressive as they helped us identify subtle inputs for real attacks (§8.3).
7.2 Owl has Broad Applications
We envision two immediate applications for Owl’s techniques. First, Owl can augment existing defense tools on concurrency attacks. For instance, we can leverage anomaly detection [65, 27, 39] and intrusion detection [40, 76, 77] tools to audit only the vulnerable program paths identified by Owl. Then, these runtime detection tools can greatly reduce the amount of program paths that need to be audited and improve performance. Owl can also integrate with other bug detection tools (e.g., process races [45] and atomicity bugs [59]) to detect concurrency attacks caused by such bugs.
Second, Owl’s consequence analysis tool has the potential to detect various consequences of software bugs. Software bugs have caused many extreme disasters [11, 12] in the last few decades, including large financial losses and loss of life. By adding new vulnerability and failure sites for such consequences, Owl can be applied to flagging the bugs that can cause severe consequences among enormous numbers of bug reports.
8 Evaluation
We evaluated Owl on 6 widely used C/C++ programs, including three server applications (Apache [14] web server, MySQL [9] database server, and SSDB [72] key-value store server), one library (Libsafe [48]), the Linux kernel, and one web browser (Chrome). We used the programs’ common performance benchmarks as workloads. Our evaluation was done on a 3.60 GHz 8-core Intel Xeon machine with 32 GB memory and 1TB SSD running Linux 3.19.0-49.
We focused our evaluation on four key questions:
1. Is Owl easy to use (§8.1)?
2. How many false reports from concurrency error detection tools can Owl reduce (§8.2)?
3. Can Owl detect known concurrency attacks in the real-world (§8.3)?
4. Can OWL detect previously unknown concurrency attacks in the real-world (§8.4)?
<table>
<thead>
<tr>
<th>Name</th>
<th>LoC</th>
<th># atks</th>
<th># atks found</th>
<th># OWL’s reports</th>
</tr>
</thead>
<tbody>
<tr>
<td>Apache</td>
<td>290K</td>
<td>3</td>
<td>3</td>
<td>10</td>
</tr>
<tr>
<td>Chrome</td>
<td>3.4M</td>
<td>1</td>
<td>1</td>
<td>115</td>
</tr>
<tr>
<td>Libsafe</td>
<td>3.4K</td>
<td>1</td>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>Linux</td>
<td>2.8M</td>
<td>2</td>
<td>2</td>
<td>34</td>
</tr>
<tr>
<td>MySQL</td>
<td>1.5M</td>
<td>2</td>
<td>2</td>
<td>16</td>
</tr>
<tr>
<td>SSDB</td>
<td>67K</td>
<td>1</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>Total</td>
<td>5.36M</td>
<td>11</td>
<td>10</td>
<td>180</td>
</tr>
</tbody>
</table>
Table 2: OWL concurrency attack detection results. We selected 10 concurrency attacks because we were able to trigger their bugs on our machines. OWL detected all 10 evaluated concurrency attacks.
8.1 Ease of Use
Table 2 shows a summary of our concurrency attack detection results. Overall, OWL was able to automatically run the evaluated applications and generate verified concurrency attacks with moderate developer intervention (inspecting vulnerable inputs from input hints).
It is critical to report the connection between a concurrency bug and its vulnerability. OWL provides expressive and helpful reports that (1) let developers know why certain places are vulnerable due to the concurrency bug and (2) help them find the right inputs to trigger concurrency attacks easily. Figure 5 shows a snippet of OWL’s report on the Libsafe attack in Figure 1.
8.2 Reducing False Reports from Detectors
Table 3 shows OWL’s race report reduction results. The second column indicates the number of raw reports generated by our race detector. The third column shows how many adhoc synchronizations we found. The fourth column shows how many reports our dynamic race verifier removed. The fifth column shows the number of remaining reports.
Overall, OWL is able to prune 94% of the false positives in the Linux kernel and 97.7% for the other applications. This significant reduction will save developers much diagnosis time. The performance of OWL’s static analysis tool is critical because OWL aims to be scalable to large programs. The last column of Table 3 shows the average time cost of OWL’s static analysis tool per bug report. Overall, except for the Linux kernel and Chrome, OWL finished analyzing each program’s bug reports within a few hours.
8.3 Detecting Known Attacks
We applied OWL to the 7 concurrency attacks listed in Table 4. OWL detected all the vulnerabilities. Currently OWL incorporates two race detectors. Other types of concurrency bugs can also lead to concurrency attacks, including atomicity violations [52], which can be detected by other detectors (e.g., CTrigger [59]). By integrating these detectors (future work), OWL’s analysis and verifier components can detect more concurrency attacks.
Because all of OWL’s dynamic verifiers are implemented on top of LLDB, which only supports user-space applications, we have not run these verifiers on the Linux kernel. Nevertheless, OWL’s static vulnerability analyzer was applied to the Linux kernel and detected the evaluated concurrency attacks. For the Linux kernel, our dynamic verifiers could be implemented in QEMU [61]; we leave this implementation for future work.
Aside from the discussed known and unknown concurrency attacks, OWL generated 180 reports in total. Due to the lack of domain knowledge and semantic understanding of the program code, we have not yet verified all of these potential vulnerability reports. These reports could be either benign races or new concurrency attacks. Nevertheless, by greatly reducing the number of reports from 31K to 180 (Table 3 and Table 2), OWL has greatly reduced developers’ burden.
8.4 Detecting Previously Unknown Attacks
OWL detected 3 previously unknown concurrency attacks caused by one new data race and two known data races. Analyzing whether known data races can lead to unknown concurrency attacks is still crucial (§3.1), because once attackers break in, they may remain latent for a long time.
OWL detected a new data race and a previously unknown use-after-free concurrency attack in SSDB, confirmed as CVE-2016-1000324. Figure 6 shows the details of this vulnerability. During server shutdown, SSDB uses adhoc synchronization to synchronize between threads. However, it is possible for line 359 to execute before line 200. This race causes log_clean_thread_fun to fail to break out of the while loop. Moreover, log_clean_thread_fun could execute del_range, which could use db and cause a use-after-free. Worse, line 347 is a function pointer dereference, which could cause log corruption or a program crash if the memory area was reused by other threads.
Owl’s static analyzer (§6.1) identified the vulnerability site at line 347 because it is a pointer dereference. This site is control dependent on the corrupted branch on line 359. Owl’s dynamic vulnerability verifier (§6.2) further verified that the other thread will free the memory area and set the pointer to NULL before the dereference within current thread. We reported this race and attack to SSDB developers.
The second previously unknown concurrency attack stems from a known data race in Apache. This attack caused Apache’s own request logs to be written into other users’ HTML files stored in Apache, causing an HTML integrity violation and an information leak. Figure 7 shows the code of this vulnerability from the Apache-25520 bug [1]. buf->outcnt is shared among threads and serves as an index into a buffer array. Due to a lack of proper synchronization when modifying this variable on line 1362, a data race occurred and caused the server to write wrong contents to buf->outbuf.
Worse, the wrong contents could also overflow buf->outbuf and cause a buffer overflow. Even worse, Apache stores the file descriptor of its HTTP request log next to buf->outbuf. We constructed a one-byte overflow of buf->outbuf, corrupted this file descriptor, and caused Apache’s own HTTP request logs to be written to an HTML file whose descriptor matched the corrupted value.
Although this data race has been well studied by researchers [52], people thought the worst consequence of this bug would just be corrupting Apache’s own request log. We were the first to detect this HTML integrity violation attack with Owl and the first to construct the actual exploit scripts.
Owl’s vulnerability analysis (§6.1) pinpointed the vulnerable site at line 359 and inferred that this line is data dependent on the corrupted variable on line 1358. Owl’s dynamic race verifier (§5.2) triggered the race and showed how many bytes in buf->outbuf were overflowed.
of line 1192. OWL’s vulnerability verifier verified that the branch was indeed corrupted and line 1195 was reachable.
These three previously unknown concurrency attacks were overlooked by prior reliability and security tools mainly for three reasons. First, compared to OWL’s reduced vulnerability reports, the data races of these three attacks were buried within at least 87X more false reports in Apache and 6X more in SSDB produced by the prior TSAN race detector. Second, without OWL’s static bug-to-attack propagation analysis (§6.1), even though the races could be detected by existing race detectors, the security consequences of these bugs were unknown to the detectors. Third, without OWL’s dynamic race verifier (§5.2) and vulnerability verifier (§6.2), whether these races and their attacks could be realized was also unknown.
9 Related Work
TOCTOU attacks. Time-Of-Check-to-Time-Of-Use attacks [18, 73, 79, 74] target mainly the file interface, and leverage atomicity violations between the time-of-check (access()) and the time-of-use (open()) of a file to gain illegal file access.
A prior concurrency attack study [85] elaborates that concurrency attacks are much broader and more difficult to track than TOCTOU attacks for two main reasons. First, TOCTOU mainly causes illegal file access, while concurrency attacks can cause a much broader range of security vulnerabilities, ranging from gaining root privileges [7] and injecting malicious code [6] to corrupting critical memory [1]. Second, concurrency attacks stem from miscellaneous memory accesses, while TOCTOU attacks stem from file accesses; thus handling concurrency attacks is much more difficult.
Sequential security techniques. Defense techniques for sequential programs are well studied, including taint tracking [28, 62, 55, 56], anomaly detection [27, 65], address space randomization [70], and static analysis [38, 30, 76, 17, 19].
However, in the presence of multithreading, most existing sequential defense tools can be largely weakened or even completely bypassed [85]. For instance, concurrency bugs in global memory may corrupt metadata tags in metadata tracking techniques, and anomaly detection lacks a concurrency model to reason about concurrency bugs and attacks.
Concurrency reliability tools. Various prior systems work on concurrency bug detection [87, 64, 29, 51, 53, 89, 88, 44, 80], diagnosis [67, 59, 57, 16, 43], and correction [42, 78, 82, 41]. They focused on concurrency bugs themselves, while OWL focuses on the security consequences of concurrency bugs. Therefore, these systems are complementary to OWL.
Conseq [88] detects harmful concurrency bugs by analyzing their failure consequences. Its key observation is that concurrency bugs and the bugs’ failure sites are usually within a short control- and data-flow propagation distance (e.g., within the same function). The concurrency attacks targeted by OWL usually exploit corrupted memory that resides in different functions, so Conseq is inadequate for concurrency attacks. Conseq’s proactive harmful-schedule exploration technique will be useful for OWL to trigger more vulnerable schedules.
Static vulnerability detection tools. There are already a variety of static vulnerability detection approaches [49, 84, 31, 15, 71, 90]. These approaches fall into two categories based on whether they target general or specific programs.
The first category [49, 84] targets general programs, and its approaches have been shown to find severe vulnerabilities in large code bases. However, these purely static analyses may not be adequate to cope with concurrency attacks. Benjamin et al. [49] leverage pointer analysis to detect data flows from unchecked inputs to sensitive sites. This approach ignores control flow and thus is not suitable for tracking concurrency attacks like the Libsafe one in §4.3. Yamaguchi et al. [84] do not incorporate inter-procedural analysis and thus are not suitable for tracking concurrency attacks either. Moreover, these general approaches are not designed to reason about concurrent behaviors (e.g., [84] cannot detect data races).
OWL belongs to the first category because it targets general programs. Unlike the prior approaches in this category, OWL incorporates concurrency bug detectors to reason about concurrent behaviors, and OWL’s consequence analyzer integrates critical dynamic information (i.e., call stacks) into static analysis to enable comprehensive data-flow, control-flow, and inter-procedural analysis features.
The second category [31, 15, 71, 90] lets static analysis focus on specific behaviors (e.g., APIs) in specific programs to achieve scalability and accuracy. These approaches check web application logic [31], Android applications [15], cross checking security APIs [71], and verifying the Linux Security Module [90]. OWL’s analysis is complementary to these approaches; OWL can be further integrated with these approaches to track concurrency attacks.
Symbolic execution. Symbolic execution is an advanced program analysis technique that can systematically explore a program’s execution paths to find bugs. Researchers have built scalable and effective symbolic execution systems to detect software bugs [34, 68, 33, 35, 19, 86, 20, 23, 21, 63], block malicious inputs [24], preserve privacy in error reports [22], and detect programming rule violations [25]. Specifically, UCKLEE [63] has been shown to effectively detect hundreds of security vulnerabilities in widely used programs. Symbolic execution is orthogonal to OWL; it can augment OWL’s input hints by automatically generating concrete vulnerable inputs.
10 Conclusion
We have presented the first quantitative study on real-world concurrency attacks and OWL, the first analysis framework to effectively detect them. OWL accurately detected a number of known and previously unknown concurrency attacks in large, widely used programs. We believe that our study will attract much more attention to further detecting and defending against concurrency attacks, and that the OWL framework has the potential to bridge the gap between concurrency bugs and their attacks. All our study results, exploit scripts, and OWL source code with raw evaluation results are available at https://github.com/ruigulala/ConAnalysis.
References
[Bibliographic front matter of this article, garbled in extraction: published in RIMS Kôkyûroku (数理解析研究所講究録); authors: Mizuho Iwaihara (岩井原 瑞穂) and Yahiko Kambayashi (上林 弥彦).]
1 Introduction
Recently in the database field, deductive databases and object-oriented databases are two major areas that have attracted many researchers. For deductive databases, several logical query languages, such as LDL1 [1] and ELPS [7], were proposed. They can handle complex objects by introducing set-valued variables into logic programming.
On the other hand, the nested algebra [6], which has been a representative and basic complex object language, turns out to have less expressive power, since it cannot express the transitive closure [9]. To incorporate such fundamental operations for deduction, the power-set algebra was introduced [2], which is an extension of the nested algebra obtained by adding the power-set operator.
However, these approaches cause difficulties in processing queries. These languages are in a sense too expressive, and queries often arise that require too much time to process. Indeed, as the nesting level of sets increases, the processing time becomes exponential, or even double exponential, in the size of the database [4]. It is therefore important to investigate classes of queries that can be computed in polynomial time in set-valued logical/algebraic query languages.
Set constraint queries produce set-valued solutions, which are subsets of given base relations, and query conditions are given as "constraints", which describe the solution sets using first-order quantified predicates on their member tuples.
In [5], we showed several subclasses of queries which can be computed in polynomial time, and other subclasses which are NP-time computable. For NP-complete queries, we considered the use of data dependencies, such as join dependencies (JDs) and functional dependencies (FDs), which are maintained in the database as semantic constraints. If certain data dependencies exist, by utilizing these dependencies we can process the queries in polynomial time.
In [5] we discussed a class of constraints called binary constraints. In this paper, we consider other classes of constraints, called unary constraints. Unary constraints include conditions such
as each solution must contain at least one tuple that satisfies given conditions (existential constraints), and/or all tuples in a solution must satisfy given constraints (universal constraints). We show a processing method for unary constraints which requires polynomial time if binary constraints can be computed in polynomial time.
If we combine the method for unary constraints and the method utilizing JDs and FDs, a problem arises, since JDs and FDs are destroyed while processing unary constraints. For query constraints, we consider conditions which preserve JDs and FDs in order to apply the two methods successfully.
For the class of JDs and FDs, it is generally impossible to express all dependencies in a given relational expression accurately [3]. However, we will mainly be concerned with preserving dependencies, and will also discuss general relational algebra expressions including set difference and union operators.
2 Preliminaries
Relational database
For a given attribute set \( X = \{A_1, A_2, \ldots, A_n\} \), a relation \( R(X) \) on \( X \) is a finite set of mappings \( \tau : \{A_1, A_2, \ldots, A_n\} \mapsto D_1 \times D_2 \times \cdots \times D_n \), where each \( D_i \) is the domain of attribute \( A_i \), and each \( A_i \) is mapped to an element of \( D_i \). Each mapping \( \tau \) shall be called a tuple of \( R(X) \). \( X \) shall be called a relational scheme of \( R(X) \). In general, the relational scheme of a relation \( R \) shall be denoted by \( \underline{R} \).
The relation \( R \) can be represented as a table which has attributes for columns and tuples for rows. In this paper we use the first letters of the alphabet \( A, B, C, \ldots \) for the names of single attributes, and the last letters \( \ldots, X, Y, Z \) for the names of attribute sets. The union of two attribute sets may be simply denoted by the concatenation of their names.
Relational Algebra
Relational algebra is a procedural query language for relational databases. The six major operations of relational algebra are described as follows. The value for an attribute set \( X \) of a tuple \( \mu \) is denoted by \( \mu[X] \).
For relations \( R_1(X) \) and \( R_2(X) \) of the same scheme \( X \), \( R_1 - R_2 \) and \( R_1 \cup R_2 \) are the set difference and union of tuple sets of \( R_1 \) and \( R_2 \), respectively.
For a relation \( R(X) \) and an attribute set \( Y(\subseteq X) \), \( R[Y] = \{\mu[Y] | \mu \in R\} \) is called the projection of \( R \) onto \( Y \).
Let \( \theta \) be one of the comparison operators \( =, \geq, \leq, <, >, \neq \). For \( \theta \), a constant \( c \), a relation \( R(X) \) and an attribute \( A(\in X) \), \( R[A \theta c] = \{ \mu \mid \mu \in R, \mu[A] \theta c \} \) is called the selection of \( R \) on \( A \theta c \).
Let \( R_1(X_1) \) and \( R_2(X_2) \) be relations. The \((natural)\) join of \( R_1 \) and \( R_2 \) shall be denoted by \( R_1 \Join R_2 = R(X_1X_2) \), where \( R(X_1X_2) = \{ \mu | \mu[X_1] \in R_1, \mu[X_2] \in R_2 \} \). The Cartesian product of \( R_1 \) and \( R_2 \) is denoted by \( R_1 \times R_2 \).
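A minimal executable reading of these operations, with a relation modeled as a set of hashable tuples (frozensets of attribute/value pairs). This is a sketch for intuition only, not an efficient implementation:

```python
# Relations as sets of frozensets of (attribute, value) pairs, so tuples
# are hashable and set operations come for free.

def tup(**kv):
    return frozenset(kv.items())

def val(t, attrs):
    d = dict(t)
    return frozenset((a, d[a]) for a in attrs)   # mu[X]: restriction to X

def difference(r1, r2): return r1 - r2           # R1 - R2 (same scheme)
def union(r1, r2):      return r1 | r2           # R1 ∪ R2 (same scheme)
def project(r, attrs):  return {val(t, attrs) for t in r}   # R[Y]
def select(r, attr, theta, c):                   # R[A theta c]
    return {t for t in r if theta(dict(t)[attr], c)}
def join(r1, r2):
    # natural join: combine tuples that agree on the shared attributes
    out = set()
    for t1 in r1:
        for t2 in r2:
            d1, d2 = dict(t1), dict(t2)
            if all(d1[a] == d2[a] for a in d1.keys() & d2.keys()):
                out.add(frozenset({**d1, **d2}.items()))
    return out

R1 = {tup(A=1, B=2), tup(A=1, B=3)}
R2 = {tup(B=2, C=5)}
print(project(R1, ['A']))                          # one tuple: {A: 1}
print(select(R1, 'B', lambda x, c: x == c, 2))     # the tuple with B = 2
print(join(R1, R2))                                # one tuple: {A:1, B:2, C:5}
```

The Cartesian product is the special case of `join` where the two schemes share no attributes.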
Tuple Relational Calculus (TRC)
*Tuple relational calculus* (TRC) is a declarative language for relational databases. TRC is defined as follows.
For a predicate symbol \( p \), \( p(\tau) \) is an atomic formula which states that \( \tau \) is in the relation of \( p \). \( X \theta Y \) is an atomic formula, where each \( X \) and \( Y \) is either a constant or a value of a tuple variable, and \( \theta \) is an arithmetic comparison operator. An atomic formula is a TRC expression.
For the TRC expressions \( f_1 \) and \( f_2 \), and the tuple variable \( \tau \) which appears freely in \( f_1 \) and \( f_2 \), then \( f_1 \land f_2, f_1 \lor f_2, \neg f_1, (\exists \tau)f_1, (\forall \tau)f_1 \) are all TRC expressions.
For the TRC expression \( f \) with unique free variable \( \tau \), the relation value which \( f \) defines is \( F = \{ \tau | f(\tau) \} \).
For each 'safe' TRC expression, there exists a relational algebra expression which defines the same relation [10]. If a TRC expression \( f \) is not safe, \( f \) may define a relation of infinite size.
Functional Dependencies and Join Dependencies
For a relation \( R(X) \), \( R \) is said to satisfy the \textit{functional dependency} (FD) \( X_1 \rightarrow X_2 \ (X_1, X_2 \subseteq X) \) if, for any two tuples \( \mu_1 \) and \( \mu_2 \) of \( R \), \( \mu_1[X_1] = \mu_2[X_1] \) implies \( \mu_1[X_2] = \mu_2[X_2] \).
A relation \( R \) is said to satisfy a \textit{join dependency} (JD) \( \Join[X_1, X_2, \ldots, X_n] \) if \( R = R[X_1] \Join R[X_2] \Join \cdots \Join R[X_n] \) holds. Here we call each \( X_i \) a \textit{component} of the JD.
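Both definitions translate directly into brute-force checks, using the same frozenset-of-pairs representation of tuples (a sketch only; the JD check joins all projections and compares with the original relation):

```python
from functools import reduce

def restrict(t, attrs):
    d = dict(t)
    return tuple(d[a] for a in attrs)

def satisfies_fd(r, x1, x2):
    # FD X1 -> X2: any two tuples agreeing on X1 must agree on X2.
    seen = {}
    for t in r:
        key = restrict(t, x1)
        if key in seen and seen[key] != restrict(t, x2):
            return False
        seen[key] = restrict(t, x2)
    return True

def project(r, attrs):
    return {frozenset((a, dict(t)[a]) for a in attrs) for t in r}

def join2(r1, r2):
    out = set()
    for t1 in r1:
        for t2 in r2:
            d1, d2 = dict(t1), dict(t2)
            if all(d1[a] == d2[a] for a in d1.keys() & d2.keys()):
                out.add(frozenset({**d1, **d2}.items()))
    return out

def satisfies_jd(r, components):
    # JD *[X1,...,Xn]: R equals the join of its projections on the Xi.
    return reduce(join2, (project(r, x) for x in components)) == r

R = {frozenset({('A', 1), ('B', 1), ('C', 1)}),
     frozenset({('A', 1), ('B', 2), ('C', 1)})}
print(satisfies_fd(R, ['A'], ['C']))                # True: A determines C here
print(satisfies_jd(R, [['A', 'B'], ['A', 'C']]))    # True
```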
3 Set constraint query
**Definition 1** : Let \( R \) be a relation of the scheme \( \underline{R} \). A \textit{query constraint} \( C(S) \) is a condition on relations \( S \) that are subsets of \( R \). Three types of query constraints are described as follows:
\[
Cb(S) = \bigwedge_{i=1}^{h} (\forall \mu \in S)(\forall \nu \in S)\, b_i(\mu, \nu) \quad \text{(binary constraints)},
\]
\[
Ca(S) = \bigwedge_{i=1}^{l} (\forall \mu \in S)\, a_i(\mu) \quad \text{(universal constraints)},
\]
\[
Ce(S) = \bigwedge_{i=1}^{k} (\exists \mu \in S)\, e_i(\mu) \quad \text{(existential constraints)},
\]
where $\mu$ and $\nu$ are tuple variables on $R$. $R$ is called a base relation. Each $b_i(\mu, \nu)$ is a TRC expression upon free variables $\mu$ and $\nu$. Each $a_i(\mu)$ and $e_i(\mu)$ is a safe TRC expression upon a free variable $\mu$. Existential and universal constraints are also called unary constraints.
For query constraints $C(S) = Cb(S) \wedge Ca(S) \wedge Ce(S)$, the function $Q_C(R)$ defined below is a set constraint query on $R$.
$$Q_C(R) = \{S \mid S \subseteq R \wedge C(S)\}$$
Each $S \in Q_C(R)$ is called a solution of $Q_C(R)$.
Note that in the above definition for $Cb$ and $Ca$, the conjunctions of universally quantified clauses can be transformed into single universally-quantified clauses.
For the predicate $b(\mu, \nu)$ of a binary constraint, since each pair $\mu$ and $\nu$ of tuples in each solution $S$ must satisfy $b(\mu, \nu)$, $b(\nu, \mu)$, and $b(\mu, \mu)$, we can assume that $b(\mu, \nu)$ is symmetric and reflexive without loss of generality. Thus we can define an undirected graph for the binary predicate $b(\mu, \nu)$ as follows.
**Definition 2:** For the predicate $b(\mu, \nu)$ of a binary constraint $Cb$ and a base relation $R$, let $B(R, b)$ be an undirected graph, called a tuple graph for $R$ and $b$, obtained as follows. The node with label $\mu$ corresponds to a tuple $\mu$ of $R$, and there is an edge between two nodes $\mu$ and $\nu$ in $B(R, b)$ if and only if $b(\mu, \nu)$ holds.
For a set constraint query $Q_{Cb}(R)$ consisting of a binary constraint $Cb$, each solution $S$ of $Q_{Cb}(R)$ corresponds to a clique of the tuple graph $B(R, b)$, since each pair of tuples in $S$ must satisfy $b$.
An undirected graph of $n$ nodes may have up to $2^n$ cliques, thus a set constraint query may have an exponential number of solutions. It is not practical to generate all solutions.
In the following, we consider combinatorial problems described by set constraint queries, where the optimal solutions correspond to maximum-cardinality solutions of the queries. We call maximum-cardinality solutions simply maximum solutions, and we consider algorithms that produce a maximum solution. Note that a query may have several solutions of the same maximum cardinality, in which case the algorithms choose one of them.
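Under this reading, computing a maximum solution of $Q_{Cb}(R)$ amounts to finding a maximum clique of $B(R, b)$. A brute-force sketch, exponential as the discussion above warns, and intended only to make the correspondence concrete (the toy relation mimics the job-assignment setting of Example 1):

```python
# Solutions of a binary-constraint query are exactly the cliques of the
# tuple graph B(R, b); enumerating all subsets is exponential.
from itertools import combinations

def solutions(R, b):
    """All S ⊆ R such that every pair in S satisfies the (symmetric,
    reflexive) predicate b -- i.e. all cliques of B(R, b)."""
    R = list(R)
    sols = []
    for k in range(len(R) + 1):
        for S in combinations(R, k):
            if all(b(m, n) for m, n in combinations(S, 2)):
                sols.append(set(S))
    return sols

def maximum_solution(R, b):
    return max(solutions(R, b), key=len)

# Toy base relation of (E, J, T) tuples with the Example-1 style
# constraint: no employee does two different jobs in the same time slot.
R = {('Mike', 'cashier', 'A'), ('Mike', 'waiter', 'A'), ('Tom', 'cashier', 'A')}
b = lambda m, n: not (m[0] == n[0] and m[2] == n[2]) or m[1] == n[1]
print(len(maximum_solution(R, b)))   # 2: Mike does one job at A, Tom one too
```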
There are two types of complexity measures for database queries. For a set constraint query \( Q_C(R) \), if \( S \in Q_C(R) \) can be decided in time (or space) bounded by a function \( f \) of the total size of relations on which \( C \) is defined, then \( Q_C(R) \) is said to have time (or space) data complexity \( f \). Alternatively, if \( f \) is a function of the length of expression \( C \), then \( Q_C(R) \) has expression complexity \( f \) [11]. In this paper, we will consider only the data complexity measure.
**Example 1:** Let us consider the following job assignment problem. Let \( R_1(EJT) \) be a relation consisting of attribute \( E \) for employee names, attribute \( T \) for time slots, and attribute \( J \) for types of jobs. A tuple of \( R_1 \) means that a certain employee can engage in a certain type of job at a certain time. A job assignment problem is finding a subset \( S \) of \( R_1 \) which satisfies some given constraints. Subset \( S \) is optimal if it is an assignment of the greatest possible number of jobs to employees. Figure 1 shows an instance of \( R_1(EJT) \).
The constraint is that each employee can engage in only one type of job at the same time slot. This constraint is described by the following binary constraint:
\[
Cb_1(S) = (\forall \mu \in S)(\forall \nu \in S)(R_1(\mu) \land R_1(\nu) \land \\
(\neg(\mu[E] = \nu[E] \land \mu[T] = \nu[T]) \lor (\mu[J] = \nu[J])))
\]
\(\square\)
<table>
<thead>
<tr>
<th>E</th>
<th>J</th>
<th>T</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mike</td>
<td>cashier</td>
<td>A</td>
</tr>
<tr>
<td>Mike</td>
<td>cashier</td>
<td>B</td>
</tr>
<tr>
<td>Mike</td>
<td>waiter</td>
<td>A</td>
</tr>
<tr>
<td>Tom</td>
<td>cashier</td>
<td>A</td>
</tr>
<tr>
<td>Tom</td>
<td>kitchen</td>
<td>B</td>
</tr>
<tr>
<td>Tom</td>
<td>waiter</td>
<td>C</td>
</tr>
<tr>
<td>Tom</td>
<td>waiter</td>
<td>D</td>
</tr>
<tr>
<td>John</td>
<td>cashier</td>
<td>B</td>
</tr>
<tr>
<td>John</td>
<td>waiter</td>
<td>D</td>
</tr>
</tbody>
</table>
Figure 1: An instance of \( R_1(EJT) \)
The above binary constraint \( Cb_1 \) is equivalent to an FD \( ET \rightarrow J \). Thus we can use FD sets to describe query constraints.
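The claimed equivalence can be checked mechanically on the Figure 1 instance: a subset $S$ satisfies $Cb_1$ exactly when it satisfies the FD $ET \rightarrow J$. A brute-force check over all $2^9$ subsets, for illustration only:

```python
# Verify on the Figure 1 instance that the binary constraint Cb1 holds
# for a subset S exactly when S satisfies the FD ET -> J.
from itertools import combinations

R1 = {('Mike', 'cashier', 'A'), ('Mike', 'cashier', 'B'),
      ('Mike', 'waiter', 'A'), ('Tom', 'cashier', 'A'),
      ('Tom', 'kitchen', 'B'), ('Tom', 'waiter', 'C'),
      ('Tom', 'waiter', 'D'), ('John', 'cashier', 'B'),
      ('John', 'waiter', 'D')}          # tuples are (E, J, T)

def cb1(S):
    # Each pair agreeing on E and T must agree on J.
    return all(not (m[0] == n[0] and m[2] == n[2]) or m[1] == n[1]
               for m, n in combinations(S, 2))

def fd_et_j(S):
    seen = {}
    for e, j, t in S:
        if seen.setdefault((e, t), j) != j:
            return False
    return True

tuples = sorted(R1)
for k in range(len(tuples) + 1):
    for S in combinations(tuples, k):
        assert cb1(S) == fd_et_j(S)
print("Cb1 and the FD ET -> J agree on all subsets of R1")
```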
**Definition 3:** A set constraint query whose binary constraints can be expressed as a set of FDs is called an \textit{FD set constraint query}. \(\square\)
4 Processing unary constraints
Example 2: In addition to the constraints of Example 1, we assume that the assignment must include at least one female employee. This constraint can be described by the following existential constraint $Ce_1$:
$$Ce_1(S) = (\exists \mu \in S)(R_1(\mu) \wedge (\exists \nu)(R_2(\nu) \wedge \mu[E] = \nu[E] \wedge \nu[M] = \text{"female"})),$$
where $R_2(EMA)$ is a relation which contains tuples such that employee $E$ has sex $M$ and age $A$. The query for the above condition is described as $Q_{CBACe1}(R_1)$.
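The existential constraint $Ce_1$ is equally direct to check. In the sketch below (hypothetical encoding, not part of the paper), $R_2$ tuples are $(E, M, A)$ triples:

```python
def satisfies_ce1(S, R2):
    """Ce_1: S must contain at least one tuple whose employee
    is recorded as female in R_2 (tuples (E, M, A))."""
    female = {e for (e, m, a) in R2 if m == "female"}
    return any(mu[0] in female for mu in S)
```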
In the following, we show an algorithm which solves a query of the form $Q_{Cb \wedge Cu}$, consisting of a binary constraint $Cb$ and a unary constraint $Cu$, by first converting it into a set of queries of the form $Q_{Cb}$ which do not include the unary constraint $Cu$.
Every unary constraint $Cu$ can be described as the conjunction of a universal constraint $Ca$ and an existential constraint $Ce$:
$$Cu(S) = Ca(S) \wedge Ce(S),$$
where
$$Ca(S) = (\forall \mu \in S)g(\mu), \quad Ce(S) = \bigwedge_{i=1}^{k}(\exists \mu \in S)f_i(\mu).$$
Algorithm 1: Evaluation of set constraint queries including unary constraints
Input: A set constraint query $Q_{Cb \wedge Cu}(R)$, where $Cb$ is a binary constraint, $Cu$ is a unary constraint, and $R$ is a base relation.
Output: A maximum solution of $Q_{Cb \wedge Cu}(R)$.
Method: We assume that this algorithm can call some Algorithm $\alpha$ that produces one maximum solution of $Q_{Cb}(R)$.
(1) Evaluate the predicates $f_1, \ldots, f_k$ and $g$ into the finite relations $F_1, \ldots, F_k$ and $G$, respectively. Note that only the tuples of the base relation need be tested against these predicates; therefore the relations $F_1, \ldots, F_k$ and $G$ are finite subsets of the base relation $R$. Each solution must be a subset of $G$ to satisfy the universal constraint.
(2) Construct the tuple graph $B(G,b)$ from $G$ and $b$.
(3) Find all tuple sets \( q_1, \ldots, q_m \) such that for every \( q_j = \{ \mu_1, \ldots, \mu_k \} \) (the \( \mu_i \)'s need not be distinct), each \( \mu_i \) is taken from \( F_i \), and every pair of tuples \( \mu, \mu' \in q_j \) satisfies \( b(\mu, \mu') \). Each \( q_j \) thus corresponds to a clique of \( B(G, b) \). Each solution of \( Q_{Cb \wedge Cu}(R) \) must contain \( q_j \) for some \( j \) to satisfy the existential constraints \( Ce \).
(4) For each clique \( q_i, i = 1, \ldots, m \), let \( W_i \) be the relation obtained by removing from \( G \) all tuples \( \tau \) such that \( b(\tau, \mu) \) fails to hold for some tuple \( \mu \in q_i \).
(5) For each \( W_i, i = 1, \ldots, m \), find a maximum solution \( S_i \) of \( Q_{Cb}(W_i) \) by applying Algorithm \( \alpha \). Report one of the \( S_i \)'s of maximum cardinality.
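Under the stated assumption that some Algorithm \( \alpha \) solves \( Q_{Cb} \), the five steps above can be sketched as follows (Python; all names are ours, and `solve_binary` stands for the assumed Algorithm \( \alpha \)):

```python
from itertools import product

def max_solution_with_unary(R, b, g, fs, solve_binary):
    """Sketch of Algorithm 1.
    R: base relation (set of tuples); b(mu, nu): binary predicate of Cb;
    g: universal predicate of Ca; fs: existential predicates f_1, ..., f_k;
    solve_binary: the assumed Algorithm alpha for Q_Cb."""
    # Step 1: evaluate the predicates over the base relation.
    G = {t for t in R if g(t)}                  # every solution lies inside G
    Fs = [[t for t in G if f(t)] for f in fs]
    best = None
    # Steps 2-3: each clique q = (mu_1, ..., mu_k), mu_i taken from F_i,
    # with every pair of its tuples satisfying b.
    for q in product(*Fs):
        if all(b(x, y) for x in q for y in q):
            # Step 4: drop tuples incompatible with some tuple of the clique.
            W = {t for t in G if all(b(t, mu) for mu in q)}
            # Step 5: maximum solution of Q_Cb(W) via Algorithm alpha.
            S = solve_binary(W, b)
            if best is None or len(S) > len(best):
                best = S
    return best                                 # None if no clique exists
```

If no clique is generated the function returns `None`, which corresponds to the early-abort remark after Theorem 1.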
**Theorem 1:** For a set constraint query on \( R \) consisting of a binary constraint \( Cb \), if a maximum solution of \( Q_{Cb}(R) \) can be computed in polynomial time for any base relation \( R \), then a maximum solution of the query \( Q_{Cb \wedge Cu}(R) \) consisting of \( Cb \) and an arbitrary unary constraint \( Cu \) can also be computed in polynomial time.
**Proof:** It is easily seen that the maximum \( S_i \) in Step 5 of Algorithm 1 satisfies all of the constraints \( Cb \), \( Ca \) and \( Ce \), and is maximum among the solutions of \( Q_{Cb \wedge Cu}(R) \).
We now show that the computation may be performed in polynomial time. Let \( n \) be the number of tuples in the base relation \( R \), denoted by \( |R| \).
Since every relational algebra expression can be evaluated in polynomial time [11], and since \( |F_1| \leq n, \ldots, |F_k| \leq n \) and \( |G| \leq n \), the relations \( F_1, \ldots, F_k \) and \( G \) may all be computed in polynomial time.
Step 2 can be computed in polynomial time, because the binary predicate \( b(\tau, \mu) \) defines a relation which is a subset of \( G \times G \), corresponding to the edge set of \( B(G, b) \). In Step 3, since each clique \( q_i \) corresponds to an element of the Cartesian product \( F_1 \times \cdots \times F_k \), the number of cliques, \( m \), is no more than \( n^k \). \( |G| \leq n \) implies that \( |W_i| \leq n \) for \( 1 \leq i \leq m \). From the hypothesis, a maximum solution of \( Q_{Cb}(W_i) \) can be computed in polynomial time, that is, \( O(n^p) \) time for some positive constant \( p \). Then Step 5 can be computed in \( O(n^p m) \) time, hence \( O(n^{p+k}) \) time. Here \( k \), the number of predicates in the existential constraint, is independent of \( n \). \( \square \)
Note that in Step 3 of Algorithm 1, if no clique \( q_i \) is generated, then there is no solution for \( Q_{Cb \wedge Cu}(R) \). In this case, we can abort the computation before evaluating \( Q_{Cb} \) in Step 5. On the other hand, if there exists a non-empty \( q_i \), it is itself a solution for \( Q_{Cb \wedge Cu}(R) \).
5 Processing queries utilizing database dependencies
In FD set constraint queries, FDs are used as the query constraints. However, FDs have traditionally been used to express semantic constraints and are often maintained in database management systems. Since query constraints and data dependencies are expressed at the same level, we can take advantage of these data dependencies for query processing. In this section, we give an example showing that an NP-complete query for arbitrary input base relations can be processed in polynomial time if certain dependencies such as FDs and JDs hold in the base relation. More detailed query processing methods utilizing data dependencies are discussed in [5].
Example 3: In the job assignment problem of the previous examples, let us suppose that each employee is assigned only one type of job, and each type of job requires only one time slot and only one employee. This constraint amounts to computing a one-to-one correspondence between the values of $E$, $T$ and $J$, and is expressed by the following set of FDs used as a binary constraint:
$$Cb_2 = \{E \rightarrow J, J \rightarrow T, T \rightarrow E\}$$
Proposition 1: The problem of deciding whether $Q_{Cb_2}(R_1)$ has a solution of $n$ tuples, for a given integer $n$, is NP-complete.
Proof: (sketch) By a reduction from the 3D-MATCHING problem [8] to $Q_{Cb_2}(R_1)$. □
If the base relation $R_1(ETJ)$ satisfies certain data dependencies, the NP-completeness of Proposition 1 no longer applies (assuming $P \neq NP$). For example, if all employees care only about the types of jobs and don't care about the time slots, $R_1$ satisfies $\bowtie[EJ, JT]$, and can be decomposed into $R_1[EJ]$ and $R_1[JT]$. In this case, we can compute a maximum solution of $Q_{Cb_2}(R_1)$ by constructing a network from $R_1[EJ]$ and $R_1[JT]$, and then applying a maximum network flow algorithm to this network [5].
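To give the flavor of that reduction: once $R_1$ decomposes into $R_1[EJ]$ and $R_1[JT]$, the core of the computation is a maximum matching in a layered network. The sketch below shows plain augmenting-path (Kuhn) matching on one layer; it illustrates the idea only and is not the actual construction of [5]:

```python
def max_matching(edges):
    """Maximum bipartite matching by augmenting paths (Kuhn's algorithm).
    edges: list of (left, right) pairs, e.g. the tuples of R_1[EJ]."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    match = {}  # right vertex -> left vertex currently matched to it

    def try_augment(u, seen):
        for v in adj.get(u, []):
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-routed elsewhere
            if v not in match or try_augment(match[v], seen):
                match[v] = u
                return True
        return False

    size = 0
    for u in adj:
        if try_augment(u, set()):
            size += 1
    return size, match
```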
6 Changes of data dependencies in the presence of unary constraints
In the previous section, we observed that if certain dependencies exist, there is considerable reduction in the computational complexity of queries. However, by adding unary constraints to queries, the required data dependencies of base relations may be destroyed.
Example 4: In addition to the constraint $Cb_2$ in Example 3, consider the following constraint. Let us suppose that the employees younger than 20 must not work in the midnight time slot.
This condition is denoted by the following universal constraint $Ca_2$:
$$Ca_2(S) = (\forall \mu \in S)g_1(\mu),$$
where
$$g_1(\mu) = (R_1(\mu) \land \neg((\exists \nu)(R_2(\nu) \land \mu[E] = \nu[E] \land \mu[T] = "D" \land \nu[A] < 20))).$$
The relation of the TRC expression $g_1$, namely $G_1 = \{\mu | g_1(\mu)\}$, is transformed into the following relational algebra expression:
$$G_1 = R_1 - (R_1[T = "D"] \bowtie R_2[A < 20])[ETJ].$$
The above expression contains a set difference. The relation $G_1$ does not satisfy $\bowtie[EJ, JT]$ while $R_1$ does. Consider the instances provided in Figure 2. Figure 2-(a) shows an instance of $R_1$ that satisfies $\bowtie[EJ, JT]$. Figure 2-(b) shows an instance of $R_2(EMA)$. Figure 2-(c) shows the relation $G_1$ produced by the above relational expression. Since $G_1$ does not contain the tuple (Tom, kitchen, D), $G_1$ does not satisfy $\bowtie[EJ, JT]$. □
<table>
<thead>
<tr>
<th>$E$</th>
<th>$J$</th>
<th>$T$</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tom</td>
<td>kitchen</td>
<td>C</td>
</tr>
<tr>
<td>Tom</td>
<td>kitchen</td>
<td>D</td>
</tr>
<tr>
<td>John</td>
<td>kitchen</td>
<td>C</td>
</tr>
<tr>
<td>John</td>
<td>kitchen</td>
<td>D</td>
</tr>
</tbody>
</table>
(a) $R_1(EJT)$
<table>
<thead>
<tr>
<th>$E$</th>
<th>$M$</th>
<th>$A$</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tom</td>
<td>male</td>
<td>18</td>
</tr>
<tr>
<td>John</td>
<td>male</td>
<td>24</td>
</tr>
</tbody>
</table>
(b) $R_2(EMA)$
<table>
<thead>
<tr>
<th>$E$</th>
<th>$J$</th>
<th>$T$</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tom</td>
<td>kitchen</td>
<td>C</td>
</tr>
<tr>
<td>John</td>
<td>kitchen</td>
<td>C</td>
</tr>
<tr>
<td>John</td>
<td>kitchen</td>
<td>D</td>
</tr>
</tbody>
</table>
(c) $G_1(EJT)$
Figure 2: Destruction of a join dependency
In the case where the data dependencies used by some algorithm $\alpha$ to satisfy binary constraints are destroyed while processing unary constraints, as in Steps 1–4 of Algorithm 1, we cannot use Algorithm $\alpha$ at Step 5 anymore. Furthermore, there are cases where a query which
is polynomial-time computable when certain dependencies exist, becomes NP-complete when unary constraints are added.
Suppose that a base relation satisfies a set \( D \) of data dependencies, and that a query \( Q_{Cb} \) of a binary constraint \( Cb \) can be processed by an algorithm \( \alpha \) which presupposes the existence of \( D \). If each \( W_i \) in Step 4 satisfies \( D \), then we can use Algorithm \( \alpha \) at Step 5 without changing Algorithm 1. Therefore, we must specify conditions for binary and unary constraints which preserve data dependencies in the steps of Algorithm 1.
7 Query constraints preserving dependency sets
In this section, we discuss sufficient conditions for binary and unary constraints which guarantee that unary constraints may be processed in polynomial time, by showing that a set \( D \) of JDs and FDs is preserved during the execution of Algorithm 1.
7.1 FD sets
**Lemma 1**: If a relation \( R(Z) \) satisfies an FD \( X \rightarrow Y \) (\( X, Y \subseteq Z \)), then any \( S \subseteq R \) satisfies \( X \rightarrow Y \).
**Proof**: Suppose that \( S \) does not satisfy \( X \rightarrow Y \). Then there exist tuples \( \mu, \nu \in S \) such that \( \mu[X] = \nu[X] \) and \( \mu[Y] \neq \nu[Y] \). Since \( \mu, \nu \in R \), \( R \) does not satisfy \( X \rightarrow Y \), which is a contradiction. \( \square \)
Each \( W_i \) in Algorithm 1 is a subset of the base relation \( R \). By Lemma 1, each \( W_i \) satisfies all FDs which \( R \) satisfies.
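Lemma 1 is easy to confirm mechanically. In the sketch below (our own encoding: tuples as dicts), the FD \( ET \rightarrow J \) is checked on a small relation and on all of its subsets:

```python
from itertools import combinations

def satisfies_fd(R, X, Y):
    """Does relation R (a list of dict tuples) satisfy the FD X -> Y?"""
    seen = {}
    for t in R:
        key = tuple(t[a] for a in X)
        val = tuple(t[a] for a in Y)
        if seen.setdefault(key, val) != val:
            return False       # same X-value, different Y-value
    return True

R = [{"E": "Mike", "T": "A", "J": "cashier"},
     {"E": "Mike", "T": "B", "J": "cashier"},
     {"E": "Tom",  "T": "A", "J": "waiter"}]

# R satisfies ET -> J, and by Lemma 1 so does every subset of R.
assert satisfies_fd(R, ["E", "T"], ["J"])
assert all(satisfies_fd(list(S), ["E", "T"], ["J"])
           for r in range(len(R) + 1)
           for S in combinations(R, r))
```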
7.2 JD sets
For a TRC expression \( f(\mu) \) on \( R \) that has \( \mu \) as a free variable, let \( att(f) \) (a subset of the attributes of \( R \)) be the union of the attributes of \( \mu \) appearing in formulas of the form \( X \theta Y \) in \( f \). For instance, for \( g_1 \) of Example 4, \( att(g_1) = ET \).
**Lemma 2**: Suppose that a relation \( R \) satisfies a JD \( j = [X_1, \ldots, X_m] \). For a safe TRC expression \( f(\mu) \) on \( R \), if there exists a component \( X_h \) of \( j \) such that \( att(f) \subseteq X_h \), then for \( F = \{ \mu | f(\mu) \} \),
\[
F = F[X_h] \bowtie R[\bigcup_{i \neq h} X_i].
\]
**Proof**: Suppose that \( \mu \in F \). Then \( \mu[X_h] \in F[X_h] \). Since \( F \subseteq R \), we have \( F[X_h] \subseteq R[X_h] \), and
\[
\mu \in \{\mu[X_h]\} \bowtie R[\bigcup_{i \neq h} X_i] \subseteq F[X_h] \bowtie R[\bigcup_{i \neq h} X_i].
\]
Suppose that $\mu \in F[X_h] \Join R[\bigcup_{i \neq h} X_i]$. Since $j$ implies $\Join[X_h, \bigcup_{i \neq h} X_i]$, we have $\mu \in R$. The attributes of $\mu$ that appear in $f(\mu)$ are included in $X_h$, and $\mu[X_h] \in F[X_h]$, hence the values of $\mu$ satisfy $f$. Therefore $\mu \in F$. $\square$
**Lemma 3**: For a safe TRC expression $f(\mu)$ on $R$, suppose that $f(\mu) = f_1(\mu) \land \ldots \land f_k(\mu)$. Also suppose that a relation $R$ satisfies $j = \Join[X_1, \ldots, X_m]$. For $1 \leq i \leq k$, if there exists a component $X_{p(i)}$ ($1 \leq p(i) \leq m$) of $j$ such that $\text{att}(f_i) \subseteq X_{p(i)}$, then $F = \{ \mu \mid f(\mu) \}$ satisfies $j$.
**Proof**: Let $F_i = \{ \mu \mid f_i(\mu) \}$; then $F_i \subseteq R$ and $F = \bigcap_i F_i$. Suppose that $F$ does not satisfy $j$. Then there exist $m$ (not necessarily distinct) tuples $\nu_1, \ldots, \nu_m$ in $F$ such that the tuple $\nu$ defined by $\nu[X_1] = \nu_1[X_1], \ldots, \nu[X_m] = \nu_m[X_m]$ is not in $F$. Since each $\nu_i \in R$ and $R$ satisfies $j$, we have $\nu \in R$. Since $\nu \notin F$, for some $h$, $\nu \notin F_h$, while $\nu[X_{p(h)}] = \nu_{p(h)}[X_{p(h)}] \in F[X_{p(h)}] \subseteq F_h[X_{p(h)}]$. By Lemma 2, $F_h = F_h[X_{p(h)}] \bowtie R[\bigcup_{i \neq p(h)} X_i]$, hence $\nu \in \{ \nu[X_{p(h)}] \} \bowtie R[\bigcup_{i \neq p(h)} X_i] \subseteq F_h$. Thus $\nu \in F_h$, a contradiction. $\square$
Henceforth, we will say that a TRC expression $f(\mu)$ is consistent with $j$ if it satisfies the condition of Lemma 3.
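A join dependency can likewise be checked by rebuilding the join of the projections and comparing with the relation. The sketch below (our encoding, not the paper's) reproduces the Figure 2 situation, where $R_1$ satisfies $\bowtie[EJ, JT]$ but $G_1$ does not:

```python
from itertools import product

def project(R, X):
    return {tuple(t[a] for a in X) for t in R}

def satisfies_jd(R, components, attrs):
    """Does R (dict tuples over attrs) satisfy join[components]?"""
    joined = set()
    for combo in product(*(sorted(project(R, X)) for X in components)):
        t, ok = {}, True
        for X, proj in zip(components, combo):
            for a, v in zip(X, proj):
                if t.setdefault(a, v) != v:  # projections disagree: no join tuple
                    ok = False
        if ok:
            joined.add(tuple(t[a] for a in attrs))
    return joined == {tuple(t[a] for a in attrs) for t in R}

R1 = [{"E": e, "J": "kitchen", "T": t}
      for e in ("Tom", "John") for t in ("C", "D")]
G1 = [t for t in R1 if not (t["E"] == "Tom" and t["T"] == "D")]
```

With this encoding, `satisfies_jd(R1, [["E","J"], ["J","T"]], ["E","J","T"])` holds, and the same call on `G1` fails, exactly as Example 4 claims.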
7.3 Sufficient conditions for preserving dependency sets
**Theorem 2**: Suppose that, for a set constraint query $Q_{Cb}$ on $R$ consisting of a binary constraint $Cb$, a maximum solution of $Q_{Cb}(R)$ can be computed in polynomial time if $R$ satisfies a set $D$ of JDs and FDs. Then, for an arbitrary universal constraint $Ca$ whose TRC expression on $R$ is consistent with each JD of $D$, a maximum solution of the query $Q_{Cb\wedge Ca}(R)$ can be computed in polynomial time.
**Proof**: The maximum solution of the query $Q_{Cb\wedge Ca}(R)$ can be computed by Algorithm 1, since by Lemma 1 and Lemma 3, the relation $G$ in Step 1 satisfies all the dependencies of $D$. $\square$
**Theorem 3**: For $Q_{Cb}(R)$ and $D$ defined in Theorem 2, consider the predicate $b(\mu, c)$ obtained by substituting a variable $\nu$ of the binary predicate $b(\mu, \nu)$ of $Cb$ by a constant tuple $c$. If $b(\mu, c)$ is consistent with each JD of $D$, then for an arbitrary existential constraint $Ce$ on $R$, a maximum solution of the query $Q_{Cb\wedge Ce}(R)$ can be computed in polynomial time.
**Proof**: By Theorem 1, it is sufficient to show that each $W_i$ in Step 4 of Algorithm 1 satisfies $D$. By Lemma 1, $W_i$ satisfies each FD in $D$. For each $W_i$, let $q_i = \{ c_1, \ldots, c_h \}$, where the $c_j$'s are constant tuples, not necessarily distinct. Then, $W_i$ can be written
$$W_i = \{ \mu \mid b(\mu, c_1) \land \ldots \land b(\mu, c_h) \}.$$
Note that $b(\mu, \nu)$ is reflexive and symmetric. The above expression is consistent with each JD of $D$ by Lemma 3 and the hypothesis of the theorem. Therefore $W_i$ satisfies each JD of $D$. $\square$
**Theorem 4:** For $Q_{Cb}(R)$ and $D$ of Theorem 2, suppose that the binary constraint $Cb$ is equivalent to a set $F$ of FDs. For each $X \rightarrow Y$ in $F$ and each JD $j$ in $D$, if $X \cup Y$ is contained in a component of $j$, then for an arbitrary existential constraint $Ce$ on $R$, a maximum solution of the query $Q_{Cb\wedge Ce}(R)$ can be computed in polynomial time.
**Proof:** Since the binary predicate of $Cb$ can be denoted as a conjunction of clauses corresponding to FDs of $F$, $Cb$ satisfies the condition of Theorem 3. $\square$
**References**
PROFESSOR: Well, yesterday we learned a bit about symbolic manipulation, and we wrote a rather stylized program to implement a pile of calculus rules from the calculus book. Here on the transparencies, we see a bunch of calculus rules from such a book. And, of course, what we did is sort of translate these rules into the language of the computer. But, of course, that's a sort of funny strategy. Why should we have to translate these rules into the language of the computer? And what do I really mean by that?
The program we wrote yesterday was very stylized. It was a conditional, a dispatch on the type of the expression as observed by the rules. What we see here are rules that say: if the expression the derivative is being taken of is a constant, then do one thing. If it's a variable, do another thing. If it's a product of a constant times a variable, do something, and so on. There's sort of a dispatch there on a type.
Well, since it has such a stylized behavior and structure, is there some other way of writing this program that's more clear? Well, what's a rule, first of all? What are these rules?
Let's think about that. Rules have parts. If you look at these rules in detail, what you see, for example, is the rule has a left-hand side and a right-hand side. Each of these rules has a left-hand side and the right-hand side. The left-hand side is somehow compared with the expression you're trying to take the derivative of. The right-hand side is the replacement for that expression. So all rules on this page are something like this.
I have patterns, and somehow, I have to produce, given a pattern, a skeleton. This is a rule. A pattern is something that matches, and a skeleton is something you substitute into in order to get a new expression. So what that means is that the pattern is matched against the expression, which is the source expression. And the result of the application of the rule is to produce a new expression, which I'll call a target, by instantiation of a skeleton. That's called instantiation. So that is the process by which these rules are described.
What I'd like to do today is build a language and a means of interpreting that language, a means of executing that language, where that language allows us to directly express these rules. And what we're going to do is instead of bringing the rules to the level of the computer by writing a program that is those rules in the computer's language---at the moment, in a Lisp---we're going to bring the computer to the level of us by writing a way by which the computer can understand rules of this sort.
This is slightly emphasizing the idea that we had last time that we're trying to make a solution to a class of problems rather than a particular one. The problem is if I want to write rules for a different piece of mathematics, say, to simple algebraic simplification or something like that, or manipulation of trigonometric functions, I would have to write a different program in using yesterday's method. Whereas I would like to encapsulate all of the
things that are common to both of those programs, meaning the idea of matching, instantiation, the control structure, which turns out to be very complicated for such a thing, I'd like to encapsulate that separately from the rules themselves.
So let's look at, first of all, a representation. I'd like to use the overhead here. I'd like-- there it is. I'd like to look at a representation of the rules of calculus for derivatives in a sort of simple language that I'm writing right here. Now, I'm going to avoid--I'm going to avoid worrying about syntax. We can easily pretty this, and I'm not interested in making-- this is indeed ugly. This doesn't look like the beautiful text set $dx$ by $dt$ or something that I'd like to write, but that's not essential. That's sort of an accidental phenomenon.
Here, we're just worrying about the fact that the structure of the rules is that there is a left-hand side here, represents the thing I want to match against the derivative expression. This is the representation I'm going to say for the derivative of a constant, which we will call $c$ with respect to the variable we will call $v$. And what we will get on the right-hand side is 0. So this represents a rule.
The next rule will be the derivative of a variable, which we will call $v$ with respect to the same variable $v$, and we get a 1. However, if we have the derivative of a variable called $u$ with respect to a different variable $v$, we will get 0. I just want you to look at these rules a little bit and see how they fit together. For example, over here, we're going to have the derivative of the sum of an expression called $x_1$ and an expression called $x_2$. These things that begin with question marks are called pattern variables in the language that we're inventing, and you see we're just making it up, so pattern variables for matching.
And so in this-- here we have the derivative of the sum of the expression which we will call $x_1$. And the expression we will call $x_2$ with respect to the variable we call $v$ will be-- here is the right-hand side: the sum of the derivative of that expression $x_1$ with respect to $v$-- the right-hand side is the skeleton-- and the derivative of $x_2$ with respect to $v$. Colons here will stand for substitution objects. They're--we'll call them skeleton evaluations.
So let me put up here on the blackboard for a second some syntax so we'll know what's going on for this rule language. First of all, we're going to have to worry about the pattern matching. We're going to have things like a symbol like foo matches exactly itself. The expression $f(a, b)$ will be used to match any list whose first element is $f$, whose second element is $a$, and whose third element is $b$.
Also, another thing we might have in a pattern is a question mark with some variable like $x$. And what that means, it says: matches anything, which we will call $x$. Question mark $c$ $x$ will match only constants-- so this is something which matches a constant, which we will call $x$. And question mark $v$ $x$ will match a variable, which we call $x$.
This is sort of the language we're making up now. If I match two things against each other, then they are compared element by element. But elements in the pattern may contain these syntactic variables, pattern variables, which will be used to match arbitrary objects. And we'll get that object as the value in the name $x$ here, for example.
Now, when we make skeletons for instantiation, well, then we have things like this: foo, a symbol, instantiates to itself. Something which is a list, like f of a and b, instantiates to a 3-list-- a list of three elements-- which are the results of instantiating each of f, a, and b. And colon x-- well, that instantiates to the value of x as in the matched pattern.
So going back to the overhead here, we see-- we see that all of those kinds of objects, we see here a pattern variable which matches a constant, a pattern variable which matches a variable, a pattern variable which will match anything. And if we have two instances of the same name, like this is the derivative of the expression which is a variable only whose name will be v with respect to some arbitrary expression which we will call v, since this v appears twice, we're going to want that to mean they have to be the same.
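The matching and instantiation conventions just described can be sketched in Python (a hypothetical reconstruction, not the lecture's Lisp code). Patterns and skeletons are nested tuples, pattern variables are tuples like `("?", "x")`, and skeleton evaluations are tuples like `(":", "x")`:

```python
def match(pat, exp, dictionary):
    """Match pattern against expression: ("?", x) matches anything,
    ("?c", x) constants (numbers here), ("?v", x) variables (symbols).
    Returns an extended dictionary, or None on failure."""
    if dictionary is None:
        return None
    if isinstance(pat, tuple) and pat and pat[0] in ("?", "?c", "?v"):
        kind, name = pat
        if kind == "?c" and not isinstance(exp, (int, float)):
            return None
        if kind == "?v" and not isinstance(exp, str):
            return None
        if name in dictionary and dictionary[name] != exp:
            return None  # repeated names must match the same thing
        d = dict(dictionary)
        d[name] = exp
        return d
    if isinstance(pat, tuple):
        if not isinstance(exp, tuple) or len(pat) != len(exp):
            return None
        for p, e in zip(pat, exp):
            dictionary = match(p, e, dictionary)
            if dictionary is None:
                return None
        return dictionary
    return dictionary if pat == exp else None

def instantiate(skel, dictionary):
    """Fill a skeleton: (":", x) is replaced by the dictionary value of x."""
    if isinstance(skel, tuple) and skel and skel[0] == ":":
        return dictionary[skel[1]]
    if isinstance(skel, tuple):
        return tuple(instantiate(s, dictionary) for s in skel)
    return skel
```

Repeated pattern-variable names are forced to match the same thing, which is exactly the consistency requirement just described for the two occurrences of v.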
The only consistent match is that those are the same. So here, we're making up a language. And in fact, that's a very nice thing to be doing. It's so much fun to make up a language. And you do this all the time. And the really most powerful design things you ever do are sort of making up a language to solve problems like this.
Now, here we go back here and look at some of these rules. Well, there's a whole set of them. I mean, there's one for addition and one for multiplication, just like we had before. The derivative of the product of x1 and x2 with respect to v is the sum of the product of x1 and the derivative x2 with respect to v and the product of the derivative of x1 and x2. And here we have exponentiation. And, of course, we run off the end down here. We get as many as we like. But the whole thing over here, I'm giving this--this list of rules the name "derivative rules."
What would we do with such a thing once we have it? Well, one of the nicest ideas, first of all, is I'm going to write for you, and we're going to play with it all day. What I'm going to write for you is a program called simplifier, the general-purpose simplifier. And we're going to say something like define dsimp to be a simplifier of the derivative rules. And what simplifier is going to do is, given a set of rules, it will produce for me a procedure which will simplify expressions containing the things that are referred to by these rules.
So here will be a procedure constructed for your purposes to simplify things with derivatives in them such that, after that, if we're typing at some Lisp system, and we get a prompt, and we say dsimp, for example, of the derivative of the sum of x and y with respect to x-- note the quote here because I'm talking about the expression which is the derivative-- then I will get back as a result plus 1 0. Because the derivative of x plus y is the derivative of x plus the derivative of y. The derivative of x with respect to x is 1. The derivative of y with respect to x is 0. That's what we're going to get-- I haven't put in any simplification at that level-- algebraic simplification-- yet.
Of course, once we have such a thing, then we can--then we can look at other rules. So, for example, we can, if we go to the slide, OK? Here, for example, are other rules that we might have, algebraic manipulation rules, ones that would be used for simplifying algebraic expressions. For example, just looking at some of these, the left-hand side says any operator applied to a constant e1 and a constant e2 is the result of evaluating that operator on the constants e1 and e2. Or an operator, applied to e1, any expression e1 and a constant e2, is going to move the constant forward. So that'll turn into the operator with e2 followed by e1. Why I did that, I don't know. It wouldn't work if I had division, for example. So there's a bug in the rules, if you like.
So the sum of 0 and e is e. The product of 1 and any expression e is e. The product of 0 and any expression e is 0. Just looking at some more of these rules, we could have arbitrarily complicated ones. We could have things like the product of the constant e1 and any constant e2 with e3 is the result of multiplying the result of multiplying now the constants e1 and e2 together and putting e3 there. So it says combine the constants that I had, which was if I had a product of e1 and e2 and e3 just multiply--I mean and e1 and e2 are both constants, multiply them.
And you can make up the rules as you like. There are lots of them here. There are things as complicated, for example, as-- oh, I suppose down here some distributive law, you see. The product of any object c and the sum of d and e gives the result as the same as the sum of the product of c and d and the product of c and e.
Now, what exactly these rules are doesn't very much interest me. We're going to be writing the language that will allow us to interpret these rules so that we can, in fact, make up whatever rules we like, another whole language of programming. Well, let's see. I haven't told you how we're going to do this. And, of course, for a while, we're going to work on that. But there's a real question of what is--what am I going to do at all at a large scale? How do these rules work? How is the simplifier program going to manipulate these rules with your expression to produce a reasonable answer?
Well, first, I'd like to think about these rules as being some sort of deck of them. So here I have a whole bunch of rules, right? Each rule-- here's a rule-- has a pattern and a skeleton. I'm trying to make up a control structure for this.
Now, what I have is a matcher, and I have something which is an instantiater. And I'm going to pass from the matcher to the instantiater some set of meaning for the pattern variables, a dictionary, I'll call it. A dictionary, which will say x was matched against the following subexpression and y was matched against another following subexpression. And from the instantiater, I will be making expressions, and they will go into the matcher. They will be expressions. And the patterns of the rules will be fed into the matcher, and the skeletons from the same rule will be fed into the instantiater.
Now, this is a little complicated because when you have something like an algebraic expression, where some-- the rules are intended to be able to allow you to substitute equal for equal. These are equal transformation rules. So all subexpressions of the expression should be looked at. You give it an expression, this thing, and the rules should be cycled around.

First of all, for every subexpression of the expression you feed in, all of the rules must be tried and looked at. And if any rule matches, then this process occurs. The dictionary-- the dictionary is to have some values in it. The instantiater makes a new expression, which basically replaces that part of the expression that was matched in your original expression. And then, of course, we're going to recheck that, going to go around these rules again, seeing if that could be simplified further. And then we're going to do that for every subexpression until the thing no longer changes.
You can think of this as sort of an organic process. You've got some sort of stew, right? You've got bacteria or something, or enzymes in some gooey mess. And these enzymes change things. They attach to your expression, change it, and then they go away. And they have to match. The key-in-lock phenomenon. They match, they change it, they go away. You can imagine it as a parallel process of some sort. So you stick an expression into this mess, and after a while, you take it out, and it's been simplified. And it just keeps changing until it no longer can be changed. But these enzymes can attach to any part of the expression.
OK, at this point, I'd like to stop and ask for questions. Yes.
AUDIENCE: This implies that the matching program and the instantiation program are separate programs; is that
right? Or is that-- they are.
PROFESSOR: They're separate little pieces. They fit together in a larger structure.
AUDIENCE: So I'm going through and matching and passing the information about what I matched to an
instantiater, which makes the changes. And then I pass that back to the matcher?
PROFESSOR: It won't make a change. It will make a new expression, which has, which has substituted the values
of the pattern variable that were matched on the left-hand side for the variables that are mentioned, the skeleton
variables or evaluation variables or whatever I called them, on the right-hand side.
AUDIENCE: And then that's passed back into the matcher?
PROFESSOR: Then this is going to go around again. This is going to go through this mess until it no longer
changes.
AUDIENCE: And it seems that there would be a danger of getting into a recursive loop.
PROFESSOR: Yes. Yes, if you do not write your rules nicely, you are-- indeed, in any programming language you invent, if it's sufficiently powerful to do anything, you can write programs that will go into infinite loops. And indeed, writing a program for doing algebraic manipulation for long will produce infinite loops. Go ahead.
AUDIENCE: Some language designers feel that this feature is so important that it should become part of the basic language, for example, scheme in this case. What are your thoughts on--
PROFESSOR: Which language feature?
AUDIENCE: The pattern matching. The application of such rules should be--
PROFESSOR: Oh, you mean like Prolog?
AUDIENCE: Like Prolog, but it becomes a more general--
PROFESSOR: It's possible. OK, I think my feeling about that is that I would like to teach you how to do it so you don't depend upon some language designer.
AUDIENCE: OK.
PROFESSOR: You make it yourself. You can roll your own. Thank you.
Well, let's see. Now we have to tell you how it works. It conveniently breaks up into various pieces. I'd like to look now at the matcher. The matcher has the following basic structure. It's a box that takes as its input an expression, a pattern, and a dictionary, and it turns out a dictionary.

A dictionary, remember, is a mapping of pattern variables to the values that were found by matching, and the matcher puts out another dictionary, which is the result of augmenting the input dictionary by what was found in matching this expression against this pattern. So that's the matcher.
Now, this is a rather complicated program, and we can look at it on the overhead over here and see, ha, ha, it's very complicated. I just want you to look at the shape of it. It's too complicated to look at except in pieces. However, it's a fairly large, complicated program with a lot of sort of indented structure. At the largest scale-- you don't try to read those characters, but at the largest scale, you see that there is a case analysis, which is all these cases lined up. What we're now going to do is look at this in a bit more detail, attempting to understand how it works.
Let's go now to the first slide, showing some of the structure of the matcher at a large scale. And we see that the matcher, the matcher takes as its input a pattern, an expression, and a dictionary. And there is a case analysis here, which is made out of several cases, some of which have been left out over here, and the general case, which I'd like you to see.
Let's consider this general case. It's a very important pattern. The problem is that we have to examine two trees simultaneously. One of the trees is the tree of the expression, and the other is the tree of the pattern. We have to compare them with each other so that the subexpressions of the expression are matched against subexpressions of the pattern.
Looking at that in a bit more detail, suppose I had a pattern which was the sum of the product of a thing which we will call x and a thing which we will call y, and the sum of that and the same thing we call y. So we're looking for a sum of a product whose second argument is the same as the second argument of the sum. That's a thing you might be looking for. Well, that, as a pattern, looks like this. There is a tree, which consists of a sum, and a product with a pattern variable question mark x and question mark y, the other pattern variable, and question mark y, just writing down the list structure in a different way.
Now, suppose we were matching that against an expression which matches it, the sum of, say, the product of 3 and x and, say, x. That's another tree. It's the sum of the product of 3 and x and of x. So what I want to do is traverse these two trees simultaneously. And what I'd like to do is walk them like this. I'm going to say are these the same? This is a complicated object. Let's look at the left branches. Well, that could be the car. How does that look? Oh yes, the plus looks just fine. But the next thing here is a complicated thing. Let's look at that. Oh yes, that's pretty fine, too. They're both asterisks.
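As list structure (Python tuples here, standing in for the lecture's Lisp lists), the two trees being walked side by side look like this:

```python
# Pattern:    (+ (* (? x) (? y)) (? y))
pattern = ("+", ("*", ("?", "x"), ("?", "y")), ("?", "y"))

# Expression: (+ (* 3 x) x) -- the expression being matched against it
expression = ("+", ("*", 3, "x"), "x")
```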
Now, whoops! My pattern variable, it matches against the 3. Remember, x equals 3 now. That's in my dictionary, and the dictionary's going to follow along with me: x equals three. Ah yes, x equals 3 and y equals x, different x. The pattern x is the expression x, the pattern y. Oh yes, the pattern variable y, I've already got a value for it. It's x. Is this an x? Oh yeah, sure it is. That's fine. Yep, done. I now have a dictionary, which I've accumulated by making this walk.
Well, now let's look at this general case here and see how that works. Here we have it. I take in a pattern, an expression, and a dictionary. And now I'm going to do a complicated thing here, which is the general case. The expression is made out of two parts: a left and a right half, in general. Anything that's complicated is made out of two pieces in a Lisp system.
Well, now what do we have here? I'm going to match the car's of the two expressions against each other with respect to the dictionary I already have, producing a dictionary as its value, which I will then use for matching the
cdr's against each other. So that's how the dictionary travels, threads the entire structure. And then the result of that is the dictionary for the match of the car and the cdr, and that's what's going to be returned as a value.
Now, at any point, a match might fail. It may be the case, for example-- if we go back and look at an expression that doesn't quite match, like supposing this was a 4-- well, now these two don't match anymore, because the y that had to be x here now has to be 4. But x and 4 are not the same object syntactically. So this wouldn't match, and that would be rejected. So matches may fail.
Now, of course, because this matcher takes the dictionary from the previous match as input, it must be able to propagate the failures. And so that's what the first clause of this conditional does.
It's also true that if it turned out that the pattern was not atomic-- see, if the pattern was atomic, I'd go into this stuff, which we haven't looked at yet. But if the pattern is not atomic and the expression is atomic-- it's not made out of pieces-- then that must be a failure, and so we go over here. If the pattern is not atomic and the pattern is not a pattern variable-- I have to remind myself of that-- then we go over here. So that way, failures may occur.
OK, so now let's look at the insides of this thing. Well, the first place to look is what happens if I have an atomic pattern? That's very simple. A pattern that's not made out of any pieces: foo. That's a nice atomic pattern. Well, here's what we see. If the pattern is atomic, then if the expression is atomic, then if they are the same thing, then the dictionary I get is the same one as I had before. Nothing's changed. It's just that I matched plus against plus, asterisk against asterisk, x against x. That's all fine.
However, if the pattern is not the one which is the expression, if I have two separate atomic objects, then it was matching plus against asterisk, which case I fail. Or if it turns out that the pattern is atomic but the expression is complicated, it's not atomic, then I get a failure. That's very simple.
Now, what about the various kinds of pattern variables? We had three kinds. I give them the names. They're arbitrary constants, arbitrary variables, and arbitrary expressions. A question mark x is an arbitrary expression. A question mark cx is an arbitrary constant, and a question mark vx is an arbitrary variable.
Well, what do we do here? Looking at this, we see that if I have an arbitrary constant, if the pattern is an arbitrary constant, then it had better be the case that the expression is a constant. If the expression is not a constant, then that match fails. If it is a constant, however, then I wish to extend the dictionary with that pattern being remembered to be that expression, using the old dictionary as a starting point.
So really, for arbitrary variables, I have to check first that the expression is a variable. If so, it's worth extending the dictionary so that the pattern is remembered to be matched against that expression, given the original dictionary, and this makes a new dictionary.

Now, there's also a sort of failure inside extend dictionary, which is that if one of these pattern variables already has a value and I'm trying to match the thing against something else which is not equivalent to the one that I've already matched it against once, then a failure will come flying out of here, too. And we will see that some time.
And finally, an arbitrary expression does not have to check anything syntactic about the expression that's being
matched, so all it does is it's an extension of the dictionary.
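Putting the cases together, here is a minimal Python sketch of the matcher just described. Tuples stand in for Lisp lists, a plain dict stands in for the abstract dictionary, the string "failed" marks a failed match, and all the function names are my own:

```python
# A minimal sketch of the matcher, in Python standing in for the
# lecture's Scheme.  Not the lecture's actual code.

def extend_dict(name, expr, dictionary):
    """Bind name to expr, failing if name already has a different value."""
    if name in dictionary:
        return dictionary if dictionary[name] == expr else "failed"
    return {**dictionary, name: expr}

def atom(x):
    return not isinstance(x, tuple)

def constant(x):
    return isinstance(x, (int, float))

def variable(x):
    return isinstance(x, str)

def match(pat, expr, dictionary):
    if dictionary == "failed":
        return "failed"                   # propagate earlier failures
    if atom(pat):
        # an atomic pattern matches only the identical atomic expression
        return dictionary if atom(expr) and pat == expr else "failed"
    if pat == ():
        return dictionary if expr == () else "failed"
    if pat[0] == "?c":                    # arbitrary constant
        return extend_dict(pat[1], expr, dictionary) if constant(expr) else "failed"
    if pat[0] == "?v":                    # arbitrary variable
        return extend_dict(pat[1], expr, dictionary) if variable(expr) else "failed"
    if pat[0] == "?":                     # arbitrary expression: no syntactic check
        return extend_dict(pat[1], expr, dictionary)
    if atom(expr) or len(pat) != len(expr):
        return "failed"                   # compound pattern against the wrong shape
    # general case: match the car's, then the cdr's in the resulting dictionary
    return match(pat[1:], expr[1:], match(pat[0], expr[0], dictionary))

# The example from the board: (+ (* (? x) (? y)) (? y)) against (+ (* 3 x) x)
board_pattern = ("+", ("*", ("?", "x"), ("?", "y")), ("?", "y"))
```

Matching the board pattern against (+ (* 3 x) x) accumulates the dictionary {x: 3, y: x}; against (+ (* 3 x) 4) the second occurrence of y contradicts the first, and the failure comes flying out.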
So you've just seen a complete, very simple matcher. Now, one of the things that's rather remarkable about this is
people pay an awful lot of money these days for someone to make a, quote, AI expert system that has nothing
more in it than a matcher and maybe an instantiater like this. But it's very easy to do, and now, of course, you can
start up a little start-up company and make a couple of megabucks in the next week taking some people for a ride.
20 years ago, this was remarkable, this kind of program. But now, this is sort of easy. You can teach it to
freshmen.
Well, now there's an instantiater as well. The problem is they're all going off and making more money than I do.
But that's always been true of universities. Anyway, the purpose of the instantiater is to make expressions given a dictionary and a skeleton. And that's not very hard at all. We'll see that very simply in the next slide here.
To instantiate a skeleton, given a particular dictionary-- oh, this is easy. We're going to do a recursive tree walk
over the skeleton. And for everything which is a skeleton variable-- I don't know, call it a skeleton evaluation.
That's the name and the abstract syntax that I give it in this program: a skeleton evaluation, a thing beginning with
a colon in the rules. For anything of that case, I'm going to look up the answer in the dictionary, and we'll worry
about that in a second. Let's look at this as a whole.
Here, I have-- I'm going to instantiate a skeleton, given a dictionary. Well, I'm going to define some internal loop right there, and it's going to do something very simple. Either the skeleton is simple and atomic, in which case it's nothing more than giving the skeleton back as an answer, or, in the general case, it's complicated, in which case I'm going to make up the expression which is the result of instantiating-- calling this loop recursively-- the car of the skeleton and the cdr.
So here is a recursive tree walk. However, if it turns out to be a skeleton evaluation, a colon expression in the
skeleton, then what I'm going to do is find the expression that's in the colon-- the CADR in this case. It's a piece of
abstract syntax here, so I can change my representation of rules. I'm going to evaluate that relative to this
dictionary, whatever evaluation means. We'll find out a lot about that sometime. And the result of that is my answer. So I start up this loop-- here's my initialization-- by calling it with the whole skeleton, and this will just do a recursive decomposition into pieces.
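The instantiater is correspondingly short. In this Python sketch (names my own), a skeleton evaluation (":", name) is handled by a bare dictionary lookup, leaving out the general "evaluate" magic that the lecture defers until later:

```python
# A sketch of the instantiater.  A skeleton evaluation (":", name) is
# reduced here to a plain dictionary lookup -- the general "evaluate"
# machinery is omitted on purpose.
def instantiate(skeleton, dictionary):
    def loop(s):
        if not isinstance(s, tuple):
            return s                            # atomic skeleton: itself
        if s and s[0] == ":":
            return dictionary[s[1]]             # skeleton evaluation: look it up
        return tuple(loop(part) for part in s)  # general case: recursive tree walk
    return loop(skeleton)
```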
Now, one more little bit of detail is what happens inside evaluate? I can't tell you that in great detail. I'll tell you a little bit of it. Later, we're going to see--look into this in much more detail. To evaluate some form, some expression with respect to a dictionary, if the expression is an atomic object, well, I'm going to go look it up. Nothing very exciting there. Otherwise, I'm going to do something complicated here, which is I'm going to apply a procedure which is the result of looking up the operator part in something that we're going to find out about someday.
I want you to realize you're seeing magic now. This magic will become clear very soon, but not today. Then I'm looking at--looking up all the pieces, all the arguments to that in the dictionary. So I don't want you to look at this in detail. I want you to say that there's more going on here, and we're going to see more about this. But it's--the magic is going to stop. This part has to do with Lisp, and it's the end of that.
OK, so now we know about matching and instantiation. Are there any questions for this segment?
AUDIENCE: I have a question.
PROFESSOR: Yes.
AUDIENCE: Is it possible to bring up a previous slide? It's about this define match pattern.
PROFESSOR: Yes. You'd like to see the overall slide define match pattern. Can somebody put up the--no, the overhead. That's the biggest scale one. What part would you like to see?
AUDIENCE: Well, the top would be fine. Any of the parts where you're passing failed.
PROFESSOR: Yes.
AUDIENCE: The idea is to pass failed back to the dictionary; is that right?
PROFESSOR: The dictionary is the answer to a match, right? And it is either some mapping or there's no match. It doesn't match.
AUDIENCE: Right.
PROFESSOR: So what you're seeing over here is, in fact, that a match may have another match pass it the dictionary, as you see in the general case down here. Here's the general case, where a match passes the dictionary to another match. When I match the cdr's, I match them in the dictionary that results from matching the car's. OK, that's what I have here. So because of that, if the match of the car's fails, then it may be necessary for the match of the cdr's to propagate that failure, and that's what the first line is.
AUDIENCE: OK, well, I'm still unclear what matches-- what comes out of one instance of the match?
PROFESSOR: One of two possibilities. Either the symbol failed, which means there is no match.
AUDIENCE: Right.
PROFESSOR: Or some mapping, which is an abstract thing right now-- you shouldn't know about the structure of it-- which relates the pattern variables to their values as picked up in the match.
AUDIENCE: OK, so it is--
PROFESSOR: That's constructed by extend dictionary.
AUDIENCE: So the recursive nature brings about the fact that if ever a failed gets passed out of any calling of
match, then the first condition will pick it up--
PROFESSOR: And just propagate it along without any further ado, right.
AUDIENCE: Oh, right. OK.
PROFESSOR: That's just the fastest way to get that failure out of there. Yes.
AUDIENCE: If I don't fail, that means that I've matched a pattern, and I run the procedure extend dict and then
pass in the pattern in the expression. But the substitution will not be made at that point; is that right? I'm just--
PROFESSOR: No, no. There's no substitution being there because there's no skeleton to be substituted in.
AUDIENCE: Right. So what--
PROFESSOR: All you've got there is we're making up the dictionary for later substitution.
AUDIENCE: And what would the dictionary look like? Is it ordered pairs?
PROFESSOR: That's--that's not told to you. We're being abstract.
AUDIENCE: OK.
PROFESSOR: Why do you want to know? What it is, it's a function. It's a function.
AUDIENCE: Well, the reason I want to know is--
PROFESSOR: A function abstractly is a set of ordered pairs. It could be implemented as a set of list pairs. It could be implemented as some fancy table mechanism. It could be implemented as a function. And somehow, I'm building up a function. But I'm not telling you. That's up to George, who's going to build that later.
I know you really badly want to write concrete things. I'm not going to let you do that.
AUDIENCE: Well, let me at least ask, what is the important information there that's being passed to extend dict? I want to pass the pattern I found--
PROFESSOR: Yes. The pattern that's matched against the expression. You want to have the pattern, which happens to be in those cases pattern variables, right? All of those three cases for extend dict are pattern variables.
AUDIENCE: Right.
PROFESSOR: So you have a pattern variable that is to be given a value in a dictionary.
AUDIENCE: Mm-hmm.
PROFESSOR: The value is the expression that it matched against. The dictionary is the set of things I've already figured out that I have memorized or learned. And I am going to make a new dictionary, which is extended from the original one by having that pattern variable have a value with the new dictionary.
AUDIENCE: I guess what I don't understand is why can't the substitution be made right as soon as you find--
PROFESSOR: How do I know what I'm going to substitute? I don't know anything about this skeleton. This pattern, this matcher is an independent unit.
AUDIENCE: Oh, I see. OK.
PROFESSOR: Right?
AUDIENCE: Yeah.
PROFESSOR: I take the matcher. I apply the matcher. If it matches, then it was worth doing instantiation.
AUDIENCE: OK, good. Yeah.
PROFESSOR: OK?
AUDIENCE: Can you just do that answer again using that example on the board? You know, what you just passed back to the matcher.
PROFESSOR: Oh yes. OK, yes. You're looking at this example. At this point when I'm traversing this structure, I get to here: x. I have some dictionary, presumably an empty dictionary at this point if this is the whole expression. So I have an empty dictionary, and I've matched x against 3. So now, after this point, the dictionary contains x is 3, OK?
Now, I continue walking along here. I see y. Now, this is a particular x, a pattern x. I see y, a pattern y. The dictionary says, oh yes, the pattern y is the symbol x because I've got a match there. So the dictionary now contains at this point two entries. The pattern x is 3, and the pattern y is the expression x. Now, I get that, I can walk along further. I say, oh, pattern y also wants to be 4. But that isn't possible, producing a failure. Thank you. Let's take a break.
OK, you're seeing your first very big and hairy program. Now, of course, one of the goals of this subsegment is to get you to be able to read something like this and not be afraid of it. This one's only about four pages of code. By the end of the subject, I hope a 50-page program will not look particularly frightening. But I don't expect-- and I don't want you to think that I expect you to be getting it as it's coming out. You're supposed to feel the flavor of this, OK? And then you're supposed to think about it because it is a big program. There's a lot of stuff inside this program.
Now, I've told you about the language we're implementing, the pattern match substitution language. I showed you some rules. And I've told you about matching and instantiation, which are the two halves of how a rule works. Now we have to understand the control structure by which the rules are applied to the expressions so as to do algebraic simplification.
Now, that's also a big complicated mess. The problem is that there is a variety of interlocking, interwoven loops, if you will, involved in this. For one thing, I have to apply-- I have to examine every subexpression of my expression that I'm trying to simplify. That we know how to do. It's a car cdr recursion of some sort, or something like that, and some sort of tree walk. And that's going to be happening.
Now, for every such place, every node that I get to in doing my traversal of the expression I'm trying to simplify, I want to apply all of the rules. Every rule is going to look at every node. I'm going to rotate the rules around.
Now, either a rule will or will not match. If the rule does not match, then it's not very interesting. If the rule does match, then I'm going to replace that node in the expression by an alternate expression. I'm actually going to
make a new expression, which contains-- everything contains that new value, the result of substituting into the skeleton, instantiating the skeleton for that rule at this level. But no one knows whether that thing that I instantiated there is in simplified form. So we're going to have to simplify that, somehow to call the simplifier on the thing that I just constructed. And then when that's done, then I sort of can build that into the expression I want as my answer.
Now, there is a basic idea here, which I will call a garbage-in, garbage-out simplifier. It's a kind of recursive simplifier. And what happens is the way you simplify something is that simple objects like variables are simple. Compound objects, well, I don't know. What I'm going to do is I'm going to build up from simple objects, trying to make simple things by assuming that the pieces they're made out of are simple. That's what's happening here.
Well, now, if we look at the first slide-- no, overhead, overhead. If we look at the overhead, we see a very complicated program like we saw before for the matcher, so complicated that you can't read it like that. I just want you to get the feel of the shape of it, and the shape of it is that this program has various subprograms in it. One of them--this part is the part for traversing the expression, and this part is the part for trying rules.
Now, of course, we can look at that in some more detail. Let's look at the first transparency, right? The simplifier is made out of several parts. Now, remember, at the very beginning, the simplifier is the thing which takes a set of rules and produces a program which will simplify expressions relative to them.
So here we have our simplifier. It takes a rule set. And in the context where that rule set is defined, there are various other definitions that are done here. And then the result of this simplifier procedure is, in fact, one of the procedures that was defined: simplify x. What I'm returning as the value of calling the simplifier on a set of rules is a procedure, the simplify x procedure, which is defined in that context, which is a simplification procedure appropriate for using that set of rules. That's what I have there.
Now, the first two of these procedures, this one and this one, are together going to be the recursive traversal of an expression. This one is the general simplification for any expression, and this is the thing which simplifies a list of parts of an expression. Nothing more. For each of those, we're going to do something complicated, which involves trying the rules.
Now, we should look at the various parts. Well let's look first at the recursive traversal of an expression. And this is done in a sort of simple way. This is a little nest of recursive procedures. And what we have here are two procedures-- one for simplifying an expression, and one for simplifying parts of an expression. And the way this works is very simple. If the expression I'm trying to simplify is a compound expression, I'm going to simplify all the parts of it. And that's calling--that procedure, simplify parts, is going to make up a new expression with all the parts simplified, which I'm then going to try the rules on over here.
If it turns out that the expression is not compound, if it's simple, like just a symbol or something like pi, then in any case, I'm going to try the rules on it because it might be that I want in my set of rules to expand pi to 3.14159265358979, dot, dot, dot. But I may not. But there is no reason not to do it.
Now, if I want to simplify the parts, well, that's easy too. Either the expression is an empty one, there's no more parts, in which case I have the empty expression. Otherwise, I'm going to make a new expression by cons, which is the result of simplifying the first part of the expression, the car, and simplifying the rest of the expression, which is the cdr.
Now, the reason why I'm showing you this sort of stuff this way is because I want you get the feeling for the various patterns that are very important when writing programs. And this could be written a different way. There's another way to write simplified expressions so there would be only one of them. There would only be one little procedure here. Let me just write that on the blackboard to give you a feeling for that.
This in another idiom, if you will. To simplify an expression called x, what am I going to do? I'm going to try the rules on the following situation. If-- on the following expression-- compound, just like we had before. If the expression is compound, well, what am I going to do? I'm going to simplify all the parts. But I already have a cdr recursion, a common pattern of usage, which has been captured as a high-order procedure. It's called map. So I'll just write that here.
Map simplify the expression, all the parts of the expression. This says apply the simplification operation, which is this one, every part of the expression, and then that cuts those up into a list. It's every element of the list which the expression is assumed to be made out of, and otherwise, I have the expression. So I don't need the helper procedure, simplify parts, because that's really this. So sometimes, you just write it this way. It doesn't matter very much.
Well, now let's take a look at-- let's just look at how you try rules. If you look at this slide, we see this is a complicated mess also. I'm trying rules on an expression. It turns out the expression I'm trying it on is some subexpression now of the expression I started with. Because the thing I just arranged allowed us to try every subexpression.
So now here we're taking in a subexpression of the expression we started with. That's what this is. And what we're going to define here is a procedure called scan, which is going to try every rule. And we're going to start it up on the whole set of rules. This is going to go cdr-ing down the rules, if you will, looking for a rule to apply. And when it finds one, it'll do the job.
Well, let's take a look at how try rules works. It's very simple: we scan the rules. Well, is it so simple? It's a big program, of course. We take a bunch of rules, which is a sublist of the list of rules. We've tried some of them already, and they've not been appropriate, so we get to the next one. If there are no more rules, well then, there's nothing I can do with this expression, and it's simplified.
However, if it turns out that there are still rules to be done, then let's match the pattern of the first rule against the expression using the empty dictionary to start with and use that as the dictionary. If that happens to be a failure, try the rest of the rules. That's all it says here. It says discard that rule. Otherwise, well, I'm going to get the skeleton of the first rule, instantiate that relative to the dictionary, and simplify the result, and that's the expression I want.
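The whole control structure-- rules in, simplification procedure out, with the garbage-in, garbage-out recursion-- can be sketched in Python like this. The sketch is self-contained, so it repeats a stripped-down matcher (arbitrary-expression variables only) and instantiater; all the names and the tuple encoding are my own stand-ins for the lecture's Scheme:

```python
# Self-contained sketch of simplifier: takes a rule set, returns a
# procedure that simplifies every subexpression bottom up, retrying
# the rules on each new result.  Not the lecture's actual code.

def atom(x):
    return not isinstance(x, tuple)

def extend_dict(name, expr, dictionary):
    if name in dictionary:
        return dictionary if dictionary[name] == expr else "failed"
    return {**dictionary, name: expr}

def match(pat, expr, dictionary):
    if dictionary == "failed":
        return "failed"                       # propagate failures
    if atom(pat):
        return dictionary if atom(expr) and pat == expr else "failed"
    if pat == ():
        return dictionary if expr == () else "failed"
    if pat[0] == "?":                         # arbitrary-expression variable
        return extend_dict(pat[1], expr, dictionary)
    if atom(expr) or len(pat) != len(expr):
        return "failed"
    return match(pat[1:], expr[1:], match(pat[0], expr[0], dictionary))

def instantiate(skeleton, dictionary):
    if atom(skeleton):
        return skeleton
    if skeleton and skeleton[0] == ":":
        return dictionary[skeleton[1]]        # skeleton evaluation: look it up
    return tuple(instantiate(s, dictionary) for s in skeleton)

def simplifier(rules):
    def simplify_exp(exp):
        # simplify all the parts first (the map idiom), then try the rules
        if not atom(exp):
            exp = tuple(simplify_exp(part) for part in exp)
        return try_rules(exp)

    def try_rules(exp):
        # scan down the rules for one whose pattern matches exp
        for pattern, skeleton in rules:
            d = match(pattern, exp, {})
            if d != "failed":
                # instantiate the skeleton, then simplify what came out
                return simplify_exp(instantiate(skeleton, d))
        return exp                            # no rule applies: exp is simplified

    return simplify_exp

algebra_rules = [
    (("+", 0, ("?", "x")), (":", "x")),       # 0 + x  ->  x
    (("*", 1, ("?", "x")), (":", "x")),       # 1 * x  ->  x
    (("*", 0, ("?", "x")), 0),                # 0 * x  ->  0
]
simplify = simplifier(algebra_rules)
```

With badly written rules this recursion can loop forever, exactly as the professor warns; these three are safe because each application strictly shrinks the expression.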
So although that was a complicated program, every complicated program is made out of a lot of simple pieces. Now, the pattern of recursions here is very complicated. And one of the most important things is not to think about that. If you try to think about the actual pattern by which this does something, you're going to get very confused. I would. This is not a matter of you can do this with practice. These patterns are hard. But you don't have to think about it. The key to this-- it's very good programming and very good design-- is to know what not to think about.
The fact is, going back to this slide, I don't have to think about it because I have specifications in my mind for what simplify x does. I don't have to know how it does it. And it may, in fact, call scan somehow through try rules, which it does. And somehow, I've got another recursion going on here. But since I know that simplify x is assumed by wishful thinking to produce the simplified result, then I don't have to think about it anymore. I've used it. I've used it in a reasonable way. I will get a reasonable answer. And you have to learn how to program that way-- with abandon.
Well, there's very little left of this thing. All there is left is a few details associated with what a dictionary is. And those of you who've been itching to know what a dictionary is, well, I will flip it up and not tell you anything about it. Dictionaries are easy. It's represented in terms of something else called an A list, which is a particular pattern of usage for making tables in lists. They're easy. They're made out of pairs, as was asked a bit ago. And there are special procedures for dealing with such things called assq, and you can find them in manuals.
I'm not terribly excited about it. The only interesting thing here in extend dictionary is I have to extend the dictionary with a pattern, a datum, and a dictionary. This pattern is, in fact, at this point a pattern variable. And what do I want to do? I want to pull out the name of that pattern variable, the pattern variable name, and I'm going to look up in the dictionary and see if it already has a value. If not, I'm going to add a new one in. If it does have one, if it has a value, then it had better be equal to the one that was already stored away. And if that's the case, the dictionary is what I expected it to be. Otherwise, I fail. So that's easy, too. If you open up any program, you're going to find inside of it lots of little pieces, all of which are easy.
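Here is a small Python sketch of extend dictionary over an association list of (name, value) pairs, as just described: a new binding is added if the pattern variable is unbound, an existing consistent binding leaves the dictionary alone, and a contradiction fails. The names are my own:

```python
# Sketch of extend-dictionary over an A-list of (name, value) pairs.
def lookup(name, alist):
    """Return the (name, value) pair for name, or None -- like assq."""
    for key, value in alist:
        if key == name:
            return (key, value)
    return None

def extend_dictionary(pat, datum, alist):
    name = pat[1]                    # pattern is e.g. ("?", "x"); name is "x"
    entry = lookup(name, alist)
    if entry is None:
        return [(name, datum)] + alist        # add a new binding
    if entry[1] == datum:
        return alist                          # consistent with what we knew
    return "failed"                           # contradicts an earlier match
```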
So at this point, I suppose, I've just told you some million-dollar valuable information. And I suppose at this point
we're pretty much done with this program. I'd like to ask about questions.
AUDIENCE: Yes, can you give me the words that describe the specification for a simplified expression?
PROFESSOR: Sure. Simplify expression takes an expression and produces a simplified expression. That's it, OK? How it does it is very easy. For compound expressions, all the pieces are simplified, and then the rules are tried on the result. And for simple expressions, you just try all the rules.
AUDIENCE: So an expression is simplified by virtue of the rules?
PROFESSOR: That's, of course, true.
AUDIENCE: Right.
PROFESSOR: And the way this works is that simplify expression, as you see here, breaks the expression down into the smallest pieces and simplifies, building up from the bottom, using the rules to do the manipulations, and constructs a new expression as the result. Eventually, one of the things you see is that the rules themselves, through try rules, call simplify expression on the result when something changes-- the result of instantiating the skeleton for a rule that has matched.
So the spec of a simplified expression is that any expression you put into it comes out simplified according to
those rules. Thank you. Let's take a break.
Exploring the Influence of Identifier Names on Code Quality: an empirical study
Simon Butler, Michel Wermelinger, Yijun Yu, Helen Sharp
Centre for Research in Computing, The Open University, Milton Keynes, UK
Abstract—Given the importance of identifier names and the value of naming conventions to program comprehension, we speculated in previous work whether a connection exists between the quality of identifier names and software quality. We found that flawed identifiers in Java classes were associated with source code found to be of low quality by static analysis. This paper extends that work in three directions. First, we show that the association also holds at the finer granularity level of Java methods. This in turn makes it possible to, secondly, apply existing method-level quality and readability metrics, and see that flawed identifiers still impact on this richer notion of code quality and comprehension. Third, we check whether the association can be used in a practical way. We adopt techniques used to evaluate medical diagnostic tests in order to identify which particular identifier naming flaws could be used as a light-weight diagnostic of potentially problematic Java source code for maintenance.
Keywords-programming; software metrics; software quality;
I. INTRODUCTION
Identifier names constitute the majority of tokens in source code [1] and are the primary source of conceptual information for program comprehension [2]. Identifier names are created by designers and programmers and reflect their understanding, cognition and idiosyncrasies [3]. The impact of low quality identifier names on program comprehension is reasonably well understood [1], [4], [5], but little is known about the extent to which the quality of identifier names might influence the quality of source code.
Given that poor quality identifier names are a barrier to program comprehension, and that they may indicate a lack of understanding of the problem, or the solution articulated in the source code, we hypothesise that poor quality identifier names indicate poor quality source code that translates into poor quality software. In previous work [6], we showed connections between flawed identifier names and FindBugs warnings [7] in Java classes. In this paper we expand on our previous work by investigating the quality of identifiers and source code in Java methods. At this finer-grained level of analysis we employ the cyclomatic complexity metric [8] and the maintainability index [9] to evaluate the quality of source code. In addition, we evaluate the readability of methods using a readability metric [10]. We also repeat our investigation of source code quality with FindBugs warnings [7], with the expectation of finding more focused results, because the class-level specific FindBugs warnings included in our previous work are excluded from this study at the method level. We also seek to verify the link between the readability of source code and FindBugs warnings found by Buse and Weimer [10]. In addition, we explore whether our findings may be applied as a low-cost heuristic to identify potentially problematic regions of source code.
The remainder of this paper is structured as follows: in Section II we examine related work before turning in Sections III and IV to the underlying concepts of identifier and source code quality used in this paper. We give an account of our methodology in Section V and report our results in Section VI. In Sections VII and VIII we discuss our results and draw conclusions.
II. RELATED WORK
Existing research on source code readability focuses on the contribution the components of source code make to readability [10], and the way in which the semantic content of identifiers contributes to readability and program comprehension [1], [5], [2].
A longitudinal study of identifier names by Lawrie et al. [4] showed that identifier name quality has improved during the last thirty years. The same study also found that identifiers in proprietary source code typically contained more domain-specific abbreviations than open source code. However, the study also found that identifiers change little following the initial period of software development. This is confirmed by Antoniol et al. [11] who also argue that programmers may be more reluctant to change identifier names than source code, because of the lack of tool support for managing identifier names. In [5], Lawrie et al. detail an empirical study which found identifier names composed of dictionary words were more easily understood than those composed of abbreviations or single letters.
Rajlich and Wilde emphasise the importance of identifiers as the primary source of conceptual information for program comprehension [2]. Deissenboeck and Pizka [1] developed a formal model for the semantics of identifier names in which each concept is represented by just one identifier throughout a program. The model excludes the use of homonyms and synonyms, thus reducing the opportunities for confusion. The authors found the model to be an effective tool for resolving difficulties with identifier names found during program development.
A study of the morphological and grammatical features of identifier names in C, C++, Java and C# by Liblit et al. [12] found that identifiers are best understood within their
working context. Instance variables, for example, are coupled with method names in object-oriented languages, and method names are often conceived with this relationship in mind. Field and variable names have grammatical structures that reflect their independence. The grammatical structure of method names is further differentiated by the need to reflect the action the method performs and whether it has side effects, or takes one or more arguments.
Relf [13] identified a set of cross-language identifier naming style guidelines from the programming literature, and investigated their acceptance by programmers in an empirical study. Relf implemented the naming style guidelines in a tool to help programmers create good quality identifiers and to refactor existing identifiers [14]. Abebe et al. [15] developed a system to recognise ‘lexicon bad smells’ – grammatical and other flaws – in identifiers, thereby identifying identifier names for possible refactoring.
Little work, however, has been done to explore the possible connections between identifier naming, source code readability and software quality.
Two studies by Boogerd and Moonen [16], [17] applied the MISRA-C: 2004 coding standard [18] to measure the quality of source code before and after bug fixes during the development of two closed source embedded C applications. They found that while compliance with some of the rules increased as defects were fixed, bug fixes also introduced violations of other rules. In other words, code with fewer defects, and hence of higher quality, is deemed to be of lower quality by some of the coding rules. The authors also found that though they could identify rules with a positive influence on software quality in each of the two studies, the rules did not have consistent effects, including the four rules related to identifiers common to both studies.
Buse and Weimer [10] developed a readability metric for Java derived from measurements of, among others, the number of parentheses and braces, line length, the number of blank lines, and the number, frequency and length of identifiers. Using machine learning, the readability metric was trained to agree with the judgement of human source code readers. Buse and Weimer found a significant statistical relationship between the readability of methods and the presence of defects found by FindBugs [7] in open source code bases. Although their work makes a link between readability and software quality, their notion of readability ignores the quality of identifier names.
In work classifying the lexicon used in Java method identifiers, Høst and Østvold advance the idea that, because of the effort required to select a good identifier name, identifiers reflect the cognitive processes of programmers and designers [3]. Consequently, identifiers may then reflect the misunderstandings of the creator of the identifier and misdirect the readers of source code.
The existing literature establishes the need for good identifier names to support program comprehension. However, only tentative steps have been taken to demonstrate their relationship to source code quality. In previous work [6], we explored the relationship between flawed identifiers and FindBugs defects in Java classes. We found some relationships, which we explore further in this paper with finer-grained analysis, and by increasing the number of metrics used to measure source code quality.
III. IDENTIFIER QUALITY
The multifactorial nature of identifier quality makes measurement problematic. For the purposes of this study we constrained our measurement of identifier quality to typography and the use of known natural language elements, and ignored detailed assessments of semantic content and the use of grammar. Rather than apply an arbitrary set of rules derived from a single set of programming conventions, we used a set of empirically evaluated identifier naming guidelines.
Relf derived a set of twenty-one identifier naming style guidelines for Ada and Java from the programming literature [13]. Most of the guidelines, which were evaluated in an empirical study, do not deviate significantly from the Java identifier naming conventions [19], [20], or from those found in other widely used conventions [21].
Relf’s identifier naming style guidelines combine typography and a simple approach to natural language, but were not intended to be used as rules to evaluate the quality of identifier names. Accordingly we found it necessary to update some guidelines to define more precisely what was not permitted, and renamed some to reflect the prescriptive sense in which we applied them.
We implemented a subset of Relf’s guidelines as tests. The remaining guidelines were not adopted because either they do not reflect recent changes in Java programming practice, or they are general guidelines of good practice that are difficult to derive practical prescriptive rules from. For example, Relf defines the Same Words guideline as prohibiting the use of identifiers composed of the same words, but in a different order. Whilst superficially attractive, a rule based on this guideline prohibits clear names for reciprocal operations (e.g. htmlToXml and xmlToHtml) and pairs of words that create semantically distinct identifiers (e.g. indexPage and pageIndex). Generally, the implementation of each guideline is apparent from its name, and is described and illustrated in Table I. However, the precise implementation of some guidelines requires further explanation:
Capitalisation Anomaly: For identifiers other than constants we test for capitalisation of only the initial letter of acronyms as prescribed in [19], [20], i.e. only the initial letter of a component word is capitalised either at word boundaries, or the beginning of the identifier, if appropriate.
Non-Dictionary Words: We defined a dictionary word as belonging to the English language, because all the projects investigated are developed in English. We constructed a dictionary consisting of some 117,000 words, including
inflections and American and Canadian English spelling variations, using word lists from the SCOWL package up to size 70, the largest lists consisting of words commonly found in published dictionaries [22]. We added a further 90 common computing and Java terms, e.g., ‘arity’, ‘hostname’, ‘symlink’, and ‘throwable’. A separate dictionary of abbreviations was constructed, using the criterion that “the abbreviation is much more widely used than the unabbreviated form.”
A concern is that development teams may use project, or domain, specific abbreviations and terms, which are not in our dictionary, yet are well understood by the programmers. To address the issue we created additional dictionaries for each application of unrecognised component words that were used in three, five and ten or more unique identifiers. For example, an unrecognised word or abbreviation used in ten or more unique identifiers may be inferred to be a commonly understood term. The frequencies of three, five and ten are arbitrary, but may be seen as representative of the familiarity the development team might have with a given term. Following the creation of the dictionaries, each identifier was tested again for compliance to the Non-Dictionary Words guideline by using a combination of the main dictionary, the abbreviation dictionary, and each of the dictionaries of application-specific words and abbreviations.
Excessive Words: Relf’s Number of Words guideline was intended to encourage programmers to create identifiers between two and four words long. In applying the guideline as a prescriptive rule both identifiers composed of one word and those composed of five or more words are categorised together, which does not allow us to determine the contribution made by the occurrence of either. The issue is addressed, in part, by the creation of an Excessive Words flaw, defined in Table I, which determines identifiers of five or more words to be flawed.
Short Identifier Name: We updated Relf’s guideline to include more single letter and short identifiers commonly used in Java [20], [21] (see Table I).
IV. SOURCE CODE QUALITY
Our objective is to measure source code quality in a way that reflects the influence of the programmer on source code and the possible impact on the reader. We used cyclomatic complexity [8] and the three metric maintainability index [9] to measure the quality of Java methods. Additionally, Buse and Weimer’s readability metric (see Section II) was used to provide assessments of the readability of methods. We also used FindBugs to analyse the bytecode of each application for any potential defects.
Cyclomatic complexity provides a ready assessment of the complexity of a method in terms of the number of possible execution paths. We acknowledge that cyclomatic complexity is a somewhat controversial metric [23], but believe that it provides an indication of source code complexity sufficient for our purposes.
The three metric maintainability index (MI) [9] is given by:
\[ MI = 171 - 5.2 \times \ln(HV) - 0.23 \times V(G) - 16.2 \times \ln(LOC) \]
where LOC is the number of lines of code, V(G) is the cyclomatic complexity and HV is the Halstead Volume [24], a source code metric determined by the number of operators and operands used, including identifiers. The Halstead Volume is the product of the Halstead Length and the binary logarithm of the Halstead Vocabulary. The Halstead Vocabulary
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
<th>Example of flawed identifier(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Capitalisation Anomaly</td>
<td>Identifiers should be appropriately capitalised.</td>
<td>HTMLEditorKit, pagecounter, fooBAR</td>
</tr>
<tr>
<td>Excessive Words</td>
<td>Identifier names should be composed of no more than four words or abbreviations.</td>
<td>floatToRawIntBits()</td>
</tr>
<tr>
<td>External Underscores</td>
<td>Identifiers should not have either leading or trailing underscores.</td>
<td>_foo_</td>
</tr>
<tr>
<td>Long Identifier Name</td>
<td>Identifier names of more than twenty-five characters should be avoided.</td>
<td>getPolicyQualifiersRejected</td>
</tr>
<tr>
<td>Naming Convention Anomaly</td>
<td>Identifiers should not consist of non-standard mixes of upper and lower case characters.</td>
<td>FOO_bar</td>
</tr>
<tr>
<td>Non-Dictionary Words</td>
<td>Identifier names should be composed of words found in the dictionary and abbreviations and acronyms that are more commonly understood than the unabbreviated form.</td>
<td>strlen</td>
</tr>
<tr>
<td>Number of Words</td>
<td>Identifiers should be composed of between two and four words.</td>
<td>ArrayOutOfBoundsException, name</td>
</tr>
<tr>
<td>Numeric Identifier Name</td>
<td>Identifiers should not be composed entirely of numeric words or numeric words and numbers.</td>
<td>FORTY_TWO</td>
</tr>
<tr>
<td>Short Identifier Name</td>
<td>Identifiers should not consist of fewer than eight characters, with the exception of c, d, e, g, i, in, inOut, j, k, m, n, o, out, t, x, y, z</td>
<td>name</td>
</tr>
<tr>
<td>Type Encoding</td>
<td>Type information should not be encoded in identifier names using Hungarian notation or similar</td>
<td>iCount</td>
</tr>
</tbody>
</table>
Table I THE IDENTIFIER NAMING STYLE GUIDELINES APPLIED
is the number of unique operators and unique operands, and the Halstead Length is the sum of the number of operators and operands. By incorporating the Halstead Vocabulary, the MI is influenced by the complexity of a unit of source code in terms of the number of identifiers required to implement a solution.
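The metrics above can be sketched as follows. The MI formula is taken directly from the equation given earlier (natural logarithms, per the $\ln$ terms); the helper for the Halstead Volume builds it from raw operator/operand occurrence lists, which is our reading of [24], not the authors' tool.

```python
import math

def halstead_volume(operators, operands):
    """Volume = Length * log2(Vocabulary): Length is the total number
    of operator/operand occurrences, Vocabulary the number of
    distinct operators plus distinct operands."""
    length = len(operators) + len(operands)
    vocabulary = len(set(operators)) + len(set(operands))
    return length * math.log2(vocabulary)

def maintainability_index(hv, vg, loc):
    """Three-metric maintainability index from Section IV."""
    return 171 - 5.2 * math.log(hv) - 0.23 * vg - 16.2 * math.log(loc)
```

For instance, a fragment with operator occurrences `['+', '+', '=']` and operand occurrences `['a', 'b', 'a']` has Length 6 and Vocabulary 4, giving a Volume of 12. Section V's threshold of 65 then turns the MI into the binary more-/less-maintainable classifier.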
FindBugs is a static analysis tool for Java that analyses bytecode for ‘bug patterns’. The type of defects identified by the bug patterns range from dereferences of null pointers, which may halt program execution, to Java specific problems associated with an incomplete understanding of the Java language [25]. The latter class of defects include code constructs likely to increase the maintenance effort and code constructs that may have unintended side-effects. FindBugs was used extensively during two days in May 2009 at Google, and software engineers found some 4,000 significant issues with Java source code as a result [7]. While we accept that FindBugs creates false positives, as does any static analysis tool, we feel that FindBugs’ perspective on source code quality is suitable for our needs.
V. METHODOLOGY
A. Data Collection
We selected eight established Java open source projects for investigation, including GUI applications, programmers’ tools, and libraries. The particular projects were chosen to reduce the potential influence of domain and project-specific factors in this study. Table II shows the version and number of methods analysed for each project.
<table>
<thead>
<tr>
<th>Project</th>
<th>Version</th>
<th>Methods</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ant</td>
<td>1.7.1</td>
<td>9146</td>
</tr>
<tr>
<td>Cactus</td>
<td>1.8.0</td>
<td>926</td>
</tr>
<tr>
<td>Freemind</td>
<td>0.9.0 Beta 20</td>
<td>4883</td>
</tr>
<tr>
<td>Hibernate Core</td>
<td>3.3.1</td>
<td>12309</td>
</tr>
<tr>
<td>JasperReports</td>
<td>3.1.2</td>
<td>12349</td>
</tr>
<tr>
<td>jEdit</td>
<td>4.3 pre16</td>
<td>5835</td>
</tr>
<tr>
<td>JFreeChart</td>
<td>1.0.11</td>
<td>8230</td>
</tr>
<tr>
<td>Tomcat</td>
<td>6.0.18</td>
<td>11394</td>
</tr>
</tbody>
</table>
We developed a tool to automate the extraction and analysis of identifiers from Java source code. Java files were parsed and identifiers analysed on the parse tree to establish adherence to the typographical rules for their context, e.g. method names starting with a lowercase character. Then, identifiers were extracted and added to a central store, with information about their location, and divided into hard words – their component words and abbreviations – using the conventional Java word boundaries of internal capitalisation and underscores. Identifiers were then analysed by our tool for conformance to Relf’s guidelines in Table I, our own Excessive Words guideline, and the Non-Dictionary Words guideline where the dictionary is extended by a set of commonly used hard words.
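The split into hard words at internal capitalisation and underscores can be approximated with a short regular expression; the pattern below is our heuristic reconstruction, not the authors' tool, and it additionally keeps embedded acronyms and digit runs as separate hard words.

```python
import re

# One alternative per kind of hard word, tried in order:
#   acronym followed by a capitalised word (HTML in HTMLEditorKit),
#   an ordinary (possibly capitalised) word, a trailing acronym,
#   a run of digits. Underscores simply fall between matches.
HARD_WORD = re.compile(r'[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|[0-9]+')

def hard_words(identifier):
    """Split a Java identifier into its component hard words."""
    return HARD_WORD.findall(identifier)
```

For example, `hard_words('HTMLEditorKit')` yields `['HTML', 'Editor', 'Kit']` and `hard_words('FOO_bar')` yields `['FOO', 'bar']`, matching the conventional Java word boundaries described above.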
Where subject applications were found to contain source code files generated by parser generators, or to incorporate source code from third party libraries, those files were ignored to try to ensure only source code written by the applications’ development teams was analysed.
We collected the primitive Halstead metrics for each method – counts of operators and operands – by adapting the standard developed for C by Munson [23] and applying it to Java in our tool. We also recorded McCabe’s cyclomatic complexity (V(G)) [8] and LOC for each method, to compute the maintainability index. To create a binary classifier from the maintainability index we used the threshold of 65, established by empirical study [9], to identify methods as ‘more-maintainable’ and ‘less-maintainable’.
The readability of source code was evaluated using a readability metric tool developed by Buse and Weimer [10]. The readability metric follows a bimodal distribution and is interpreted as binary classifier that identifies source code as ‘more-readable’ or ‘less-readable’.
We also applied the cyclomatic complexity metric as a binary classifier. The popular programming literature often advocates that programmers take steps to keep the cyclomatic complexity of individual methods low. Some texts suggest refactoring should be considered when cyclomatic complexity is six or more, and that the cyclomatic complexity of a method should not exceed ten [26]. It is outside the scope of our study to examine the merits of such practices or the justification for the chosen thresholds. However, to create binary classifiers from the cyclomatic complexity metric, we adopted thresholds of six and ten to represent methods of moderate and high complexity. This provides two binary classifiers distinguishing between methods with low complexity and those with a cyclomatic complexity of six or more, and between methods with low to moderate complexity and those with a cyclomatic complexity of ten or more.
For the purposes of this study we recorded details for methods that constitute discrete readable units to ensure that the readability metric assessed source code as the human reader would see it. Java source code files contain one or more top-level classes, each of which may contain member classes. Both types of classes may contain methods. We recorded as methods, only methods contained either by top-level classes or by member classes directly contained by top-level classes. Any local and anonymous classes contained within those methods were recorded as part of the containing method and not separately. For example, if a method contains an anonymous class, the total cyclomatic complexity for the anonymous class is added to the cyclomatic complexity of the containing method.
The Java archive (JAR) files resulting from the compilation of the source code were analysed with FindBugs.
FindBugs employs a heuristic to determine the severity of the defects it finds and, in its default mode, issues ‘priority one’ and ‘priority two’ warnings, with priority one deemed the more serious. Counts of priority one and priority two warnings were recorded for each method. We used the default settings for FindBugs with the exception of a filter to exclude warnings of the use of unconventional capitalisation of the first letter in class, method and field names, which would overlap with the findings of our tool. We also filtered out the ‘Dead Local Store’ warning, which can result from the actions of the Java compiler. We found that FindBugs warnings are sparsely distributed in Java methods and used the presence of a FindBugs warning as a binary classifier.
The identifier naming and metrics data collected for each Java method was stored in XML files and collated with the XML output of FindBugs and the readability metric tool, using a tool we developed. Data extracted from the source code was matched with classes recorded by FindBugs to ensure that only identifiers from classes compiled into the JAR files were analysed. The collated data for each method was then written to R [27] dataframes for statistical analysis.
B. Statistical Analysis
For each pair of binary classifiers, a contingency table like Table III was created using R, and the chi-squared ($\chi^2$) test [28] was performed, with the null hypothesis that the binary classifiers were independent. For Table III the value of $\chi^2$ is 81.2, which is statistically significant ($p = 2 \times 10^{-19}$). For each contingency table, a table of expected values was derived from the marginal totals to help determine the nature of any association. In Table III our interest lies in the top-left cell; if the observed frequency exceeds the expected frequency then there is a statistically significant association between the presence of identifiers with the Non-Dictionary Words flaw and FindBugs Priority Two warnings in a method. The expected value for the top-left cell is the product of the sums of the observed values in the left-hand column and the top row, divided by the total population, i.e. $(103 + 37) \times (103 + 2925) \div (103 + 37 + 2925 + 5165) = 51.5$, which is less than the observed frequency of 103. Where any of the expected frequencies for a contingency table were less than five, the Fisher exact test [28] was used.
Table III: Example Contingency Table
<table>
<thead>
<tr>
<th rowspan="2">JFreeChart<br>Non-Dictionary Words</th>
<th colspan="2">FindBugs Priority Two Warnings</th>
</tr>
<tr>
<th>methods with</th>
<th>methods without</th>
</tr>
</thead>
<tbody>
<tr>
<td>methods with</td>
<td>103</td>
<td>2925</td>
</tr>
<tr>
<td>methods without</td>
<td>37</td>
<td>5165</td>
</tr>
</tbody>
</table>
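The reported $\chi^2$ of 81.2 is reproduced by the continuity-corrected (Yates) statistic, which R's chisq.test applies by default to 2×2 tables; the following pure-Python sketch is our reconstruction, not the authors' analysis script.

```python
# Pearson chi-squared test with Yates' continuity correction for the
# 2x2 contingency table of Table III (JFreeChart, Non-Dictionary
# Words flaw vs. FindBugs priority two warnings).
observed = [[103, 2925],
            [37, 5165]]

row = [sum(r) for r in observed]              # row totals
col = [sum(c) for c in zip(*observed)]        # column totals
n = sum(row)                                  # total population

# Expected frequencies from the marginal totals
expected = [[row[i] * col[j] / n for j in range(2)] for i in range(2)]

# Yates-corrected chi-squared statistic
chi2 = sum((abs(observed[i][j] - expected[i][j]) - 0.5) ** 2
           / expected[i][j]
           for i in range(2) for j in range(2))

print(round(expected[0][0], 1))  # 51.5, the expected top-left value
print(round(chi2, 1))            # 81.2, the value reported in the text
```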
In addition to the $\chi^2$ tests, we applied a technique used in medicine to evaluate diagnostic tests to determine whether the observed phenomena have a practical application. The same contingency tables used for the $\chi^2$ tests were analysed by treating FindBugs warnings, the maintainability index, cyclomatic complexity and readability as reference classifiers. For example, for the contingency table above (Table III) we take the occurrence of FindBugs priority two warnings in methods as the reference classifier, and test to see how well the Non-Dictionary Words flaw performs as a classifier in comparison.
To evaluate the relative performance of the test classifier, two quantities are derived from the contingency table: the sensitivity and the specificity, which represent agreement between the two classifiers. The sensitivity is the proportion of the population classified as positive by the reference classifier that are classified positively by the classifier being tested. In our example in Table III, the sensitivity is the proportion of methods for which FindBugs warnings are issued, that also contain identifiers with the Non-Dictionary Words flaw; i.e. $sensitivity = 103 \div (103 + 37) = 0.74$. The specificity is the proportion of population classified negatively by the reference classifier that are also classified negatively by the test classifier. In Table III, the specificity is the proportion of the methods without FindBugs priority two warnings that have no identifiers with the Non-Dictionary Words flaw: $specificity = 5165 \div (2925 + 5165) = 0.64$. An advantage of this method is that sensitivity and specificity are independent of the rate of incidence, or prevalence, of the phenomenon being investigated.
The characteristics of a given test can be illustrated using receiver operating characteristic (ROC) curves, where the sensitivity of a test is plotted on the y-axis, against $1 - specificity$ (the false positive rate) on the x-axis. The area under the curve (AUC) (see Figure 1) indicates the efficacy of the test. A useless test, one that is equivalent to guessing, is indicated by a diagonal line drawn from the origin to the top-right corner, representing the equation $sensitivity = 1 - specificity$, which has an AUC of 0.5. For a test to be useful the points plotted should lie above and to the left of the diagonal line. We use the ROC graphs as a means of visualising the predictive power of the observed associations.
Our example results in a point at $(0.36, 0.74)$, above and to the left of the diagonal, meaning that, in the case of JFreeChart, using the Non-Dictionary Word flaw as a binary classifier is a better than chance method of predicting the presence or absence of FindBugs priority two warnings. The predictive power of a result is related to its perpendicular distance from the diagonal line, and is equal to the area under a line drawn from the origin to the point representing the result and from the result to the point $(1, 1)$. In our example, the predictive power is 0.69, which means that the Non-Dictionary Word flaw has a 0.69 probability of indicating whether or not a method contains a FindBugs Priority two warning in JFreeChart.
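The sensitivity, specificity and two-segment ROC area described above can be sketched in a few lines of Python. This is an illustrative helper of our own, not code from the study; the area is just the trapezoid rule applied to the polyline from the origin through the classifier's point to $(1, 1)$.

```python
def roc_stats(tp, fn, fp, tn):
    """Sensitivity, specificity and two-segment ROC AUC from a
    2x2 contingency table (tp/fn: reference-positive population,
    fp/tn: reference-negative population)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    fpr = 1 - specificity  # false positive rate, the x-coordinate
    # Area under the polyline (0,0) -> (fpr, sensitivity) -> (1,1),
    # computed with the trapezoid rule.
    auc = 0.5 * fpr * sensitivity + (1 - fpr) * (sensitivity + 1) / 2
    return sensitivity, specificity, auc

# JFreeChart example from Table III: 103 methods have both the flaw and a
# priority-two warning, 37 have a warning but no flaw, 2925 have the flaw
# but no warning, and 5165 have neither.
sens, spec, auc = roc_stats(tp=103, fn=37, fp=2925, tn=5165)
print(round(sens, 2), round(spec, 2), round(auc, 2))  # 0.74 0.64 0.69
```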
The majority of methods in JFreeChart are correctly classified by the test classifier and are grouped in the top-left and bottom-right cells of Table III. As we will see in the next section, especially for Cactus, it is possible for the members of a population to be grouped in these cells, resulting in values of sensitivity and specificity that give a useful probability, without the distribution in the contingency table giving a statistically significant result for either the $\chi^2$ or Fisher exact tests.
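For illustration, the Pearson $\chi^2$ statistic for a 2x2 contingency table such as Table III can be computed directly. This is a plain-Python sketch without the continuity correction; the authors' exact test configuration may differ.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], without Yates' continuity correction."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under independence of the two classifiers.
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = chi_square_2x2(103, 2925, 37, 5165)
# The critical value for p < 0.001 at one degree of freedom is 10.828,
# so the JFreeChart association is highly significant.
print(stat > 10.828)  # True
```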
VI. RESULTS
In Tables IV, V and VI statistically significant associations between the flawed identifiers and each of the source code quality measures are represented in black where $p < 0.001$ and dark grey where $p < 0.05$. Where the trend of association was negative, i.e. the presence of the particular identifier flaw is associated with better quality source code, the cell is marked with a white dash. White cells represent the lack of a statistically significant association (i.e. $p > 0.05$), and asterisks indicate where the particular identifier flaw was not found. The digits contained in selected cells show the probability with which the identifier flaw, when applied as a binary classifier, correctly predicts the quality of methods. Only probabilities of 0.55 or greater, i.e. at least marginally better than guessing, have been included in the tables. The probabilities not shown are largely close to 0.5, and fall below 0.5 only for some of the negative associations.
Each table lists three further categories labelled ‘Extended 3’, ‘Extended 5’ and ‘Extended 10’. The results for the three ‘Extended’ flaws should be compared with those for the Non-Dictionary Words flaw to determine the influence of application-specific words and abbreviations on the relationship between the linguistic content of identifiers and FindBugs warnings. The bottom line of Table IV shows the relationships between methods classified as less-readable by the readability metric and FindBugs warnings. Where we found associations, our results largely confirm the connection between readability and FindBugs warnings found by Buse and Weimer [10]. Indeed, our results show that the connection between readability and FindBugs warnings extends to projects such as Ant and Freemind, which Buse and Weimer did not investigate. However, our work differs in the statistical methods used, the versions of projects investigated, and because we discriminated between priority one and two warnings, which they did not.
Table IV shows the associations between identifier flaws and FindBugs priority one and priority two warnings in the methods of each project. The statistical associations are largely confined to particular identifier flaws indicating the general cross-project trends. However, there are also apparent project-specific relationships as illustrated by Cactus and jEdit for both priority one warnings, and Cactus, Hibernate and JasperReports for priority two warnings.
While Cactus and jEdit have just one statistically significant association with priority one warnings between them, we found useful predictive qualities in the relationships for some identifier flaws. The probabilities given in the left hand side of Table IV emphasise the cross-project nature of the relationships between the Extended, Non-Dictionary Words, Number of Words and Short Identifier flaws. The relationships for the priority two warnings are less clear. There are hints of similar, general, cross-project relationships; however, the project-specific relationships are more apparent. Cactus, again, has few statistical associations, but some relationships have probabilities greater than 0.55. Hibernate and JasperReports both have negative statistical associations. Hibernate has a few relationships with probabilities greater than 0.55, whereas JasperReports has none.
The relationships for the Non-Dictionary Words flaw and priority two warnings are plotted in Figure 1. While six points are above the diagonal line and illustrate the utility of the Non-Dictionary Words flaw as a light-weight classifier, there are two points below the line. The point for Hibernate, where no statistically significant association was found, is closest to the line and the other is for JasperReports which has a negative association.

Tables V and VI show much more consistent relationships for identifier flaws with complexity, maintainability and readability. There remain, however, hints of project-specific relationships, which are most apparent for Cactus. The predictive probability associated with each relationship illustrates the utility of the identifier flaws as light-weight classifiers for source code quality. The relationships between the Non-Dictionary Words flaw and complexity and readability are plotted in Figure 1.
A. Threats to Validity
**Construct Validity:** The definition of the Short Identifier Name guideline is much more restrictive than the Java programming conventions [20], [21] and common practice. Consequently, the number of identifiers categorised as flawed may be inflated, and accordingly the observed associations may need to be treated with caution.
False positives are inevitable with static analysis tools such as FindBugs. The false positive rate for each application cannot be established without manual inspection of the source code in the proximity of each warning, which is outside the scope of the current study.
**External Validity:** The apparently project-specific influences on the relationships between flawed identifiers and FindBugs warnings in Table IV, suggest that, though general principles may be derived from our findings, caution is necessary when applying them to other projects. Some project-specific variation is apparent even in the more consistent findings shown in Tables V and VI, again suggesting that
Table VI
ASSOCIATIONS BETWEEN NAMING FLAWS AND READABILITY AND THE MAINTAINABILITY INDEX
<table>
<thead>
<tr>
<th></th>
<th>Less-Readable</th>
<th>Less-Maintainable</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Ant</td>
<td>Cactus</td>
</tr>
<tr>
<td>Capitalisation Anomaly</td>
<td>.62</td>
<td>.55</td>
</tr>
<tr>
<td>Excessive Words</td>
<td>.59</td>
<td>.58</td>
</tr>
<tr>
<td>External Underscores</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Long Identifier</td>
<td>.56</td>
<td>.58</td>
</tr>
<tr>
<td>Naming Convention Anomaly</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Number of Words</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Numeric Identifier</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Short Identifier Name</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Type Encoding</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Non-Dictionary Words</td>
<td>.65</td>
<td>.56</td>
</tr>
<tr>
<td>Extended 3</td>
<td>.62</td>
<td>.56</td>
</tr>
<tr>
<td>Extended 5</td>
<td>.64</td>
<td>.57</td>
</tr>
<tr>
<td>Extended 10</td>
<td>.65</td>
<td>.56</td>
</tr>
</tbody>
</table>
Black: p < 0.001
Dark grey: p < 0.05
* No flaw found
VII. DISCUSSION
The statistically significant associations found for FindBugs priority one and two warnings contain common features (Table IV). There appear to be general, cross-project associations for some identifier flaws, but the distribution of associations appears to be largely project specific. Cactus is the most extreme example with statistically significant associations found with the χ² and Fisher exact tests only between the extended dictionaries and priority two warnings. jEdit has only one statistically significant association with priority one warnings, but more with priority two. The negative associations in Table IV (marked with white dashes) emphasise the application-specific nature of some relationships. That the negative associations are positive for the more serious priority one warnings suggests that the developers in both projects face more complex issues with identifiers than we can explain without further investigation.
The negative associations for the Excessive Words and Long Identifier flaws for JasperReports may be connected through the widespread use of longer identifier names, with which the development team have become familiar. The negative association for the Non-Dictionary Word flaw is not found with the lower frequency extended dictionaries and becomes a positive association with the ‘Extended 10’ flaw, indicating the importance of widely used application-specific terms in JasperReports. The use of application-specific terms is consistent with the commercialised nature of JasperReports and the finding of Lawrie et al. [4] that domain-specific natural language and abbreviations are more common in identifiers found in commercial source code than in open source.
In previous work [6], conducted at the class level on the same projects, we found fewer relationships between identifier flaws and priority one warnings, and more general relationships with priority two warnings. At the method level a proportion of FindBugs warnings, which apply only to classes, are eliminated from the study. The finer-grained analysis could be the sole explanatory factor for the difference between the two sets of results for FindBugs warnings. However, it is possible that FindBugs warnings applicable at the class level alone, may have been a source of noise.
The evaluation of the predictive quality of each relationship offers further insights. Some relationships, despite the statistical independence of the two classifiers, may be applied as heuristics. The Non-Dictionary Word flaw for Cactus, for example, could be applied as a reasonably reliable classifier of source code for FindBugs priority one warnings, with a probability of > 0.9. In general, the Non-Dictionary Words flaw is a fair to good classifier for FindBugs warnings; however, it is not perfect. The Number of Words and Short Identifiers flaws are much weaker classifiers, with probabilities largely between 0.55 and 0.60, but are still better than guessing.
Tables V and VI show largely consistent associations between the presence of identifier flaws and lower quality source code. In both cases the Capitalisation Anomaly and Non-Dictionary Words flaws provide the stronger classifiers. For complexity and maintainability the Excessive Words, Long Identifier Name, Number of Words, and Short Identifier Name flaws also perform better than chance. However, only the Capitalisation Anomaly and Non-Dictionary Words flaws have consistent relationships with readability.
Identifier length is the only characteristic of individual identifiers that is a component of the readability metric. However, the readability metric developers found that identifier length was not a significant influence on the readability of source code [10]. Our findings, shown in the left hand side of Table VI, suggest the human subjects, against whose judgements of source code readability the metric was trained, were influenced by the conformance of identifier names to familiar typographical conventions, and the use of dictionary words and well-known abbreviations. Further, our findings suggest that longer identifiers do have a negative influence on readability, as evidenced by the statistical associations found for the Excessive Words and Long Identifier flaws in Table VI.
The ROC plots for the Non-Dictionary Words flaw in Figure 1 illustrate that the flaw may be applied to predict lower quality source code. Tables V and VI record probabilities generally greater than 0.6 and sometimes as high as 0.8, showing that the Non-Dictionary Words flaw provides a usable, light-weight classifier for the complexity, maintainability and readability of source code. The probabilities for other identifier flaws given in Tables V and VI show similar predictive values for identifying less-readable, less maintainable and more complex source code. However, the probabilities given in Table IV show that identifier flaws may not be reliably used to predict FindBugs warnings, because of the variation between projects. We previously reported [6] that the Cactus project requires the use of static style checking before code is committed to version control, which influences identifier quality. Also, the commercialised nature of the Hibernate and JasperReports projects may influence the composition of their identifiers [4]. It may be that there are relevant project or domain specific factors into which our current study cannot offer any insights. Boogerd and Moonen [16], [17] attributed many of the differences in their studies to ‘domain factors’. As we deliberately chose not to include projects from identical domains, our results cannot offer clear conclusions on this question.
VIII. CONCLUSIONS
The literature establishes the importance of identifier naming to program comprehension [2], [5]. However, there have been few investigations of the relationship between identifier name quality and source code quality [6], [16]. The contribution of this study is to provide a deeper understanding of this important but largely unexplored relationship.
Our investigation was conducted at a finer-granularity than previous work [6], using a variety of source code quality measures, to gain a richer perspective and discriminate among potentially confounding factors. We evaluated the quality of identifier names using accepted naming conventions validated by empirical study [13], and the natural language content of identifiers, including Java- and application-specific terms.
We evaluated source code quality using four perspectives: the identification of potentially problematic code with FindBugs, the three-metric maintainability index, a human-trained readability metric, and cyclomatic complexity. We used the \( \chi^2 \) and Fisher exact tests to test the independence of poor quality identifiers and more-complex, less-maintainable, and less-readable source code. We found, generally, that poor quality identifiers are associated with lower quality source code. To establish whether the observed associations might have a practical application, we applied a technique used in medicine to evaluate diagnostic tests. We found that some associations occurred with sufficient consistency that they could be applied in a practical setting to identify areas of source code as candidates for intelligent review. We also found that some relationships not found to be statistically significant with the \( \chi^2 \) and Fisher exact tests were potentially useful classifiers.
We investigated 8 open source Java applications using 10 identifier flaws, 3 extended dictionaries, and 6 indicators of source code quality. From our analysis of the 624 relationships, the following lessons for researchers and developers emerge:
- poor quality identifier names are strongly associated with more-complex, less-readable and less-maintainable source code;
- the use of natural language and recognised abbreviations in identifier names may be applied as a light-weight classifier for source code quality;
- the length of identifiers, both in terms of characters and number of component words, can be applied as a light-weight classifier for complexity and maintainability;
- poor quality identifier names are associated with FindBugs warnings; however, the relationships are complex and appear to be application-specific; and
- the only negative associations found were in commercialised projects, indicating there may be relevant differences between open source and commercial code.
Previous work has provided limited perspectives on the relationships between identifier naming, readability and source code quality. Identifiers formed a small part of Boogerd and Moonen’s [17] study of programming conventions and software quality. Buse and Weimer [10] found a relationship between source code readability and FindBugs warnings, and we [6] found associations between identifier quality and FindBugs warnings. This paper is the first to associate multiple naming and source code quality factors at a finer level of granularity. By working at the level of Java methods we were able to investigate the relationships in detail and to provide practical, light-weight and low-cost classifiers for identifying source code which is potentially less-maintainable, less-readable, more-complex and more fault-prone. Further work is required to expand on our findings through the use of other source code quality metrics, including bug reports, the inclusion of semantic information in the measurement of identifier quality, and the investigation of commercial, closed source projects.
ACKNOWLEDGEMENTS
We thank Álvaro Faria, coordinator of the Statistics Advisory Service at The Open University, for his help in choosing the $\chi^2$ statistical method. We also thank Ray Buse and Westley Weimer of the University of Virginia for allowing us to use their readability metric tool. Finally, we thank the anonymous reviewers for their thoughtful comments, which have helped improve this paper.
Partial Models: Towards Modeling and Reasoning with Uncertainty
Michalis Famelis, Rick Salay and Marsha Chechik
University of Toronto, Canada
{famelis,rsalay,chechik}@cs.toronto.edu
Abstract—Models are good at expressing information about software but not as good at expressing modelers’ uncertainty about it. The highly incremental and iterative nature of software development nonetheless requires the ability to express uncertainty and reason with models containing it. In this paper, we build on our earlier work on expressing uncertainty using partial models, by elaborating an approach to reasoning with such models. We evaluate our approach by experimentally comparing it to traditional strategies for dealing with uncertainty as well as by conducting a case study using open source software. We conclude that we are able to reap the benefits of well-managed uncertainty while incurring minimal additional cost.
I. INTRODUCTION
Software engineering is a highly incremental and iterative endeavor where uncertainty can exist at multiple stages of the development process. Consequently, systematic approaches to handling uncertainty are essential throughout the software life-cycle.
Models are used pervasively in software engineering, and their ability to express information about different aspects of software has been studied by many researchers [25]. However, models seldom provide the means for expressing the uncertainty that the modeler has about this information. In this paper, by “uncertainty” we mean “multiple possibilities”. This notion of uncertainty is often used in behavioral modeling [10], but we expand it to arbitrary modeling languages. For example, a modeler of a class diagram may be uncertain about which of two attributes to include in a particular class because they represent different design strategies, and it is too early to know which is correct.
In general, uncertainty can be introduced into the modeling process in many ways: alternative ways to fix model inconsistencies [14], [5], [24], different design alternatives (e.g., the above example) [26], problem-domain uncertainties [27], multiple stakeholder opinions [18], etc. In each case, the presence of uncertainty means that, rather than having a single model, we actually have a set of possible models and we are not sure which is the correct one. Living with uncertainty requires us to keep track of this set and use it within modeling activities wherever we would use an individual model; however, this can be challenging since modeling activities are typically intended for individual models, not sets of them. Furthermore, managing a set of models explicitly is impractical since its size might be quite large. For example, in Sec. VI we give a case study in which two inconsistencies lead to several hundred possible models. On the other hand, if uncertainty is ignored and one particular possible model is chosen prematurely, we risk having incorrect information in the model.
Motivating Example. To help motivate and explain our approach, we use the example of a team engaged in the development of a simple peer-to-peer file sharing application. The team uses UML State Machine diagrams to model the behavior of this application. Its states are Idle, Leeching (downloading a file) and Seeding (sharing a complete local copy). Downloading always starts from the Idle state, and Seeding and Leeching can always be canceled. We assume that at this stage of development, the team has not finalized the exact behavior of the program, due to vague requirements given to them by their client.
Figure 1. (a-f) Six alternative designs for a peer-to-peer file sharing system; (g) a partial model $M_e$ for the six alternatives.
The team has drafted three alternative behavioral designs:
1) “Benevolent”: Once the file is downloaded, the program automatically starts Seeding, as shown in Fig. 1(a).
2) “Selfish”: Once the file is downloaded, the program becomes Idle, and the user can choose whether to start Seeding or not – see Fig. 1(c).
3) “Compromise”: Once the file is downloaded, the program stops accepting new peers. It doesn’t disconnect from peers that were already connected during the Leeching stage, but rather waits while they are finishing before it becomes Idle – see Fig. 1(e).
The team is also unsure whether the program should allow the user to restart a finished download (i.e., download the file again). The three alternatives with this feature are shown in Fig. 1(b, d, f), respectively.
Until the client clarifies the requirements, the team is faced with uncertainty over which design decision to choose. At this point, it is probably useful to be able to reason about the available choices, both to ensure that the models conform to the desired constraints and to explore their properties. For example, assume that the team is using a code generator that, in order to ensure determinism, requires two hard constraints:
HC1: No two transitions have the same source and target.
HC2: No state is a sink.
Additionally, the team is interested in two “nice-to-have” properties, i.e., soft constraints that are not strictly required but are desirable:
SC1: Users can share files they already have, (i.e., Seeding is directly reachable from Idle).
SC2: Users can always cancel any operation (i.e., every non-idle state has a transition to Idle on cancel()).
In order to reason effectively about any of these properties over the entire set of alternatives, the team may want to ask the following questions:
Does the property hold for all, some or none of the alternatives? This can help determine how critical some property is in selecting alternatives when uncertainty is lifted. For example, HC2 holds for all alternatives, and therefore is not going to be a main reason in selecting one, once uncertainty is resolved. Moreover, if some property does not hold for any alternative, it may be an indication that the team needs to revisit the designs, sooner rather than later. For example, knowing early on that HC1 does not hold for the alternatives in Fig. 1(c, d) may be an indication that the team needs to reconsider the design of the “selfish” scenario.
If the property does not hold for all alternatives, why is it so? This form of diagnosis can help guide development decisions even before uncertainty is lifted. Developers may be interested in finding one counter-example of an alternative where the property gets violated (or if they expected that the property would be violated – an example where it holds) to help them debug the set of alternatives. For example, locating the alternative in Fig. 1(e) might be sufficient for the team to understand why SC2 does not hold for all alternatives. In other cases, we may prefer to calculate the entire subset of alternatives that violate the property, to explore whether there is a common underlying cause, as with the alternatives in Fig. 1(c, d) that violate the hard constraint HC1.
If the property is a necessary constraint, how to filter out the alternatives for which it gets violated? Developers may be interested in this sort of property-driven refinement of the set of alternatives. For example, if the team decides that SC2 is a necessary feature, they should be able to restrict their working set of alternatives to those that satisfy it, namely, those in Fig. 1(a-d).
Contributions. In this paper, we elaborate and evaluate a key component of our broad research agenda for managing uncertainty within models [7]: reasoning with models containing uncertainty. Specifically, we define partial models, show how to construct them, and then describe reasoning operators aimed at answering the questions posed by the motivating example. We then extensively evaluate our approach by experimentally comparing it to conventional strategies for dealing with uncertainty, as well as by conducting a case study of an open source software project.
Organization of the paper. The rest of this paper is organized as follows: In Sec. II, we provide the necessary background. In Sec. III, we formally define partial models. Sec. IV develops the core methods of reasoning with partial models. These are experimentally evaluated in Sec. V and then applied to a case study in Sec. VI. We discuss related work in Sec. VII and conclude the paper with a summary and suggestions for further research in Sec. VIII.
II. BACKGROUND
In this section, we establish the notation and introduce concepts used in the remainder of the paper. Specifically, we ground our approach in graph-based modeling languages and propositional logic.
Modeling Formalisms. In this paper, a model is a typed graph that conforms to some metamodel represented by a distinguished type graph. Our approach is domain-independent, in the sense that it can handle arbitrary graph-based modeling languages. The definitions that follow are based on [6].
Definition 1: A graph is a tuple \( G = (V, E, s,t) \), where \( V \) is a set of nodes, \( E \) is a set of edges, and \( s,t : E \rightarrow V \) are the source and target functions, respectively, that assign each edge a source and target node.
Definition 2: A typed graph (model) of type \( T \) is a triple \( (G, type, T) \) consisting of a graph \( G \), a metamodel \( T \) and a typing function \( type : G \rightarrow T \) that assigns types to the elements of \( G \).
For example, the models shown in Fig. 1 are typed with the metamodel shown in Fig. 2.
Definition 3: The scope (or vocabulary) of a model \( (G, type, T) \), with \( G = (V, E, s, t) \), is the set \( S = V \cup E \) of its typed nodes and edges.
For example, the scope of the model in Fig. 1(a) consists of the states Idle, Leeching and Seeding and the edges start(), completed(), etc.
In the following, we often refer to nodes and edges that are in the scope of a model as elements or atoms of the model.
**From models to formulas and back.** To encode a model in propositional logic, we first map elements in its scope into propositional variables and then conjoin them. To ensure that this operation is reversible, we define specific naming conventions for the propositional variables:
- A node element \( N \) of type \( T \) is mapped to a propositional variable \( "N_T" \).
- An edge element \( E \) of type \( T \) with source node \( N_1 \) and target node \( N_2 \) is mapped to a propositional variable \( "E_{N_1,N_2,T}" \).
For example, the propositional encoding of the model in Fig. 1(b) is:
\[
\begin{aligned}
&\text{Idle}_{\text{State}} \land \text{Leeching}_{\text{State}} \land \text{Seeding}_{\text{State}} \land {}\\
&\text{start}_{\text{Idle},\text{Leeching},\text{Transition}} \land \text{cancel}_{\text{Leeching},\text{Idle},\text{Transition}} \land {}\\
&\text{completed}_{\text{Leeching},\text{Seeding},\text{Transition}} \land \text{cancel}_{\text{Seeding},\text{Idle},\text{Transition}} \land {}\\
&\text{restart}_{\text{Seeding},\text{Leeching},\text{Transition}}
\end{aligned}
\]
Given a propositional encoding \( P(m) \) of a model \( m \), we can uniquely reconstruct the model \( m \) using the naming conventions. First, for every propositional variable whose name matches the pattern \( N_T \), we create a node of type \( T \), named \( N \). Then, for every propositional variable whose name matches the pattern \( E_{N_1,N_2,T} \), we create an edge of type \( T \) between the nodes \( N_1 \) and \( N_2 \), with the label \( E \).
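The encode/decode round trip described above can be sketched in Python. This is a minimal illustration, not the authors' tooling; it uses underscore-separated variable names and therefore assumes element names themselves contain no underscores:

```python
def encode_model(nodes, edges):
    """nodes: list of (name, type); edges: list of (label, src, tgt, type).
    Returns the set of propositional variables whose conjunction encodes the model."""
    variables = {f"{n}_{t}" for n, t in nodes}
    variables |= {f"{label}_{src}_{tgt}_{typ}" for label, src, tgt, typ in edges}
    return variables

def decode_model(variables):
    """Reconstruct (nodes, edges) from variable names using the naming conventions."""
    nodes, edges = set(), set()
    for v in variables:
        parts = v.split("_")
        if len(parts) == 2:    # pattern N_T -> node N of type T
            nodes.add((parts[0], parts[1]))
        elif len(parts) == 4:  # pattern E_N1_N2_T -> edge E from N1 to N2 of type T
            edges.add((parts[0], parts[1], parts[2], parts[3]))
    return nodes, edges
```

Because the conventions are deterministic, `decode_model(encode_model(...))` recovers exactly the original model, which is what makes the encoding reversible.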
This propositional encoding also allows us to embed models into larger scopes, by negating all the variables not in the original scope. For example, the model in Fig. 1(a) can be expressed in the scope of the model in Fig. 1(b) as:
\[
\begin{aligned}
&\text{Idle}_{\text{State}} \land \text{Leeching}_{\text{State}} \land \text{Seeding}_{\text{State}} \land {}\\
&\text{start}_{\text{Idle},\text{Leeching},\text{Transition}} \land \text{cancel}_{\text{Leeching},\text{Idle},\text{Transition}} \land {}\\
&\text{completed}_{\text{Leeching},\text{Seeding},\text{Transition}} \land \text{cancel}_{\text{Seeding},\text{Idle},\text{Transition}} \land {}\\
&\neg\,\text{restart}_{\text{Seeding},\text{Leeching},\text{Transition}}
\end{aligned}
\]
Using the propositional representation, we also define a simple form of model union. Assuming two elements with the same name are considered identical, the union of two models is a model that corresponds to a formula that is a conjunction of all the variables in the union of their scopes. For example, the union of the models in Fig. 1(a, b) is:
\[
\begin{aligned}
&\text{Idle}_{\text{State}} \land \text{Leeching}_{\text{State}} \land \text{Seeding}_{\text{State}} \land {}\\
&\text{start}_{\text{Idle},\text{Leeching},\text{Transition}} \land \text{cancel}_{\text{Leeching},\text{Idle},\text{Transition}} \land {}\\
&\text{completed}_{\text{Leeching},\text{Seeding},\text{Transition}} \land \text{cancel}_{\text{Seeding},\text{Idle},\text{Transition}} \land {}\\
&\text{restart}_{\text{Seeding},\text{Leeching},\text{Transition}} \land \text{share}_{\text{Idle},\text{Seeding},\text{Transition}}
\end{aligned}
\]
A useful extended scope is the embedding of a sparse graph into the scope of its corresponding complete graph. In the union of models with extended scopes, variables only appear negated if they are negated in both input models.
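The embedding and union operations above can be sketched as follows, assuming a model is given as the set of its present atoms and an embedded model is a map from every atom in the scope to a truth value (the helper names are hypothetical):

```python
def embed(model_atoms, scope):
    """Express a model (the set of atoms present in it) in a larger scope:
    every atom of the scope that is not in the model is negated (mapped to False)."""
    return {v: (v in model_atoms) for v in scope}

def union(m1, m2):
    """Union over the merged scope: an atom appears negated in the result
    only if it is negated (or absent) in both input models."""
    scope = set(m1) | set(m2)
    return {v: m1.get(v, False) or m2.get(v, False) for v in scope}
```

For example, uniting two embedded models that each lack a different transition yields a model containing both transitions, while an atom absent from both (such as an unused state) remains negated.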
**Properties.** We consider properties expressed in first order logic (FOL) or in a similar language such as the Object Constraint Language (OCL) [15]. For example, the property HC1 is expressed in FOL as:
\[
\forall t_1, t_2 : \text{Transition} \;\cdot\; \big(\text{Source}(t_1) = \text{Source}(t_2) \land \text{Target}(t_1) = \text{Target}(t_2)\big) \iff (t_1 = t_2)
\]
An FOL formula can be grounded over the vocabulary of a particular model that is encoded in a propositional formula. For example, grounding HC1 over the vocabulary of the model in Fig. 1(a), given that it contains 4 transition elements and that HC1 is a universal property, results in \( \Phi_{HC1} \), a conjunction of 10 unique terms of the form \( (S_i = S_j \land T_i = T_j) \iff (E_i = E_j) \), where \( E_{i,j} \) are propositional variables representing transitions, \( S_{i,j}, T_{i,j} \) are variables representing their respective source and target states, and \( = \) signifies identity.
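The grounding of HC1 can be illustrated with a small sketch that pre-evaluates the grounded terms on a concrete model. The transition identifiers below are hypothetical, and we assume the 10 unique terms correspond to the unordered pairs of the 4 transitions, including pairs with \( i = j \):

```python
from itertools import combinations_with_replacement

def ground_hc1(transitions):
    """transitions: list of (edge_id, source, target) triples.
    Returns the grounded HC1 terms, one per unordered pair of transitions
    (including i = j), pre-evaluated on the concrete endpoints."""
    terms = []
    for (e1, s1, t1), (e2, s2, t2) in combinations_with_replacement(transitions, 2):
        same_ends = (s1 == s2) and (t1 == t2)
        # each term is (S_i = S_j and T_i = T_j) <-> (E_i = E_j)
        terms.append(same_ends == (e1 == e2))
    return terms
```

HC1 holds on a concrete model exactly when all grounded terms evaluate to true; adding a parallel transition between the same pair of states falsifies one of the terms.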
### III. Partial Model Preliminaries
In this section, we formally define partial models and their associated operations. Semantically, a partial model represents a set of classical (i.e., non-partial) models.
**Partial Models.** The particular type of partiality we consider in this paper allows a modeler to express uncertainty as to whether particular model atoms should be present in the model. The model is accompanied by a propositional formula, called the *may formula*, which explicates the allowable combinations of such atoms.
**Definition 4:** A Partial Model is a tuple \( (G, vm, em, \phi) \), where \( G = \langle V, E, s, t, type \rangle \) is a complete typed graph, \( vm : V \to B \) and \( em : E \to B \), where \( B \) is the set \{True, False, Maybe\}, are functions for annotating atoms in \( G \), and \( \phi \) is a propositional *may formula* over the scope \( S = V \cup E \), built as described in Sec. II.
In the above definition, an annotation True (False) means that the atom must (must not) be present in the model, whereas Maybe indicates uncertainty about whether the atom should be present in the model. In other words, a partial model consists of a complete typed graph whose elements are annotated with True, False or Maybe, and a may formula that describes the allowed configurations of its elements. The annotation functions are often omitted for brevity.
Model \( M_e \) in Fig. 1(g) is an example of a partial model. The elements annotated with True, such as the state Idle and the transition start(), appear with solid lines, and its Maybe elements, such as the state Finishing, with dashed lines. The edges that are not shown (such as any edge between the states Finishing and Leeching) are annotated with False. \( M_e \) is accompanied by the may formula \( \phi_e \), shown next to it in the figure. We have used capital letters as shortcuts for the full names of the propositional variables that correspond to the Maybe elements. For example, \( F \) stands for the variable \( \text{Finishing}_{\text{State}} \).
Given a partial model \( M \), let \( C(M) \) be the set of classical (or concrete) models that it represents, called concretizations. For example, \( C(M_e) \) consists of the models shown in Fig. 1(a)-(f). A partial model with an empty set of concretizations is called *inconsistent*. In what follows, we only assume consistent partial models.
The size of the set of concretizations reflects the modeler’s degree of uncertainty. Uncertainty can be reduced by shrinking the set of concretizations via refinement. A partial model is refined by changing the annotations of its elements to increase the level of certainty: Maybe elements can be changed to True or False, or remain Maybe; True and False annotations must remain unchanged, since information about them is already certain. Changes to Maybe elements must not violate the may formula of the original partial model, and thus produce a (nonempty) subset of the concretizations allowed by it.
Definition 5: Given two partial models $M_1$ and $M_2$, where $M_i = \langle G_i, vm_i, em_i, \phi_i \rangle$, with $G_1 = G_2$, we say that $M_2$ refines $M_1$ (or that $M_1$ is more abstract than $M_2$), denoted $M_2 \preceq M_1$, iff $C(M_2) \subseteq C(M_1)$ over the same scope $S$.
For example, the model $M^{HC1}_e$ in Fig. 4(b) is more refined than the model $M_e$ in Fig. 1(g). In particular, $C(M_e)$ consists of the models in Fig. 1(a)-(f), whereas $C(M^{HC1}_e)$ consists of the models in Fig. 1(a, b, e, f). Thus, the model $M^{HC1}_e$ has less uncertainty.
A partial model without Maybe elements has exactly one concretization. The naming conventions in Sec. II allow us to define a unique conversion between a classical model and a corresponding partial model with a unique concretization.
Normal Forms. Given a set of classical models, there is no unique way to represent them as a partial model. For example, $M^{\neg HC1}_e$ in Fig. 4(a) represents the models in Fig. 1(c, d). However, the same set of concretizations could also be expressed by: (a) removing from the scope of $M^{\neg HC1}_e$ extraneous False elements, such as the state Finishing, and (b) rewriting its propositional formula only in terms of its Maybe elements. In the case of $M^{\neg HC1}_e$, the partial model has only one Maybe element (the transition on restart()), which can be either True or False; therefore, the attached propositional formula is a tautology.
Definition 6: Two partial models $M_1$, $M_2$ are equivalent, denoted $M_1 \sim M_2$, iff $C(M_1) = C(M_2)$. Obviously, $M_1 \sim M_2$ iff $M_1 \preceq M_2$ and $M_2 \preceq M_1$.
To help represent models, we define two normal forms: Graphical Normal Form (GNF) and Propositional Normal Form (PNF). Intuitively, a model in GNF represents most information in the graph, whereas in PNF it represents all the information in the formula. For example, the GNF and PNF for $M^{HC_1}$ are shown in Fig. 3(a-b), respectively. In the latter, we did not represent False edges which would otherwise be represented by negated variables.
As partial models are complete graphs, the normal form of $M$ should be restricted to its largest complete subgraph that only contains True and Maybe nodes. We call the scope of this subgraph minimal.
In the following, the symbol $\models$ signifies logical entailment.
Definition 7: Given a partial model $M = \langle G, \phi \rangle$, its GNF is a partial model $M^{\text{GNF}} = \langle G^{\text{GNF}}, \phi^{\text{GNF}} \rangle$, constructed as follows:
- $G^{\text{GNF}} \subseteq G$ and the scope $S$ of $M^{\text{GNF}}$ is minimal.
- For every atom $a$ in $G$, if $\phi \models a$, then $a$ is annotated with True in $G^{\text{GNF}}$.
- For every atom $a$ in $G$, if $\phi \models \neg a$, then $a$ is annotated with False in $G^{\text{GNF}}$.
- $\phi^{\text{GNF}}$ is specified only in terms of elements annotated with Maybe in $G^{\text{GNF}}$.
- $\phi \models \phi^{\text{GNF}}$.
Proposition 1: Let $M$ be a partial model and $M^{\text{GNF}}$ be a result of applying Definition 7. Then, $M \sim M^{\text{GNF}}$.
Definition 8: Given a partial model $M = \langle G, \phi \rangle$, its PNF is a partial model $M^{\text{PNF}} = \langle G^{\text{PNF}}, \phi^{\text{PNF}} \rangle$ constructed as follows:
- $G^{\text{PNF}} \subseteq G$ and the scope $S$ of $M^{\text{PNF}}$ is minimal.
- All elements in $G^{\text{PNF}}$ are annotated with Maybe.
- $\phi^{\text{PNF}} \models \phi$.
Proposition 2: Let $M$ be a partial model and $M^{\text{PNF}}$ be a result of applying Definition 8. Then, $M \sim M^{\text{PNF}}$.
Properties of Partial Models. The result of checking a property on a partial model can be True, False or Maybe. True means that the property holds for all concretizations, False that it does not hold for any of them, and Maybe that it holds for some, but not all concretizations. This is called thorough checking [2]. Moreover, by Definition 5, refinement preserves True and False properties. That is, as uncertainty gets reduced, values of properties about which we were certain remain unaffected.
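Thorough checking can be sketched directly on the concretization semantics. This naive enumeration (not the SAT-based procedure of Sec. IV) makes the three-valued outcome explicit:

```python
def check_property(concs, prop):
    """Thorough checking over a set of concretizations (each a set of atoms).
    Returns 'True' if prop holds on all of them, 'False' if on none,
    and 'Maybe' if on some but not all."""
    results = {prop(m) for m in concs}
    if results == {True}:
        return "True"
    if results == {False}:
        return "False"
    return "Maybe"
```

Since refinement shrinks the set of concretizations, a property already evaluated to True or False cannot change value under refinement, while a Maybe property may resolve either way.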
IV. REASONING WITH PARTIAL MODELS
In this section, we describe how to facilitate decision deferral in the presence of uncertainty by using partial models to reason with sets of alternatives. In particular, we define four reasoning operations:
OP1: Construction: how to create a partial model to (precisely) represent a set of alternatives.
OP2: Verification: how to check whether a partial model satisfies a property.
OP3: Diagnosis: how to find out which alternatives violate the property.
OP4: Refinement: how to filter out the alternatives that violate the property.
OP1: Construction. Construction of partial models is achieved by merging the alternatives and annotating the elements that vary between them by Maybe.
As an example of verification (operator OP2, described below), consider checking whether the property HC1 holds for the partial model \(M_e\) in Fig. 1(g). We first put \(M_e\) in PNF to get the propositional formula \(\Phi_e\). Then we express HC1 as a propositional formula \(\Phi_{HC1}\), by grounding it over the vocabulary of \(M_e\), as described in Sec. II. Checking the property means checking the satisfiability of \(\Phi_e \land \neg \Phi_{HC1}\) and \(\Phi_e \land \Phi_{HC1}\). The SAT solver returns one of the two models from Fig. 1(c, d) as a satisfying assignment for \(\Phi_e \land \neg \Phi_{HC1}\), and one of those in Fig. 1(a, b, e, f) for \(\Phi_e \land \Phi_{HC1}\). Thus, the value of HC1 on \(M_e\) is Maybe.
Algorithm 1 Construction of partial models.
**Input:** Set \(A\) of \(n\) concrete models \(m_i\), \(i \in [0..n-1]\).
**Output:** A partial model \(M = \langle G_M, \Phi_M \rangle\).
1. Construct \(G_M\) as the union of all \(m_i \in A\).
2. Annotate non-common elements in \(G_M\) by Maybe.
3. Create \(\Phi_M := \text{False}\)
4. for all \(m_i \in A\) do
5. Create \(\phi_i = e_0 \land e_1 \land \ldots \land e_k\) over the Maybe elements \(e_x\) of \(G_M\), where \(e_x\) appears negated iff \(e_x \notin m_i\).
6. \(\Phi_M := \Phi_M \lor \phi_i\)
7. end for
8. return \(M = \langle G_M, \Phi_M \rangle\)
Additionally, a may formula is constructed to capture the allowable configurations of the Maybe elements.
Algorithm 1 shows how to create a partial model \(M\) from a set \(A\) of alternatives. By construction, \(C(M) = A\), which establishes the algorithm’s correctness.
In our motivating example, the six alternative behavioral designs can be represented using the partial model shown in Fig. 1(g). In the figure, elements annotated as Maybe appear dashed. For example, the state Finishing exists in only two alternatives, and the transition on restart() – in three; therefore, both are represented as Maybe. The rest of the elements are present in all of the alternatives, and thus are represented as True and appear solid. The corresponding may formula is shown in Fig. 1(g).
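Algorithm 1 can be sketched as follows, representing each concrete model as a set of atoms and the may formula \(\Phi_M\) as a predicate over assignments of the Maybe elements (a toy illustration of the construction, not the authors' implementation):

```python
def construct_partial_model(alternatives):
    """Algorithm 1 sketch. alternatives: list of sets of atoms (concrete models).
    Returns (true_atoms, maybe_atoms, phi), where phi is the DNF may formula
    Phi_M, given as a predicate over assignment dicts of the Maybe atoms."""
    g = set().union(*alternatives)                 # step 1: union graph G_M
    true_atoms = set.intersection(*alternatives)   # common elements stay True
    maybe_atoms = g - true_atoms                   # step 2: non-common -> Maybe
    # steps 3-7: one conjunct (cube) phi_i per alternative m_i
    cubes = [{a: (a in m) for a in maybe_atoms} for m in alternatives]
    def phi(asg):
        # Phi_M is the disjunction of the cubes phi_i
        return any(all(asg[a] == c[a] for a in maybe_atoms) for c in cubes)
    return true_atoms, maybe_atoms, phi
```

By construction, the assignments satisfying `phi` are exactly the input alternatives, which mirrors the correctness argument \(C(M) = A\).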
**OP2:** Verification. The purpose of the verification task is to answer the question "Does the desired property hold?".
In order to facilitate reasoning, we put the partial model in PNF and appropriately combine its PNF may formula with the formula representing the property we want to check. A SAT solver is then used to check whether the encoding of the model entails that of the property.
Specifically, the verification engine receives a partial model \(M\) that is represented in PNF by the propositional formula \(\Phi_M\) and a property expressed as a propositional formula \(\Phi_p\). We then check the satisfiability of the expressions \(\Phi_M \land \neg \Phi_p\) and \(\Phi_M \land \Phi_p\), using two queries to a SAT solver, combining the results to determine the outcome of the property on the partial model as described in Table I. For example, if both the property and its negation are satisfiable, then there is at least one concretization of the partial model where the property holds and another in which it does not. Thus, on the partial model, the property has value Maybe.
Table I: Checking property \(p\) on the partial model \(M\).
| \(\Phi_M \land \Phi_p\) | \(\Phi_M \land \neg \Phi_p\) | Property \(p\) |
|---|---|---|
| SAT | SAT | Maybe |
| SAT | UNSAT | True |
| UNSAT | SAT | False |
| UNSAT | UNSAT | (inconsistent \(M\)) |
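The two-query scheme of Table I can be sketched with a toy brute-force satisfiability check standing in for the SAT solver (formulas are predicates over assignment dicts; the helper names are hypothetical):

```python
from itertools import product

def satisfiable(variables, formula):
    """Tiny brute-force stand-in for a SAT solver: try every assignment."""
    vs = sorted(variables)
    return any(formula(dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))

def verify(variables, phi_m, phi_p):
    """OP2: combine the two satisfiability queries as in Table I."""
    pos = satisfiable(variables, lambda a: phi_m(a) and phi_p(a))
    neg = satisfiable(variables, lambda a: phi_m(a) and not phi_p(a))
    if pos and neg:
        return "Maybe"
    if pos:
        return "True"
    if neg:
        return "False"
    return "Inconsistent"
```

A real implementation would hand the two conjunctions to an off-the-shelf solver; only the combination logic in `verify` is specific to partial-model checking.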
OP3: Diagnosis. If the result of the verification task is False or Maybe, the next step is to do diagnosis, i.e., to answer the question "Why does the property of interest not hold?". Or, conversely, if the outcome was Maybe where it was expected to be False, to answer the question "Why is the property not violated?". There are three forms of feedback that can be given to the developer:
1) Return one counter-example – a particular concretization for which the property does not hold (OP3a): Such a counter-example is provided “for free” as a by-product of SAT-based verification. In particular, if the property is False, the SAT solver produces a satisfying assignment for \(\Phi_M \land \neg \Phi_p\).
This assignment is a valuation for all propositional variables that correspond to elements in the scope of \(M\) and can thus be visualized as a classical model for presentation to the user. To create the visualization, we conjoin all variables, negating those that had value False in the satisfying assignment. Provided the naming conventions in Sec. II are followed, this conjunction uniquely corresponds to a classical model, which is then presented as the feedback.
In our running example, verifying SC2 on the model \(M_e\) involves checking the satisfiability of \(\Phi_e \land \neg \Phi_{SC2}\). This formula is satisfiable, and the SAT solver returns one of the concretizations in Fig. 1(e, f) as a satisfying assignment.
2) Return a concretization where the property does hold (OP3b): This is also a by-product of the verification stage: if the result of checking the property is Maybe, the SAT solver produces a satisfying assignment for the formula \(\Phi_M \land \Phi_p\). This valuation is expressed as a model (as discussed above) and provided to the user.
In the case of verifying SC2, the SAT solver returns a valuation that corresponds to one of the concretizations in Fig. 1(a, b, c, d) as a satisfying assignment to the formula \(\Phi_e \land \Phi_{SC2}\).
3) Return a partial model representing the set of all concretizations for which the property does not hold (OP3c): These concretizations are characterized by the formula \(\Phi_M \land \neg \Phi_p\). In our example, the concretizations of \(M_e\) that violate HC1 are those that satisfy the formula \(\Phi_e \land \neg \Phi_{HC1}\), i.e., those in Fig. 1(c, d).
In order to create useful feedback for the user, we consider a new partial model \(M^{\neg p}\) with the same vocabulary as \(M\), represented in PNF by the formula \(\Phi_M \land \neg \Phi_p\). We visualize \(M^{\neg p}\) by putting it into GNF. In our example, the partial model \(M^{\neg HC1}_e\) that represents the set of concretizations of \(M_e\) violating HC1 is shown in Fig. 4(a). \(M^{\neg HC1}_e\) is expressed in terms of the larger scope of \(M_e\), and therefore certain elements are annotated False and omitted from the diagram. The overall process is described in Algorithm 2. As the resulting partial model \(M^{\neg p}\) is constructed from the formula \(\Phi_M \land \neg \Phi_p\), its set of concretizations is exactly the subset of concretizations of the original partial model for which the property is violated; in other words, \(C(M^{\neg p}) = \{ m \in C(M) \mid m \not\models p \}\).

**Figure 4.** (a) Partial model \(M^{\neg HC1}_e\), representing the concretizations of \(M_e\) that violate HC1. (b) Partial model \(M^{HC1}_e\), representing the concretizations of \(M_e\) that satisfy HC1.
**OP4: Property-driven refinement.** If the result of verifying an important property is Maybe, the developer may want to refine the partial model into a constrained version such that all of its concretizations satisfy the property.

In our example, the set of concretizations of \(M_e\) that satisfy HC1 consists of those in Fig. 1(a, b, e, f). Constructing the partial model \(M^{HC1}_e\) that represents these is done using the same method (shown in Algorithm 2) as for constructing its complement, \(M^{\neg HC1}_e\): the formula \(\Phi_e \land \Phi_{HC1}\) is constructed and then put into GNF. The result is shown in Fig. 4(b).

As \(M^{HC1}_e\) is constructed from the formula \(\Phi_e \land \Phi_{HC1}\), its set of concretizations is exactly the subset of concretizations of the original \(M_e\) for which the property holds; thus, \(M^{HC1}_e \preceq M_e\).
V. EXPERIMENTS
We conducted a preliminary empirical study to assess the feasibility and scalability of our approach to reasoning using partial models. More specifically, we attempted to answer the following research questions:
**RQ1**: How feasible is reasoning with sets of models with the partial model representation in comparison to the classical approach?
**RQ2**: How sensitive are the partial modeling representation and reasoning techniques to the varying degree of uncertainty?
To get answers to RQ1 and RQ2, we set up experiments with parameterized random inputs to simulate various categories of realistic reasoning settings.
**Experimental setup.** The reasoning tasks described in Sec. IV are operationalized using two fundamental tasks:
**Algorithm 2** Get all concretizations that violate (satisfy) a property.
**Input:** A partial model $M_{in}$ and a property $C$
**Output:** A partial model $M_{out}$ abstracting exactly the concretizations of $M_{in}$ that violate (satisfy) $C$.
1. Put $M_{in}$ in PNF, to get $\Phi_{in}$.
2. Ground $C$, to get $\Phi_C$.
3. Construct $\Phi_{out} := \Phi_{in} \land \neg \Phi_C$ (respectively, $\Phi_{in} \land \Phi_C$).
4. Create $M_{out}$ with the same vocabulary as $M_{in}$ and PNF formula $\Phi_{out}$
5. Put $M_{out}$ in GNF and return it
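Steps 4 and 5 of Algorithm 2 (building $M_{out}$ and putting it into GNF) can be sketched by brute force: an atom entailed by the formula is annotated True, an atom whose negation is entailed is annotated False, and the rest are Maybe. This is a toy stand-in for real entailment checks, and it assumes the input formula is satisfiable (a consistent result):

```python
from itertools import product

def to_gnf(variables, phi):
    """Annotate each atom from the satisfying assignments of phi:
    forced-true atoms become True, forced-false become False, others Maybe."""
    vs = sorted(variables)
    sat = []
    for vals in product([False, True], repeat=len(vs)):
        asg = dict(zip(vs, vals))
        if phi(asg):
            sat.append(asg)
    ann = {}
    for v in vs:
        seen = {a[v] for a in sat}  # values v takes across satisfying assignments
        ann[v] = "True" if seen == {True} else ("False" if seen == {False} else "Maybe")
    return ann
```

In practice these forced values would be computed with entailment queries ($\Phi_{out} \models a$ and $\Phi_{out} \models \neg a$) rather than by enumeration.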
**T1:** Check the satisfiability of the formulas $\Phi_M \land \Phi_P$ and $\Phi_M \land \neg \Phi_P$ (for OP2, OP3a and OP3b).
**T2:** Construct a new partial model in GNF that has a PNF formula $\Phi_M \land \Phi_P$ (for OP3c with $\neg \Phi_P$ and OP4).
We focus our experimental evaluation on T1 and T2 because they require the use of SAT-solving technology, as opposed to Construction (OP1), which is linear in the number of input classical models and their elements (see Algorithm 1). Specifically, to answer RQ1, we conducted two experiments:
E1 Compare the performance of reasoning via task T1 with the performance of classical reasoning, i.e., checking the property on each of the concretizations in the set represented by $M$.
E2 Compare the relative performance of running T2 to get a partial model representing the subset of concretizations that satisfy a property, to the performance of incrementally collecting all the classical models as satisfying assignments of the formula $\Phi_M \land \Phi_P$.
To answer RQ2, we executed the experiments E1 and E2 with randomly generated experimental inputs that were parameterized to allow for different sizes, both with respect to model size and the size of the set of concretizations.
**Experimental inputs.** For typed models, the metamodel induces additional constraints in the propositional encoding. This makes the problem easier for the SAT solver, as it constrains the search space. We therefore chose untyped models as inputs to our experiments, since they are the least constrained and thus the most difficult for the SAT solver.
We considered the following experimental parameters:
1) size of the partial model, 2) size of its set of concretizations, 3) quantification (e.g., existential, universal, mixed) of the property, and 4) result of property checking (True, False, Maybe). To manage the multitude of possible combinations of these, we discretized the domain of each parameter into several categories.
We defined four size categories, based on the total number of elements (nodes and edges) in the partial model: Small (S), Medium (M), Large (L) and Extra-Large (XL). Based on pilot experiments, we defined ranges of reasonable values for each size category and selected a representative exemplar. The ranges of the categories and the selected exemplars for each category are shown in Table II.
In a similar manner, we defined four categories (S, M, L, XL) for the size of the set of concretizations of the generated model. The size of this set reflects the degree of uncertainty encoded in the partial model: category S corresponds to little uncertainty over which alternative to choose, and category XL corresponds to extreme uncertainty. Based on pilot experiments, we defined reasonable ranges and selected a representative exemplar for each category, as shown in Table III.
We also defined four property types (based on the quantification of FOL formulas): “fully existential” (E), “fully universal” (A) and two “mixed” categories: “exists-forall” (EA) and “forall-exists” (AE). Additionally, we considered the three possible results that can be yielded by property checking – True, Maybe and False.
**Implementation.** We implemented tooling support to randomly generate inputs based on the experimental properties outlined in Sec. V. Specifically, we generate propositional formulas expressed in the input format of the MathSAT 4 SMT Solver [3]. Each such propositional formula $\Phi_r$ is a conjunction of the form $\Phi_r = \Phi_a \land \Phi_e \land \Phi_p$, where $\Phi_a$ represents the annotations of the elements of the partial model, $\Phi_e$ – its set of concretizations and $\Phi_p$ – the property being checked. We describe these below.
For each random partial model, we considered a complete graph whose elements are in the model’s finite vocabulary of $N_1$ nodes and $N_1^2$ edges. Each element is randomly annotated as True or False, and $N_2$ elements are annotated as Maybe. Each element in the model is represented by a Boolean variable. The formula $\Phi_a$ captures the set of variables that make up the model, as well as their annotations. In particular, $\Phi_a$ is a conjunction of $N_1(N_1 + 1)$ terms, one for each element. If an element $v_\alpha$ is annotated as True, its corresponding term is the non-negated variable $v_\alpha$. If it is annotated as False, its term is $\neg v_\alpha$; if it is annotated as Maybe, its term is $(v_\alpha \lor \neg v_\alpha)$. This tautological disjunction is necessary for $v_\alpha$ to be considered by the SAT solver even if it does not appear elsewhere in $\Phi_r$.
Each model is accompanied by the formula $\Phi_e$ that captures its set of concretizations. $\Phi_e$ is a disjunction of $N_3$ unique sub-formulas representing individual concretizations. Each one is a conjunction of the $N_2$ Maybe variables, a random number of which is negated. This way, each sub-formula defines an allowable configuration of Maybe elements.
Defining specific values for $N_1$ and $N_3$, we were able to generate models for each of the combinations of the parameters in Tables II and III.
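The generation of $\Phi_a$ and $\Phi_e$ can be sketched as follows. This hypothetical helper returns the element annotations and the list of cubes whose disjunction forms $\Phi_e$; it assumes $N_3 \le 2^{N_2}$, so that $N_3$ unique Maybe configurations exist:

```python
import random

def generate_input(n1, n2, n3, seed=0):
    """Sketch of the random-input generator: n1 nodes (hence n1*n1 edges),
    n2 Maybe elements, n3 concretizations. Returns (annotations, cubes)."""
    rng = random.Random(seed)
    # complete graph over the vocabulary: n1 nodes + n1^2 edges = n1*(n1+1) elements
    elements = [f"v{i}" for i in range(n1 * (n1 + 1))]
    maybes = rng.sample(elements, n2)
    ann = {e: rng.choice(["True", "False"]) for e in elements}
    for e in maybes:
        ann[e] = "Maybe"
    cubes = set()
    while len(cubes) < n3:  # n3 unique sub-formulas, one per concretization
        cubes.add(tuple(rng.choice([True, False]) for _ in maybes))
    return ann, [dict(zip(maybes, c)) for c in cubes]
```

Each returned cube fixes a truth value for every Maybe element, so the disjunction of the cubes plays the role of $\Phi_e$ in the generated formula $\Phi_r = \Phi_a \land \Phi_e \land \Phi_p$.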
To generate formulas $\Phi_p$ that simulate grounded FOL properties, we used property “templates”. For example, to capture the (trivial) FOL formula $\Phi_{ex} = \exists x, y : x \Rightarrow y$, we created the template “$X$ implies $Y$”. Given a partial model with elements represented by the set of four variables \{v1, v2, v3, v4\}, the propositional formula that corresponds to grounding $\Phi_{ex}$ over the vocabulary of the model is created as a randomly instantiated disjunction of copies of the template, e.g., “(v3 implies v2) or (v1 implies v4)”. In creating templates, our goal was to simulate realistic properties, such as the ownership relationship. For example, the template “(not X) implies (not Y)” indicates that Y cannot exist without its “owner” element, X.
Each template was repeated $N_6$ times, with $N_6$ large enough so that $\Phi_p$ contains $N_4$ variables, out of which $N_5$ correspond to Maybe elements. Preliminary results from pilot experiments indicated that these parameters did not significantly affect the observed times, and therefore in the generated inputs we fixed them to $N_4 = 0.1 \times N_1$ and $N_5 = \min(N_2, 0.05 \times N_1)$.
To create properties in the FE (“fully existential”) category, the template is repeated as a series of $N_6$ disjunctions, and for FA properties as a series of $N_6$ conjunctions. EA properties were generated as $N_7$ disjunctions of conjunctions of $N_8$ instantiations of the template, where $N_7$ and $N_8$ were random numbers s.t. $N_7 \times N_8 = N_6$. Similarly, AE properties were comprised of $N_7$ conjunctions of disjunctions of $N_8$ instantiations of the template.
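As an illustration of the template-grounding scheme, the following sketch (not the actual generator; `ground_template` and `combine` are names we introduce here) instantiates a two-hole template over a model's vocabulary and assembles the FE and FA shapes:

```python
import random

def ground_template(template, variables, n):
    """Instantiate a two-hole template such as "X implies Y" n times
    with randomly chosen distinct variables from the model's vocabulary."""
    insts = []
    for _ in range(n):
        x, y = random.sample(variables, 2)
        insts.append("(" + template.replace("X", x).replace("Y", y) + ")")
    return insts

def combine(category, insts):
    """Assemble the instantiations according to the property category."""
    if category == "FE":   # fully existential: a disjunction of instances
        return " or ".join(insts)
    if category == "FA":   # fully universal: a conjunction of instances
        return " and ".join(insts)
    raise ValueError("EA/AE use the nested disjunction/conjunction grouping")
```

The EA and AE categories would nest two calls to `combine`, grouping the instances into blocks as described above.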
**Figure 5.** A randomly generated input in MathSAT’s encoding language.
For each run, we used the generated input to execute the two experiments, E1 and E2. For each, we recorded the speedup $S_p = \frac{T_s}{T_{pm}}$, where $T_s$ and $T_{pm}$ were the times to do a task with sets of classical models and with partial models.
Results. The experiments\textsuperscript{1} did not show dramatic differences in speedup between the different property and return types. The biggest difference in speedup for E1 was recorded in the AE category between properties that return Maybe (21.65) and those that return False (29.13), for M-sized models with Large sets of concretizations. For E2, the biggest difference in speedup was recorded for S-sized models with XL sets of concretizations, for properties that return Maybe, between the EA (0.36) and AE categories (5.62). This indicates that property and return types are not the prime determinants for the performance of our approach.
On the other hand, the size of the partial model and the size of the set of concretizations had a much larger effect on the recorded variance of speedup. The ranges of recorded speedups for E1 and E2 are shown in Fig. 6(a, b), respectively. The plotted values are averages for the type of property and return value for each combination of size of model and size of set of concretizations. This is an indication that these parameters are the most important factors for studying the effectiveness of reasoning with partial models.
Fig. 6(a) shows that for verification and simple diagnostic tasks, such as producing a counter-example, there is a significant speedup from using partial models. The smallest speedups were observed in the inputs with S sets of concretizations (between 2.45 for S-sized models and 2.59 for L-sized models). The increase from these values was dramatic for M, L and XL sets of concretizations. For these categories, the smallest speedup was 19.72 for XL-sized models with M sets of concretizations and the biggest speedup was 30.49 for M-sized models with XL sets of concretizations.
For more complex tasks, such as property-driven refinement, the size of the set of concretizations, as shown in Fig. 6(b), seems to be the determinant parameter: the technique offers a speedup greater than 1 only for larger sets of concretizations. The best speedups were observed for XL sets of concretizations with smaller models (3.30 for S-sized models, 2.25 for M and 1.78 for L). This points to the conclusion that for more complex tasks, speedup is best for smaller models with larger sets of concretizations.
These observations lead us to the conclusion that, regarding RQ1 (feasibility), there is a significant net gain from using our approach for tasks like verification and counter-example guided diagnosis, whereas for tasks like property-driven refinement there are certain cases where it is preferable to use the classical approach.
Regarding RQ2 (sensitivity to degree of uncertainty), the observations point to the conclusion that the speedup offered by our approach is positively correlated to the degree of uncertainty. In fact, the greatest speedups were observed for inputs that had bigger sizes of sets of concretizations. For smaller levels of uncertainty, explicitly handling the set is more efficient.
These results, albeit preliminary, are encouraging and motivate further research, as they indicate that partial models are a useful representation that can offer significant gains compared to handling the entire set of concretizations.
Threats to Validity. The most important threat to validity stems from the use of experimental inputs that were randomly generated. The formulas that we created for properties were randomly grounded and were generated from a few arbitrarily defined templates.
Another threat to validity is induced by our choice to use a few exemplar values of the experimental parameters in order to manage the combinatorial explosion of options. It is evident that more experimentation is required, to generalize our results and further investigate effects of the experimental parameters that may not have been made obvious by our set of experiments.
To compensate for these threats to validity, we additionally conducted a Case Study, to triangulate our experimental results with experience from applying our technique to a real world application. The size of the models that we extracted from the Case Study fell in the XL category, with M and L sets of concretizations, whereas the properties were in the FE category and returned True and Maybe. The observed speedups (detailed in the next section) were consistent with our experimental results.
VI. CASE STUDY
Problem Description. In this case study, we aim to illustrate the following MDE software maintenance scenario: An engineer is given the task of fixing a software defect by modifying
\textsuperscript{1}All results available at http://www.cs.toronto.edu/~famelis/icse12.html
its UML model which will subsequently be used to construct the modified software (e.g., via a transformation). However, after creating the modifications to the model, the engineer finds that some model constraints are violated and thus the software cannot be constructed. For example, she may have modified a sequence diagram without properly synchronizing it with the structural aspects (e.g., classes) of the model. To help her resolve these constraint violations, she uses a tool that can automatically propose different model repair alternatives (e.g., [12]). Suppose the engineer is uncertain about which alternative to choose because their relative merits are unclear – and thus she would like to reason with the set of alternatives to help her make the choice and possibly even defer the decision until more information is available. In this case study, we apply the partiality techniques developed in this paper to show how they could help her in this scenario and to demonstrate the feasibility of the approach.
We use an open source project UMLet [23], which is a simple Java-based UML editor, as the software on which our user is requested to perform a maintenance task. This project has also been used by Van Der Straeten et al. for finding model inconsistencies with a model finder [24]. The goal of the maintenance task is to fix the following bug, referred to as Issue 10 on the online issue log [22]: “copied items should have a higher z-order priority”. That is, if the user copies and then pastes an item within the editor, it is not the topmost item if it overlaps with other existing items. Thus, any fix to the bug must satisfy the following property P1: “Each item that is pasted from the clipboard must have z-order = 0.” The paste functionality is implemented in UMLet by instantiating the class Paste and invoking its execute operation. Fig. 7 shows a fragment of the sequence diagram, generated from the code using the Borland TogetherJ tool [1] for execute, with the circled portion representing a bug fix we propose. The full sequence diagram has 12 objects, 53 messages and 8 statement blocks. Although UMLet has 214 classes in total, we restrict ourselves to a slice that covers the execute sequence, consisting of 6 classes (plus 5 Java library classes) with 44 operations. Of the 12 objects in the sequence diagram, 5 are instances of Java library classes and 7 are instances of UMLet classes. In the fragment shown, the for loop statement block iterates through every item in the clipboard (indexed by variable e) and adds it to the editor window (represented by the object pnl:DrawPanel). When an entity is added to a DrawPanel, the z-order is not set to 0 by default, causing the bug. In our proposed fix (shown in the dashed circle), we create a transient object positioner and tell it to moveToTop(e), using the Swing operation setComponentZOrder.
Inconsistencies and the Partial Model. Our fix is conceptually correct but it violates two consistency rules required for code generation:
1) ClasslessInstance: Every object must have a class. Possible repairs:
- **RC1**: Remove the object.
- **RC2**: (obj) Replace the object with an existing object obj that has a class.
- **RC3**: (cls) Assign the object to an existing class cls.
- **RC4**: Make the object an instance of a new class.
2) DanglingOperation: The operation used by a message in a sequence diagram must be an operation of the class of the receiving object. Possible repairs:
- **RD1**: Put the operation into the receiving object’s class.
- **RD2**: (op) Change the operation to the operation op that is already in the receiving object’s class.
- **RD3**: Remove the message.
ClasslessInstance and DanglingOperation are both based on [21]. In our case, the positioner object violates ClasslessInstance and the message with operation moveToTop violates DanglingOperation because it is not in positioner’s class (since positioner has no class).
If we apply all possible repairs, we get a set of alternative ways to fix the inconsistency, summarized as follows:
1) Positioner can be removed (RC1), can be replaced by one of the existing 7 objects (RC2), can be assigned to one of the existing 6 classes (RC3), or can be an instance of a new class (RC4).
2) The operation moveToTop can be added to the positioner’s class (RD1), can be changed to one of the other 44 operations depending on positioner’s class (RD2), or can be removed (RD3).
Only certain repairs are mutually compatible – for example, RC1 cannot be used with RD2 since the latter depends on positioner’s class but the former removes positioner entirely. There are 220 alternatives in total for all valid combinations.
If we construct a partial model to represent this set of alternatives, all the model elements in the proposed fix in Fig. 7 become Maybe since they are present in some alternatives and absent in others. Furthermore, based on the compatible combinations of repairs, the may formula portion of the partial model is expressed as
$$\phi_M = Choose(\{\phi_{RC1} \land \phi_{RD3}, \phi_{RC2(\cdot)} \land \phi_{RD1}, \phi_{RC2(\cdot)} \land \phi_{RD2(\text{setX})}, \ldots\})$$
where Choose( $\phi_1, \ldots, \phi_n$ ) is a logical function that holds when exactly one of $\{\phi_i\}_{1 \leq i \leq n}$ holds. Each of the formulas
for the individual repairs can be further expanded and expressed in terms of the UML 2 metamodel [16]. For example, $\phi_{RC2(e)}$ represents the condition that object positioner is replaced by object $e$ in Fig. 7, expressed as
$$\phi_{RC2(e)} = \text{covered}(\text{receiveEvent}(\text{Message}_{1.43})) = \text{lifeline}_e$$
which says that the lifeline covered by the receiving event of message 1.43 is the one for object $e$.
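The Choose connective in $\phi_M$ is an “exactly one” constraint. A minimal sketch of its semantics, together with a standard pairwise clause-level encoding one might hand to a SAT solver (our naming, not the paper's implementation):

```python
from itertools import combinations

def choose(*phis):
    """Choose(phi_1, ..., phi_n): holds iff exactly one argument is true."""
    return sum(bool(p) for p in phis) == 1

def choose_clauses(variables):
    """A standard clause-level 'exactly one' encoding for a SAT solver:
    one at-least-one clause plus pairwise at-most-one clauses."""
    clauses = [list(variables)]                   # at least one holds
    for a, b in combinations(variables, 2):       # no two hold together
        clauses.append([f"~{a}", f"~{b}"])
    return clauses
```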
**Analysis.** Having defined a partial model whose set of concretizations are the possible alternative ways of making our bug fix consistent with the required rules, we can use the techniques discussed in Sec. IV to reason about the alternatives using properties. The first question is whether any of the alternatives “break” the paste functionality. For example, consider the property P2: “Whenever an item is pasted, a new item is created in the editor window”, which should hold if the paste functionality is implemented correctly. We checked P2 against the partial model by encoding it into a propositional formula. We then used property-driven refinement (OP4) to refine the partial model with P1, removing the concretizations in which the z-order is never set to 0. In the resulting partial model, the moveToTop message is never absent, and thus it is necessary for P1 to hold.
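For contrast, the classical approach that the partial model replaces (evaluating a property on every concretization explicitly, merging the verdicts into True/False/Maybe, and filtering for refinement) can be sketched as:

```python
def check_property(concretizations, prop):
    """Return 'True' if prop holds in every concretization,
    'False' if it holds in none, and 'Maybe' otherwise."""
    verdicts = {prop(c) for c in concretizations}
    if verdicts == {True}:
        return "True"
    if verdicts == {False}:
        return "False"
    return "Maybe"

def refine(concretizations, prop):
    """Property-driven refinement: keep only concretizations satisfying prop."""
    return [c for c in concretizations if prop(c)]
```

Modeling each alternative repair simply as the set of elements it keeps, refining with P1 leaves only the alternatives containing the moveToTop message, after which P1 checks to True.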
VII. RELATED WORK

Most approaches to modeling variability keep the expressions of variability in a separate feature model, but some incorporate these directly into the model using notational extensions in the metamodel [13]. Featured Transition Systems (FTSs) [4] are most closely related to the notion of partial models presented in this paper. FTSs encode a set of products by annotating transitions with specific features from a feature diagram (much like our may formula), and differ from MTSs and DMTSs in that they support precise representation of and reasoning with a set of models.

Our approach is distinct from related work in a number of important ways. First, it applies to any kind of modeling language (not just behavioral models) that can be defined using a metamodel. Second, our viewpoint is the comprehensive handling of uncertainty rather than just reasoning over variability. In this context, partial models support changes in the level of uncertainty, via tasks such as property-driven refinement (OP4) and, more generally, uncertainty-removing refinement [20]. Third, partial models are first-class development artifacts that can be manipulated throughout the software engineering lifecycle.
VIII. CONCLUSION AND FUTURE WORK
This paper presented an approach for reasoning in the presence of uncertainty. We showed how to construct partial models to represent sets of alternatives and how to use them for reasoning. We evaluated the approach by running experiments using randomly generated inputs and triangulated our results with a case study dealing with alternative repairs to inconsistency for a real world software project. Our evaluation, while preliminary, showed that in the presence of high degrees of uncertainty, using partial models offers significant improvements for reasoning tasks.
Our work is part of a broader research agenda, outlined in [7]. Our next steps include studying how partial models can be used as first-class development items. In particular, we want to investigate model transformation of partial models, as well as the effects of transformation on the properties of the concretizations.
Developing a Linux Kernel module using RDMA for GPUDirect
Application Guide
Table of Contents
Chapter 1. Overview
1.1. How GPUDirect RDMA Works
1.2. Standard DMA Transfer
1.3. GPUDirect RDMA Transfers
1.4. Changes in CUDA 6.0
1.5. Changes in CUDA 7.0
1.6. Changes in CUDA 8.0
1.7. Changes in CUDA 10.1
1.8. Changes in CUDA 11.2
Chapter 2. Design Considerations
2.1. Lazy Unpinning Optimization
2.2. Registration Cache
2.3. Unpin Callback
2.4. Supported Systems
2.5. PCI BAR sizes
2.6. Tokens Usage
2.7. Synchronization and Memory Ordering
Chapter 3. How to Perform Specific Tasks
3.1. Displaying GPU BAR space
3.2. Pinning GPU memory
3.3. Unpinning GPU memory
3.4. Handling the free callback
3.5. Buffer ID Tag Check for A Registration Cache
3.6. Linking a Kernel Module against nvidia.ko
Chapter 4. References
4.1. Basics of UVA CUDA Memory Management
4.2. Userspace API
4.3. Kernel API
4.4. Porting to Tegra
List of Figures
Figure 1. GPUDirect RDMA within the Linux Device Driver Model
Figure 2. CUDA VA Space Addressing
Chapter 1. Overview
GPUDirect RDMA is a technology introduced in Kepler-class GPUs and CUDA 5.0 that enables a direct path for data exchange between the GPU and a third-party peer device using standard features of PCI Express. Examples of third-party devices are: network interfaces, video acquisition devices, storage adapters.
GPUDirect RDMA is available on both Tesla and Quadro GPUs.
A number of limitations can apply, the most important being that the two devices must share the same upstream PCI Express root complex. Some of the limitations depend on the platform used and could be lifted in current/future products.
A few straightforward changes must be made to device drivers to enable this functionality with a wide range of hardware devices. This document introduces the technology and describes the steps necessary to enable a GPUDirect RDMA connection to NVIDIA GPUs on Linux.
Figure 1. GPUDirect RDMA within the Linux Device Driver Model
1.1. How GPUDirect RDMA Works
When setting up GPUDirect RDMA communication between two peers, all physical addresses are the same from the PCI Express devices’ point of view. Within this physical address space are linear windows called PCI BARs. Each device has at most six BAR registers, so it can have up to six active 32-bit BAR regions; 64-bit BARs consume two BAR registers. The PCI Express device issues reads and writes to a peer device’s BAR addresses in the same way that they are issued to system memory.
Traditionally, resources like BAR windows are mapped to user or kernel address space using the CPU’s MMU as memory mapped I/O (MMIO) addresses. However, because current operating systems don’t have sufficient mechanisms for exchanging MMIO regions between drivers, the NVIDIA kernel driver exports functions to perform the necessary address translations and mappings.
To add GPUDirect RDMA support to a device driver, a small amount of address mapping code within the kernel driver must be modified. This code typically resides near existing calls to `get_user_pages()`.
The APIs and control flow involved with GPUDirect RDMA are very similar to those used with standard DMA transfers.
See Supported Systems and PCI BAR sizes for more hardware details.
### 1.2. Standard DMA Transfer
First, we outline a standard DMA Transfer initiated from userspace. In this scenario, the following components are present:
- Userspace program
- Userspace communication library
- Kernel driver for the device interested in doing DMA transfers
The general sequence is as follows:
1. The userspace program requests a transfer via the userspace communication library. This operation takes a pointer to data (a virtual address) and a size in bytes.
2. The communication library must make sure the memory region corresponding to the virtual address and size is ready for the transfer. If this is not the case already, it has to be handled by the kernel driver (next step).
3. The kernel driver receives the virtual address and size from the userspace communication library. It then asks the kernel to translate the virtual address range to a list of physical pages and make sure they are ready to be transferred to or from. We will refer to this operation as pinning the memory.
4. The kernel driver uses the list of pages to program the physical device’s DMA engine[s].
5. The communication library initiates the transfer.
6. After the transfer is done, the communication library should eventually clean up any resources used to pin the memory. We will refer to this operation as unpinning the memory.
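The six steps above can be sketched as follows. This is an illustrative mock in Python, not kernel code; the class and function names are our own, and the page-list computation merely mimics what virtual-to-physical translation produces.

```python
class MockKernelDriver:
    """Illustrative stand-in for the kernel driver's role in the
    sequence above; not real kernel code."""
    PAGE = 4096

    def pin(self, vaddr, size):
        # Steps 2-3: translate the virtual range to a list of
        # page-aligned physical addresses and lock them for DMA.
        first = vaddr // self.PAGE
        last = (vaddr + size - 1) // self.PAGE
        return [p * self.PAGE for p in range(first, last + 1)]

    def program_dma(self, pages):
        # Step 4: program the device's DMA engine with the page list.
        return {"descriptors": pages, "ready": True}

def dma_transfer(driver, vaddr, size):
    pages = driver.pin(vaddr, size)      # pin the memory
    engine = driver.program_dma(pages)   # program the DMA engine
    assert engine["ready"]               # step 5: initiate the transfer
    return pages                         # step 6: caller unpins later
```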
1.3. GPUDirect RDMA Transfers
For the communication to support GPUDirect RDMA transfers some changes to the sequence above have to be introduced. First of all, two new components are present:
- Userspace CUDA library
- NVIDIA kernel driver
As described in Basics of UVA CUDA Memory Management, programs using the CUDA library have their address space split between GPU and CPU virtual addresses, and the communication library has to implement two separate paths for them.
The userspace CUDA library provides a function that lets the communication library distinguish between CPU and GPU addresses. Moreover, for GPU addresses it returns additional metadata that is required to uniquely identify the GPU memory represented by the address. See Userspace API for details.
The difference between the paths for CPU and GPU addresses is in how the memory is pinned and unpinned. For CPU memory this is handled by built-in Linux Kernel functions (get_user_pages() and put_page()). However, in the GPU memory case the pinning and unpinning has to be handled by functions provided by the NVIDIA Kernel driver. See Pinning GPU memory and Unpinning GPU memory for details.
Some hardware caveats are explained in Supported Systems and PCI BAR sizes.
1.4. Changes in CUDA 6.0
In this section we briefly list the changes that are available in CUDA 6.0:
- CUDA peer-to-peer tokens are no longer mandatory. For memory buffers owned by the calling process (which is typical) tokens can be replaced by zero (0) in the kernel-mode function nvidia_p2p_get_pages(). This new feature is meant to make it easier for existing third party software stacks to adopt RDMA for GPUDirect.
- As a consequence of the change above, a new API cuPointerSetAttribute() has been introduced. This API must be used to register any buffer for which no peer-to-peer tokens are used. It is necessary to ensure correct synchronization behavior of the CUDA API when operation on memory which may be read by RDMA for GPUDirect. Failing to use it in these cases may cause data corruption. See changes in Tokens Usage.
- cuPointerGetAttribute() has been extended to return a globally unique numeric identifier, which in turn can be used by lower-level libraries to detect buffer reallocations happening in user-level code (see Userspace API). It provides an alternative method to detect reallocations when intercepting CUDA allocation and deallocation APIs is not possible.
- The kernel-mode memory pinning feature has been extended to work in combination with Multi-Process Service (MPS).
Caveats as of CUDA 6.0:
CUDA Unified Memory is not explicitly supported in combination with GPUDirect RDMA. While the page table returned by `nvidia_p2p_get_pages()` is valid for managed memory buffers and provides a mapping of GPU memory at any given moment in time, the GPU device copy of that memory may be incoherent with the writable copy of the page which is not on the GPU. Using the page table in this circumstance may result in accessing stale data, or data loss, because of a DMA write access to device memory that is subsequently overwritten by the Unified Memory run-time. `cuPointerGetAttribute()` may be used to determine if an address is being managed by the Unified Memory runtime.
Every time a device memory region is pinned, new GPU BAR space is allocated unconditionally, even when pinning overlapping or duplicate device memory ranges, i.e. there is no attempt at reusing mappings. This behavior has been changed since CUDA 7.0.
### 1.5. Changes in CUDA 7.0
In this section we briefly list the changes that are available in CUDA 7.0:
- On the IBM POWER8 platform, GPUDirect RDMA is not supported, though it is not explicitly disabled.
- GPUDirect RDMA is not guaranteed to work on any given ARM64 platform.
- Management of GPU BAR mappings has been improved with respect to CUDA 6.0. Now when a device memory region is pinned, GPU BAR space might be shared with pre-existing mappings. This is the case for example when pinning overlapping or duplicate device memory ranges. As a consequence, when unpinning a region, its whole BAR space will not be returned if even only a subset of its BAR space is shared.
- The new `cuPointerGetAttributes()` API has been introduced. It can be useful when retrieving multiple attributes for the same buffer, e.g. in MPI when examining a new buffer.
- `cudaPointerGetAttributes()` is now faster since it leverages `cuPointerGetAttributes()` internally.
- A new sample code, `samples/7_CUDALibraries/cuHook`, has been added in CUDA 6.5. It can be used as a template for implementing an interception framework for CUDA memory de/allocation APIs.
### 1.6. Changes in CUDA 8.0
In this section we briefly list the changes that are available in CUDA 8.0:
- The `nvidia_p2p_page_table` struct has been extended to include a new member, without breaking binary compatibility. The minor version in the `NVIDIA_P2P_PAGE_TABLE_VERSION` macro has been updated accordingly.
- The `nvidia_p2p_dma_mapping` structure, the `nvidia_p2p_dma_map_pages()` and `nvidia_p2p_dma_unmap_pages()` APIs, the `NVIDIA_P2P_DMA_MAPPING_VERSION` macro have been introduced. These APIs can be used by third party device drivers to map and unmap the GPU BAR pages into their device’s I/O address space. The main use case is on platforms where the I/O addresses of PCIe resources, used for PCIe peer-to-peer
transactions, are different from the physical addresses used by the CPU to access those same resources. See this link for an example of code using these new APIs.
- The NVIDIA_P2P_PAGE_TABLE_VERSION_COMPATIBLE and NVIDIA_P2P_DMA_MAPPING_VERSION_COMPATIBLE macros have been introduced. These are meant to be called by third-party device drivers to check for runtime binary compatibility, for example in case of changes to the data structure’s layout.
- On the IBM POWER8 platform, when using the above APIs, GPUDirect RDMA is reported to work correctly restricted to the case where the GPU and the third party device are connected through a supported PCIe switch.
1.7. Changes in CUDA 10.1
GPUDirect RDMA is supported on Jetson AGX Xavier platform. See Porting to Tegra section for details.
1.8. Changes in CUDA 11.2
GPUDirect RDMA is supported on Drive AGX Xavier Linux based platform. See Porting to Tegra section for details.
Chapter 2. Design Considerations
When designing a system to utilize GPUDirect RDMA, there are a number of considerations which should be taken into account.
2.1. Lazy Unpinning Optimization
Pinning GPU device memory in BAR is an expensive operation, taking up to milliseconds. Therefore the application should be designed in a way to minimize that overhead.
The most straightforward implementation using GPUDirect RDMA would pin memory before each transfer and unpin it right after the transfer is complete. Unfortunately, this would perform poorly in general, as pinning and unpinning memory are expensive operations. The rest of the steps required to perform an RDMA transfer, however, can be performed quickly without entering the kernel (the DMA list can be cached and replayed using MMIO registers/command lists).
Hence, lazily unpinning memory is key to a high-performance RDMA implementation. This means keeping the memory pinned even after the transfer has finished, taking advantage of the fact that the same memory region is likely to be used for future DMA transfers; lazy unpinning thus saves pin/unpin operations.
An example implementation of lazy unpinning would keep a set of pinned memory regions and only unpin some of them (for example the least recently used one) if the total size of the regions reached some threshold, or if pinning a new region failed because of BAR space exhaustion [see PCI BAR sizes].
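The eviction policy described above can be sketched with plain data structures. The snippet below is an illustrative host-side model, not driver code: `pin_region`, the `BAR_BUDGET` value, and the `pin_calls`/`unpin_calls` counters are hypothetical stand-ins for real `nvidia_p2p_get_pages()`/`nvidia_p2p_put_pages()` calls and the actual BAR budget.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_REGIONS 8
#define BAR_BUDGET  (4 * 64 * 1024)  /* pretend BAR budget: 4 x 64KB */

struct region { uint64_t start; size_t len; int in_use; unsigned last_used; };

static struct region cache[MAX_REGIONS];
static size_t total_pinned;
static unsigned clock_tick;
int pin_calls, unpin_calls;  /* counters standing in for real pin/unpin work */

static void evict_lru(void)
{
    int lru = -1;
    for (int i = 0; i < MAX_REGIONS; i++)
        if (cache[i].in_use && (lru < 0 || cache[i].last_used < cache[lru].last_used))
            lru = i;
    if (lru >= 0) {
        unpin_calls++;               /* real code: nvidia_p2p_put_pages() */
        total_pinned -= cache[lru].len;
        cache[lru].in_use = 0;
    }
}

/* Ensure a pinned region covering [start, start+len) exists, pinning lazily. */
void pin_region(uint64_t start, size_t len)
{
    clock_tick++;
    for (int i = 0; i < MAX_REGIONS; i++)
        if (cache[i].in_use && cache[i].start == start && cache[i].len >= len) {
            cache[i].last_used = clock_tick;  /* cache hit: no pin needed */
            return;
        }
    while (total_pinned + len > BAR_BUDGET)   /* stay under the BAR budget */
        evict_lru();
    for (int i = 0; i < MAX_REGIONS; i++)
        if (!cache[i].in_use) {
            pin_calls++;             /* real code: nvidia_p2p_get_pages() */
            cache[i] = (struct region){ start, len, 1, clock_tick };
            total_pinned += len;
            return;
        }
}
```

Repeated use of the same region costs a single pin; only when the budget is exceeded does an unpin occur, which is exactly the behavior a lazy-unpinning cache aims for.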
2.2. Registration Cache
Communication middleware often employs an optimization called a registration cache, or pin-down cache, to minimize pinning overhead. Typically it already exists for host memory, implementing lazy unpinning, LRU de-registration, etc. For networking middleware, such caches are usually implemented in user-space, as they are used in combination with hardware capable of user-mode message injection. CUDA UVA memory address layout enables GPU memory pinning to work with these caches by taking into account just a few design considerations. In the CUDA environment, this is even more important as the amount of memory which can be pinned may be significantly more constrained than for host memory.
As the GPU BAR space is typically mapped using 64KB pages, it is more resource efficient to maintain a cache of regions rounded to the 64KB boundary. Even more so because two memory areas that fall within the same 64KB-aligned region would allocate and return the same BAR mapping.
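The rounding arithmetic can be sketched as follows; `region_start` and `region_size` are illustrative names for what a registration cache keyed on 64KB-aligned regions would compute:

```c
#include <stdint.h>

#define BAR_PAGE_SHIFT 16                       /* GPU BAR mapped in 64KB pages */
#define BAR_PAGE_SIZE  ((uint64_t)1 << BAR_PAGE_SHIFT)
#define BAR_PAGE_MASK  (~(BAR_PAGE_SIZE - 1))

/* Base of the 64KB-aligned region containing addr. */
uint64_t region_start(uint64_t addr)
{
    return addr & BAR_PAGE_MASK;
}

/* Size of [addr, addr+len) once rounded out to 64KB boundaries. */
uint64_t region_size(uint64_t addr, uint64_t len)
{
    uint64_t start = region_start(addr);
    return ((addr + len - start) + BAR_PAGE_SIZE - 1) & BAR_PAGE_MASK;
}
```

Two addresses that share the same `region_start` would resolve to the same BAR mapping, which is why caching at this granularity avoids redundant pins.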
Registration caches usually rely on the ability to intercept deallocation events happening in the user application, so that they can unpin the memory and free important HW resources, e.g. on the network card. To implement a similar mechanism for GPU memory, an implementation has two options:
- Instrument all CUDA allocation and deallocation APIs.
- Use a tag check function to track deallocation and reallocation. See Buffer ID Tag Check for A Registration Cache.
There is a sample application, 7_CUDALibraries/cuHook, showing how to intercept calls to CUDA APIs at run-time, which can be used to detect GPU memory de/allocations.
While intercepting CUDA APIs is beyond the scope of this document, an approach to performing tag checks is available starting with CUDA 6.0. It involves the usage of the CU_POINTER_ATTRIBUTE_BUFFER_ID attribute in cuPointerGetAttribute() (or cuPointerGetAttributes() if more attributes are needed) to detect memory buffer deallocations or reallocations. The API will return a different ID value in case of reallocation or an error if the buffer address is no longer valid. See Userspace API for API usage.
Note: Using tag checks introduces an extra call into the CUDA API on each memory buffer use, so this approach is most appropriate when the additional latency is not a concern.
### 2.3. Unpin Callback
When a third party device driver pins the GPU pages with nvidia_p2p_get_pages() it must also provide a callback function that the NVIDIA driver will call if it needs to revoke access to the mapping. **This callback occurs synchronously**, giving the third party driver the opportunity to clean up and remove any references to the pages in question [i.e., wait for outstanding DMAs to complete]. **The user callback function may block for a few milliseconds**, although it is recommended that the callback complete as quickly as possible. Care has to be taken not to introduce deadlocks as waiting within the callback for the GPU to do anything is not safe.
The callback must call nvidia_p2p_free_page_table() (not nvidia_p2p_put_pages()) to free the memory pointed to by page_table. The corresponding mapped memory areas will only be unmapped by the NVIDIA driver after returning from the callback.
Note that the callback will be invoked in two scenarios:
- If the userspace program explicitly deallocates the corresponding GPU memory, e.g. cuMemFree, cuCtxDestroy, etc. before the third party kernel driver has a chance to unpin the memory with nvidia_p2p_put_pages().
- As a consequence of an early exit of the process.
In the latter case there can be tear-down ordering issues between closing the file descriptor of the third party kernel driver and that of the NVIDIA kernel driver. If the file descriptor for the NVIDIA kernel driver is closed first, the `nvidia_p2p_put_pages()` callback will be invoked.
A proper software design is important as the NVIDIA kernel driver will protect itself from reentrancy issues with locks before invoking the callback. The third party kernel driver will almost certainly take similar actions, so dead-locking or live-locking scenarios may arise if careful consideration is not taken.
### 2.4. Supported Systems
#### General remarks
Even though the only theoretical requirement for GPUDirect RDMA to work between a third-party device and an NVIDIA GPU is that they share the same root complex, there exist bugs (mostly in chipsets) causing it to perform badly, or not work at all in certain setups.
We can distinguish between three situations, depending on what is on the path between the GPU and the third-party device:
- PCIe switches only
- single CPU/IOH
- CPU/IOH <-> QPI/HT <-> CPU/IOH
The first situation, where there are only PCIe switches on the path, is optimal and yields the best performance. The second one, where a single CPU/IOH is involved, works, but yields worse performance (especially peer-to-peer read bandwidth has been shown to be severely limited on some processor architectures). Finally, the third situation, where the path traverses a QPI/HT link, may be extremely performance-limited or even not work reliably.
**Tip:** `lspci` can be used to check the PCI topology:
```
$ lspci -t
```
#### Platform support
On the IBM POWER8 platform, GPUDirect RDMA and P2P are not supported, but they are not explicitly disabled; they may not work at run-time.
GPUDirect RDMA is supported on Jetson AGX Xavier platform starting from CUDA 10.1 and on Drive AGX Xavier Linux based platforms from CUDA 11.2. See section *Porting to Tegra* for details. On ARM64, the necessary peer-to-peer functionality depends on both the hardware and the software of the particular platform. So while GPUDirect RDMA is not explicitly disabled on non-Jetson and non-Drive platforms, there are no guarantees that it will be fully functional.
#### IOMMUs
GPUDirect RDMA currently relies upon all physical addresses being the same from the different PCI devices’ point of view. This makes it incompatible with IOMMUs performing any form of translation other than 1:1, hence they must be disabled or configured for pass-through translation for GPUDirect RDMA to work.
2.5. PCI BAR sizes
PCI devices can ask the OS/BIOS to map a region of physical address space to them. These regions are commonly called BARs. NVIDIA GPUs currently expose multiple BARs, and some of them can back arbitrary device memory, making GPUDirect RDMA possible.
The maximum BAR size available for GPUDirect RDMA differs from GPU to GPU. For example, currently the smallest available BAR size on Kepler class GPUs is 256 MB. Of that, 32MB are currently reserved for internal use. These sizes may change.
On some Tesla-class GPUs a large BAR feature is enabled, e.g. BAR1 size is set to 16GB or larger. Large BARs can pose a problem for the BIOS, especially on older motherboards, related to compatibility support for 32bit operating systems. On those motherboards the bootstrap can stop during the early POST phase, or the GPU may be misconfigured and so unusable. If this appears to be occurring it might be necessary to enable some special BIOS feature to deal with the large BAR issue. Please consult your system vendor for more details regarding large BAR support.
2.6. Tokens Usage
As can be seen in Userspace API and Kernel API, one method for pinning and unpinning memory requires two tokens in addition to the GPU virtual address. These tokens, p2pToken and vaSpaceToken, are necessary to uniquely identify a GPU VA space. A process identifier alone does not identify a GPU VA space.
The tokens are consistent within a single CUDA context [i.e., all memory obtained through cudaMalloc() within the same CUDA context will have the same p2pToken and vaSpaceToken]. However, a given GPU virtual address need not map to the same context/GPU for its entire lifetime. As a concrete example:
```c
cudaSetDevice(0);
ptr0 = cudaMalloc();
cuPointerGetAttribute(&return_data, CU_POINTER_ATTRIBUTE_P2P_TOKENS, ptr0);
// Returns [p2pToken = 0xabcd, vaSpaceToken = 0x1]
cudaFree(ptr0);
cudaSetDevice(1);
ptr1 = cudaMalloc();
assert(ptr0 == ptr1);
// The CUDA driver is free (although not guaranteed) to reuse the VA,
// even on a different GPU
cuPointerGetAttribute(&return_data, CU_POINTER_ATTRIBUTE_P2P_TOKENS, ptr0);
// Returns [p2pToken = 0x0123, vaSpaceToken = 0x2]
```
That is, the same address, when passed to cuPointerGetAttribute, may return different tokens at different times during the program’s execution. Therefore, the third party communication library must call cuPointerGetAttribute() for every pointer it operates on.
Security implications
The two tokens act as an authentication mechanism for the NVIDIA kernel driver. If you know the tokens, you can map the address space corresponding to them, and the NVIDIA kernel driver doesn’t perform any additional checks. The 64bit p2pToken is randomized to prevent it from being guessed by an adversary.
When no tokens are used, the NVIDIA driver limits the Kernel API to the process which owns the memory allocation.
2.7. Synchronization and Memory Ordering
GPUDirect RDMA introduces a new independent GPU data flow path exposed to third party devices and it is important to understand how these devices interact with the GPU’s relaxed memory model.
- Properly registering a BAR mapping of CUDA memory is required for that mapping to remain consistent with CUDA APIs operations on that memory.
- Only CUDA synchronization and work submission APIs provide memory ordering of GPUDirect RDMA operations.
Registration for CUDA API Consistency
Registration is necessary to ensure the CUDA API memory operations visible to a BAR mapping happen before the API call returns control to the calling CPU thread. This provides a consistent view of memory to a device using GPUDirect RDMA mappings when invoked after a CUDA API in the thread. This is a strictly more conservative mode of operation for the CUDA API and disables optimizations, thus it may negatively impact performance.
This behavior is enabled on a per-allocation granularity, either by calling cuPointerSetAttribute() with the CU_POINTER_ATTRIBUTE_SYNC_MEMOPS attribute or by retrieving the p2p tokens for a buffer when using the legacy path. See Userspace API for more details.
An example situation would be a Read-after-Write dependency between a cuMemcpyDtoD() and a subsequent GPUDirect RDMA read operation on the destination of the copy. As an optimization, the device-to-device memory copy typically returns asynchronously to the calling thread after queuing the copy to the GPU scheduler. However, in this circumstance that would lead to inconsistent data being read via the BAR mapping, so this optimization is disabled and the copy is completed before the CUDA API returns.
**CUDA APIs for Memory Ordering**
Only CPU initiated CUDA APIs provide ordering of GPUDirect memory operations as observed by the GPU. That is, despite a third party device having issued all PCIe transactions, a running GPU kernel or copy operation may observe stale data or data that arrives out-of-order until a subsequent CPU initiated CUDA work submission or synchronization API. To ensure that memory updates are visible to CUDA kernels or copies, an implementation should ensure that all writes to the GPU BAR happen before control is returned to the CPU thread which will invoke the dependent CUDA API.
An example situation for a network communication scenario is when a network RDMA write operation is completed by the third party network device and the data is written to the GPU BAR mapping. Although reading back the written data, either through the GPU BAR or via a CUDA memory copy operation, will return the newly written data, a GPU kernel running concurrently with that network write might observe stale data, partially written data, or data written out of order.
In short, a GPU kernel is wholly inconsistent with concurrent RDMA for GPUDirect operations and accessing the memory overwritten by the third party device in such a situation would be considered a data race. To resolve this inconsistency and remove the data race the DMA write operation must complete with respect to the CPU thread which will launch the dependent GPU kernel.
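The required discipline can be illustrated on the host with C11 acquire/release semantics. The sketch below is an analogy only: `rdma_writer` stands in for the third-party device's DMA, and `wait_and_consume` for the CPU thread that submits the dependent CUDA work after completion is observed (both names are hypothetical; no CUDA calls are made).

```c
#include <stdatomic.h>

/* Stand-in for a GPU BAR mapping written by the third-party device. */
static int bar_buffer[4];
static atomic_int dma_done;  /* completion flag, e.g. polled from a completion queue */

/* Models the third-party device: write the payload, then publish completion
 * with release semantics so all writes happen-before the flag is observed. */
void rdma_writer(void)
{
    for (int i = 0; i < 4; i++)
        bar_buffer[i] = i + 1;
    atomic_store_explicit(&dma_done, 1, memory_order_release);
}

/* Models the CPU thread: only after observing completion (acquire) does it
 * submit the dependent work, here just reading the buffer back. */
int wait_and_consume(void)
{
    while (!atomic_load_explicit(&dma_done, memory_order_acquire))
        ;  /* poll for DMA completion before launching the dependent kernel */
    int sum = 0;
    for (int i = 0; i < 4; i++)
        sum += bar_buffer[i];
    return sum;
}
```

The key point mirrors the text: the DMA write must complete with respect to the CPU thread (here, the release/acquire pair) before that thread launches the dependent GPU kernel.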
Chapter 3. How to Perform Specific Tasks
3.1. Displaying GPU BAR space
Starting in CUDA 6.0 the NVIDIA SMI utility provides the capability to dump BAR1 memory usage. It can be used to understand the application usage of BAR space, the primary resource consumed by GPUDirect RDMA mappings.
```
$ nvidia-smi -q
...
    BAR1 Memory Usage
        Total                       : 256 MiB
        Used                        : 2 MiB
        Free                        : 254 MiB
...
```
GPU memory is pinned in fixed size chunks, so the amount of space reflected here might be unexpected. In addition, a certain amount of BAR space is reserved by the driver for internal use, so not all available memory may be usable via GPUDirect RDMA. Note that the same ability is offered programmatically through the `nvmlDeviceGetBAR1MemoryInfo()` NVML API.
3.2. Pinning GPU memory
1. Correct behavior requires using `cuPointerSetAttribute()` on the memory address to enable proper synchronization behavior in the CUDA driver. See section Synchronization and Memory Ordering.
```c
void pin_buffer(void *address, size_t size)
{
unsigned int flag = 1;
CUresult status = cuPointerSetAttribute(&flag,
CU_POINTER_ATTRIBUTE_SYNC_MEMOPS, address);
if (CUDA_SUCCESS == status) {
// GPU path
pass_to_kernel_driver(address, size);
} else {
// CPU path
// ...
}
}
```
This is required so that the GPU memory buffer is treated in a special way by the CUDA driver, so that CUDA memory transfers are guaranteed to always be synchronous with respect to the host. See Userspace API for details on cuPointerSetAttribute().
2. In the kernel driver, invoke nvidia_p2p_get_pages().
```c
// for boundary alignment requirement
#define GPU_BOUND_SHIFT 16
#define GPU_BOUND_SIZE ((u64)1 << GPU_BOUND_SHIFT)
#define GPU_BOUND_OFFSET (GPU_BOUND_SIZE-1)
#define GPU_BOUND_MASK (~GPU_BOUND_OFFSET)
struct kmd_state {
nvidia_p2p_page_table_t *page_table;
// ...
};
int kmd_pin_memory(struct kmd_state *my_state, void *address, size_t size) {
    if (!size)
        return -EINVAL;
    // do proper alignment, as required by the NVIDIA kernel driver
    u64 virt_start = (u64)address & GPU_BOUND_MASK;
    u64 pin_size = (u64)address + size - virt_start;
    int ret = nvidia_p2p_get_pages(0, 0, virt_start, pin_size,
                                   &my_state->page_table,
                                   free_callback, my_state);
    if (ret == 0) {
        // Successfully pinned; page_table can be accessed
    } else {
        // Pinning failed
    }
    return ret;
}
```
Note how the start address is aligned to a 64KB boundary before calling the pinning functions.
If the function succeeds the memory has been pinned and the page_table entries can be used to program the device’s DMA engine. See Kernel API for details on nvidia_p2p_get_pages().
### 3.3. Unpinning GPU memory
In the kernel driver, invoke nvidia_p2p_put_pages().
```c
void unpin_memory(void *address, size_t size, nvidia_p2p_page_table_t *page_table) {
nvidia_p2p_put_pages(0, 0, address, size, page_table);
}
```
See Kernel API for details on nvidia_p2p_put_pages().
Starting with CUDA 6.0, zeros should be used as the token parameters. Note that nvidia_p2p_put_pages() must be called from within the same process context as the one from which the corresponding nvidia_p2p_get_pages() call was issued.
3.4. Handling the free callback
1. The NVIDIA kernel driver invokes `free_callback(data)` as specified in the `nvidia_p2p_get_pages()` call if it needs to revoke the mapping. See Kernel API and Unpin Callback for details.
2. The callback waits for pending transfers and then cleans up the page table allocation.
```c
void free_callback(void *data)
{
    struct kmd_state *state = data;
    wait_for_pending_transfers(state);
    nvidia_p2p_free_page_table(state->page_table);
}

void wait_for_pending_transfers(struct kmd_state *state)
{
    // Wait for pending transfers
}
```
3. The NVIDIA kernel driver handles the unmapping so `nvidia_p2p_put_pages()` should not be called.
3.5. Buffer ID Tag Check for A Registration Cache
Remember that a solution built around Buffer ID tag checking is not recommended for latency sensitive implementations. Instead, instrumentation of CUDA allocation and deallocation APIs to provide callbacks to the registration cache is recommended, removing tag checking overhead from the critical path.
1. The first time a device memory buffer is encountered and recognized as not yet pinned, the pinned mapping is created and the associated buffer ID is retrieved and stored together in the cache entry. The `cuMemGetAddressRange()` function can be used to obtain the size and starting address for the whole allocation, which can then be used to pin it. As `nvidia_p2p_get_pages()` will need a pointer aligned to 64K, it is useful to directly align the cached address. Also, as the BAR space is currently mapped in chunks of 64KB, it is more resource efficient to round the whole pinning to 64KB.
```c
// struct buf represents an entry of the registration cache
struct buf {
CUdeviceptr pointer;
size_t size;
CUdeviceptr aligned_pointer;
size_t aligned_size;
int is_pinned;
uint64_t id; // buffer id obtained right after pinning
};
```

2. Once created, every time a registration cache entry is used, it must first be checked for validity. One way to do this is to use the Buffer ID provided by CUDA as a tag to check for deallocation or reallocation.

```c
int buf_is_gpu_pinning_valid(struct buf* buf) {
    uint64_t buffer_id;
    int retcode;
    assert(buf->is_pinned);
    // get the current buffer id
    retcode = cuPointerGetAttribute(&buffer_id,
                                    CU_POINTER_ATTRIBUTE_BUFFER_ID,
                                    buf->pointer);
    if (CUDA_ERROR_INVALID_VALUE == retcode) {
        // the device pointer is no longer valid,
        // it could have been deallocated
        return ERROR_INVALIDATED;
    } else if (CUDA_SUCCESS != retcode) {
        // handle more serious errors here
        return ERROR_SERIOUS;
    }
    if (buf->id != buffer_id) {
        // the original buffer has been deallocated and the cached mapping
        // should be invalidated and the buffer re-pinned
        return ERROR_INVALIDATED;
    }
    return 0;
}
```
When the buffer identifier changes, the corresponding memory buffer has been reallocated, so the corresponding kernel-space page table is no longer valid. In this case the kernel-space `nvidia_p2p_get_pages()` callback would have been invoked. Thus the Buffer IDs provide a tag to keep the pin-down cache consistent with the kernel-space page table without requiring the kernel driver to up-call into user-space.

If `CUDA_ERROR_INVALID_VALUE` is returned from `cuPointerGetAttribute()`, the program should assume that the memory buffer has been deallocated or is otherwise not a valid GPU memory buffer.
3. In both cases, the corresponding cache entry must be invalidated.
```c
// in the registration cache code
if (buf->is_pinned && !buf_is_gpu_pinning_valid(buf)) {
    regcache_invalidate_entry(buf);
    pin_buffer(buf);
}
```
3.6. Linking a Kernel Module against nvidia.ko
1. Run the extraction script:
   ```
   ./NVIDIA-Linux-x86_64-<version>.run -x
   ```
   This extracts the NVIDIA driver and kernel wrapper.
2. Navigate to the output directory:
   ```
   cd <output directory>/kernel/
   ```
3. Within this directory, build the NVIDIA module for your kernel:
   ```
   make module
   ```
   After this is done, the Module.symvers file under your kernel build directory contains symbol information for nvidia.ko.
4. Modify your kernel module build process with the following line:
```bash
KBUILD_EXTRA_SYMBOLS := <path to kernel build directory>/Module.symvers
```
Chapter 4. References
4.1. Basics of UVA CUDA Memory Management
Unified virtual addressing (UVA) is a memory address management system enabled by default in CUDA 4.0 and later releases on Fermi and Kepler GPUs running 64-bit processes. The design of UVA memory management provides a basis for the operation of GPUDirect RDMA. On UVA-supported configurations, when the CUDA runtime initializes, the virtual address (VA) range of the application is partitioned into two areas: the CUDA-managed VA range and the OS-managed VA range. All CUDA-managed pointers are within the CUDA-managed VA range, and that range will always fall within the first 40 bits of the process's VA space.
Figure 2. CUDA VA Space Addressing
Subsequently, within the CUDA VA space, addresses can be subdivided into three types:
**GPU**
A page backed by GPU memory. This will not be accessible from the host and the VA in question will never have a physical backing on the host. Dereferencing a pointer to a GPU VA from the CPU will trigger a segfault.
**CPU**
A page backed by CPU memory. This will be accessible from both the host and the GPU at the same VA.
**FREE**
These VAs are reserved by CUDA for future allocations.
This partitioning allows the CUDA runtime to determine the physical location of a memory object by its pointer value within the reserved CUDA VA space.
Addresses are subdivided into these categories at page granularity; all memory within a page is of the same type. Note that GPU pages may not be the same size as CPU pages. The CPU pages are usually 4KB and the GPU pages on Kepler-class GPUs are 64KB. GPUDirect RDMA operates exclusively on GPU pages (created by `cudaMalloc()`) that are within this CUDA VA space.
### 4.2. Userspace API
**Data structures**
```c
typedef struct CUDA_POINTER_ATTRIBUTE_P2P_TOKENS_st {
unsigned long long p2pToken;
unsigned int vaSpaceToken;
} CUDA_POINTER_ATTRIBUTE_P2P_TOKENS;
```
**Function** `cuPointerSetAttribute()`
```c
CUresult cuPointerSetAttribute(const void *data, CUpointer_attribute attribute, CUdeviceptr pointer);
```
In GPUDirect RDMA scope, the interesting usage is when `CU_POINTER_ATTRIBUTE_SYNC_MEMOPS` is passed as the attribute:
```c
unsigned int flag = 1;
cuPointerSetAttribute(&flag, CU_POINTER_ATTRIBUTE_SYNC_MEMOPS, pointer);
```
**Parameters**
**data [in]**
A pointer to an `unsigned int` variable containing a boolean value.
**attribute [in]**
In GPUDirect RDMA scope should always be `CU_POINTER_ATTRIBUTE_SYNC_MEMOPS`.
**pointer [in]**
A pointer.
**Returns**
- `CUDA_SUCCESS`
- if pointer points to GPU memory and the CUDA driver was able to set the new behavior for the whole device memory allocation.
- `anything else`
- if pointer points to CPU memory.
It is used to explicitly enable a strictly synchronizing behavior on the whole memory allocation pointed to by `pointer`, thereby disabling all data transfer optimizations which might create problems with concurrent RDMA and CUDA memory copy operations. This API has CUDA synchronizing behavior, so it should be considered expensive and possibly invoked only once per buffer.
Function `cuPointerGetAttribute()`
```c
CUresult cuPointerGetAttribute(void *data, CUpointer_attribute attribute, CUdeviceptr pointer);
```
This function has two different attributes related to GPUDirect RDMA: `CU_POINTER_ATTRIBUTE_P2P_TOKENS` and `CU_POINTER_ATTRIBUTE_BUFFER_ID`.
⚠️ **WARNING: CU_POINTER_ATTRIBUTE_P2P_TOKENS has been deprecated in CUDA 6.0**
When `CU_POINTER_ATTRIBUTE_P2P_TOKENS` is passed as the attribute, `data` is a pointer to `CUDA_POINTER_ATTRIBUTE_P2P_TOKENS`:
```c
CUDA_POINTER_ATTRIBUTE_P2P_TOKENS tokens;
cuPointerGetAttribute(&tokens, CU_POINTER_ATTRIBUTE_P2P_TOKENS, pointer);
```
In this case, the function returns two tokens for use with the [Kernel API](#).
**Parameters**
- **data [out]**
- Struct `CUDA_POINTER_ATTRIBUTE_P2P_TOKENS` with the two tokens.
- **attribute [in]**
- In GPUDirect RDMA scope should always be `CU_POINTER_ATTRIBUTE_P2P_TOKENS`.
- **pointer [in]**
- A pointer.
**Returns**
- **CUDA_SUCCESS**
- if `pointer` points to GPU memory.
- **anything else**
- if `pointer` points to CPU memory.
This function may be called at any time, including before CUDA initialization, and it has CUDA synchronizing behavior, as in `CU_POINTER_ATTRIBUTE_SYNC_MEMOPS`, so it should be considered expensive and should be invoked only once per buffer.
Note that values set in `tokens` can be different for the same `pointer` value during a lifetime of a user-space program. See [Tokens Usage](#) for a concrete example.
Note that for security reasons the value set in `p2pToken` will be randomized, to prevent it from being guessed by an adversary.
In CUDA 6.0, a new attribute has been introduced that is useful to detect memory reallocations.
When `CU_POINTER_ATTRIBUTE_BUFFER_ID` is passed as the attribute, `data` is expected to point to a 64-bit unsigned integer variable, such as `uint64_t`.
```c
uint64_t buf_id;
cuPointerGetAttribute(&buf_id, CU_POINTER_ATTRIBUTE_BUFFER_ID, pointer);
```
Parameters
data [out]
A pointer to a 64-bit variable where the buffer id will be stored.
attribute [in]
The `CU_POINTER_ATTRIBUTE_BUFFER_ID` enumerator.
pointer [in]
A pointer to GPU memory.
Returns
CUDA_SUCCESS
if pointer points to GPU memory.
anything else
if pointer points to CPU memory.
Some general remarks follow:
- `cuPointerGetAttribute()` and `cuPointerSetAttribute()` are CUDA driver API functions only.
- In particular, `cuPointerGetAttribute()` is not equivalent to `cudaPointerGetAttributes()`, as the required functionality is only present in the former function. This in no way limits the scope where GPUDirect RDMA may be used as `cuPointerGetAttribute()` is compatible with the CUDA Runtime API.
- No runtime API equivalent to `cuPointerGetAttribute()` is provided, as the additional overhead of the CUDA runtime API to driver API call sequence would be unneeded, and `cuPointerGetAttribute()` can be on the critical path, e.g. of communication libraries.
- Whenever possible, we suggest combining multiple calls to `cuPointerGetAttribute()` into a single call to `cuPointerGetAttributes()`.
Function `cuPointerGetAttributes()`
```c
CUresult cuPointerGetAttributes(unsigned int numAttributes,
        CUpointer_attribute *attributes, void **data, CUdeviceptr ptr);
```
This function can be used to inspect multiple attributes at once. The ones most likely related to GPUDirect RDMA are `CU_POINTER_ATTRIBUTE_BUFFER_ID`, `CU_POINTER_ATTRIBUTE_MEMORY_TYPE`, and `CU_POINTER_ATTRIBUTE_IS_MANAGED`.
### 4.3. Kernel API
The following declarations can be found in the `nv-p2p.h` header that is distributed in the NVIDIA Driver package. Please refer to the inline documentation contained in that header file for a detailed description of the parameters and the return values of the functions described below.
Preprocessor macros
NVIDIA_P2P_PAGE_TABLE_VERSION_COMPATIBLE() and NVIDIA_P2P_DMA_MAPPING_VERSION_COMPATIBLE() preprocessor macros are meant to be called by third-party device drivers to check for runtime binary compatibility.
Structure nvidia_p2p_page
```c
typedef struct nvidia_p2p_page {
uint64_t physical_address;
union nvidia_p2p_request_registers {
struct {
uint32_t wreqmb_h;
uint32_t rreqmb_h;
uint32_t rreqmb_0;
uint32_t reserved[3];
} fermi;
} registers;
} nvidia_p2p_page_t;
```
In the nvidia_p2p_page structure only the physical_address field is relevant to GPUDirect RDMA.
Structure nvidia_p2p_page_table
```c
typedef struct nvidia_p2p_page_table {
uint32_t version;
uint32_t page_size;
struct nvidia_p2p_page **pages;
uint32_t entries;
uint8_t *gpu_uuid;
} nvidia_p2p_page_table_t;
```
The version field of the page table should be checked by using NVIDIA_P2P_PAGE_TABLE_VERSION_COMPATIBLE() before accessing the other fields.
The page_size field is encoded according to the nvidia_p2p_page_size_type enum.
Structure nvidia_p2p_dma_mapping
```c
typedef struct nvidia_p2p_dma_mapping {
uint32_t version;
enum nvidia_p2p_page_size_type page_size_type;
uint32_t entries;
uint64_t *dma_addresses;
} nvidia_p2p_dma_mapping_t;
```
The version field of the dma mapping should be passed to NVIDIA_P2P_DMA_MAPPING_VERSION_COMPATIBLE() before accessing the other fields.
**Function** `nvidia_p2p_get_pages()`
```c
int nvidia_p2p_get_pages(uint64_t p2p_token, uint32_t va_space_token,
uint64_t virtual_address,
uint64_t length,
struct nvidia_p2p_page_table **page_table,
void (*free_callback)(void *data),
void *data);
```
This function makes the pages underlying a range of GPU virtual memory accessible to a third-party device.
**WARNING:** This is an expensive operation and should be performed as infrequently as possible - see Lazy Unpinning Optimization.
**Function** `nvidia_p2p_put_pages()`
```c
int nvidia_p2p_put_pages(uint64_t p2p_token, uint32_t va_space_token,
uint64_t virtual_address,
struct nvidia_p2p_page_table *page_table);
```
This function releases a set of pages previously made accessible to a third-party device. Warning: it is not meant to be called from within the `nvidia_p2p_get_pages()` callback.
**Function** `nvidia_p2p_free_page_table()`
```c
int nvidia_p2p_free_page_table(struct nvidia_p2p_page_table *page_table);
```
This function frees a third-party P2P page table and is meant to be invoked during the execution of the `nvidia_p2p_get_pages()` callback.
**Function** `nvidia_p2p_dma_map_pages()`
```c
int nvidia_p2p_dma_map_pages(struct pci_dev *peer,
struct nvidia_p2p_page_table *page_table,
struct nvidia_p2p_dma_mapping **dma_mapping);
```
This function makes the physical pages retrieved using `nvidia_p2p_get_pages()` accessible to a third-party device.
It is required on platforms where the I/O addresses of PCIe resources, used for PCIe peer-to-peer transactions, are different from the physical addresses used by the CPU to access those same resources.
On some platforms, this function relies on a correct implementation of the `dma_map_resource()` Linux kernel function.
Function `nvidia_p2p_dma_unmap_pages()`
```c
int nvidia_p2p_dma_unmap_pages(struct pci_dev *peer,
struct nvidia_p2p_page_table *page_table,
struct nvidia_p2p_dma_mapping *dma_mapping);
```
This function unmaps the physical pages previously mapped to the third-party device by `nvidia_p2p_dma_map_pages()`.
It is not meant to be called from within the `nvidia_p2p_get_pages()` invalidation callback.
Function `nvidia_p2p_free_dma_mapping()`
```c
int nvidia_p2p_free_dma_mapping(struct nvidia_p2p_dma_mapping *dma_mapping);
```
This function is meant to be called from within the `nvidia_p2p_get_pages()` invalidation callback.
Note that the deallocation of the I/O mappings may be deferred, for example after returning from the invalidation callback.
### 4.4. Porting to Tegra
GPUDirect RDMA is supported on the Jetson AGX Xavier platform from CUDA 10.1 and on Drive AGX Xavier Linux based platforms from CUDA 11.2. From this point onwards, this document will refer to Jetson and Drive collectively as Tegra. Owing to the hardware and software divergence of Tegra vis-a-vis Linux-Desktop, already developed applications need to be slightly modified in order to port them to Tegra. The following sub-sections (4.4.1-4.4.3) describe the necessary changes.
#### 4.4.1 Changing the allocator
GPUDirect RDMA on Desktop allows applications to operate exclusively on GPU pages allocated using `cudaMalloc()`. On Tegra, applications will have to change the memory allocator from `cudaMalloc()` to `cudaHostAlloc()`. Applications can either:
1. Treat the returned pointer as if it were a device pointer, provided that the iGPU supports UVA or the `cudaDevAttrCanUseHostPointerForRegisteredMem` device attribute has a non-zero value when queried using `cudaDeviceGetAttribute()` for the iGPU.
2. Get the device pointer corresponding to the allocated host memory using `cudaHostGetDevicePointer()`. Once the application has the device pointer, all the rules that apply to the standard GPUDirect solution also apply to Tegra.
#### 4.4.2 Modification to Kernel API
The declarations under Tegra API column of the following table can be found in the `nv-p2p.h` header that is distributed in the NVIDIA Driver package. Refer to the inline documentation contained in that header file for a detailed description of the parameters and the return values. The table below represents the Kernel API changes on Tegra vis-a-vis Desktop.
#### 4.4.3 Other highlights
1. The base address and the length of the requested mapping must be a multiple of 4KB; failing this leads to an error.
2. Unlike the Desktop version, the callback registered at `nvidia_p2p_get_pages()` will always be triggered when `nvidia_p2p_put_pages()` is invoked. It is the responsibility of the kernel driver to free the `page_table` allocated by calling `nvidia_p2p_free_page_table()`. Note that, similar to the Desktop version, the callback will also be triggered in the scenarios explained in Unpin Callback.
3. Since `cudaHostAlloc()` can allocate memory with the `cudaHostAllocWriteCombined` flag or the default flag, applications are expected to exercise caution when mapping the memory to userspace, for example using the standard Linux `mmap()`. In this regard:
   a) When GPU memory is allocated as write-combined, the userspace mapping should also be write-combined, by passing the `vm_page_prot` member of `vm_area_struct` to the standard Linux interface `pgprot_writecombine()`.
   b) When GPU memory is allocated with the default flag, no modification should be made to the `vm_page_prot` member of `vm_area_struct`.
An incompatible combination of map and allocation attributes will lead to undefined behavior.
Notice
This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation ("NVIDIA") makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.
NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.
Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer ("Terms of Sale"). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.
VESA DisplayPort
DisplayPort and DisplayPort Compliance Logo, DisplayPort Compliance Logo for Dual-mode Sources, and DisplayPort Compliance Logo for Active Cables are trademarks owned by the Video Electronics Standards Association in the United States and other countries.
HDMI
HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.
OpenCL
OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.
Trademarks
NVIDIA and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
Copyright
© 2007-2021 NVIDIA Corporation. All rights reserved.
A Two-Phase Online Prediction Approach for Accurate and Timely Adaptation Decision
Chen Wang, Jean-Louis Pazat
To cite this version:
Chen Wang, Jean-Louis Pazat. A Two-Phase Online Prediction Approach for Accurate and Timely Adaptation Decision. International Conference on Service Computing, IEEE, Jun 2012, Honolulu, Hawaii, United States. hal-00705289
HAL Id: hal-00705289
https://inria.hal.science/hal-00705289
Submitted on 7 Jun 2012
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
A Two-Phase Online Prediction Approach for Accurate and Timely Adaptation Decision
Chen Wang and Jean-Louis Pazat
IRISA/INRIA, Campus de Beaulieu, 35042, Rennes Cedex, France
Email: {chen.wang, jean-louis.pazat}@inria.fr
Abstract—A Service-Based Application (SBA) is built by defining a workflow that composes and coordinates different Web services available via the Internet. In the context of on-demand SBA execution, suitable services are selected and integrated at runtime to meet different non-functional requirements (such as price and execution time). In such a dynamic and distributed environment, an important issue is to guarantee the end-to-end Quality of Service (QoS). As a consequence, the SBA provider is required to monitor each running SBA instance, analyze its runtime execution state, identify proper adaptation plans if necessary, and finally apply the corresponding countermeasures. One of the main challenges is to accurately trigger the adaptation process as early as possible.
In this paper, we present a two-phase decision approach that can accurately analyze the adaptation needs of the on-demand SBA execution model. Our approach is based on online prediction techniques: an adaptation decision is reached by predicting an upcoming end-to-end QoS degradation through two-phase evaluations. First, the end-to-end QoS is estimated at runtime based on monitoring techniques; if a QoS degradation is likely to happen, in the second phase both static and adaptive strategies are introduced to assess whether it is the right time to make the final adaptation decision. Our approach is evaluated and validated by a series of realistic simulations.
Keywords—SBA; adaptation decision; SLA; quality prediction; classification;
I. INTRODUCTION
Service-Oriented Architecture (SOA) is adopted today by many enterprises as a flexible solution for building Service-Based Applications (SBA). An SBA is a dynamic and distributed software system that integrates and coordinates a set of third-party Web services (the constituent services) accessible over the Internet. In this scenario, the SBA's end-to-end Quality of Service (QoS) is determined by the QoS of all constituent services. For example, the execution time of an SBA instance depends on how fast each constituent service responds. However, in this loosely coupled environment, the execution of an SBA may fail, or fail to meet the required quality level: a constituent service may take longer to respond due to network congestion; moreover, infrastructure failures can leave a service completely unresponsive.
In this context, the Service Level Agreement (SLA) plays an important role as a guarantee of non-functional quality. An SLA is a mutually agreed contract between service requester and provider, which dictates the expectations as well as the obligations with regard to how a service is expected to be provided: on one hand, the expected quality level is formulated by specifying agreed target values for a collection of QoS attributes; on the other hand, penalty measures are defined in case these quality expectations are not met. SLA violations can lead to undesirable results, such as reputation degradation and penalty payments. To prevent SLA violations, it is essential for SBA providers to guarantee the end-to-end QoS by taking adaptation countermeasures when needed.
One of the key challenges is to determine the need for adaptation in order to make accurate adaptation decisions. Existing approaches fall into two categories: offline analysis [1], [2], [3] and online prediction [4], [5], [6]. Offline approaches decide when and how to improve the end-to-end quality of an SBA by reasoning about the causes of past SLA violations. The adaptation aims at preventing SLA violations in future executions rather than ongoing ones. By contrast, online approaches predict an upcoming SLA violation, and thereby decide to trigger a preventive adaptation in order to improve the QoS of the running execution instance before the SLA violation actually happens.
This paper investigates an online prediction approach for on-demand executions of SBAs. In this context, in response to different non-functional preferences and constraints (e.g. price and execution time), each constituent service is selected at runtime from a set of functionally equivalent candidates with different QoS [7]. As a result, any two distinct executions are instantiated with different configurations, defined as both local and global QoS expectations. It is therefore challenging to accurately predict potential SLA violations for all running SBA instances with unrelated configurations.
In this paper, we introduce a two-phase online prediction approach to decide the best adaptation timing for on-demand executions of SBAs. The first phase suspects an SLA violation by predicting an upcoming end-to-end QoS degradation; the prediction is based on monitoring techniques and an estimation of the remaining execution. In the second phase, both static and adaptive strategies are introduced to predict the reliability of the suspicion, in order to decide whether it is necessary to trigger a preventive adaptation. Our approach is evaluated by a series of realistic simulations; the results show that it can make both accurate and timely runtime adaptation decisions for on-demand SBA executions.
The rest of the paper is organized as follows: Section II provides the background of our problem. In Section III, the main existing decision approaches are discussed and our approach is introduced. Sections IV and V then present the two phases of our approach in depth. In Section VI, the performance of our approach is studied on a set of realistic simulations. Finally, conclusions and future work are presented in Section VII.
II. BACKGROUND
A. Illustrative Example
In this paper, we use a travel agency as an illustrative example to present our approach. As shown in Figure 1, an SBA is implemented to propose traveling plans to its clients. A request comprises a set of parameters such as information about the origin and the destination. In order to respond to a client’s request, a workflow is defined to coordinate a collection of interrelated tasks: firstly, the client’s identity is verified by task $t_1$ and the inputs from the requester are validated and analyzed by task $t_2$. Then, the execution diverges into two parallel branches: tasks $t_3$ and $t_4$ look for round-trip flight tickets from all airline companies. Concurrently, task $t_5$ searches for available hotels in the destination city, and the weather information is provided by task $t_6$. Both branches converge before task $t_7$, which generates all propositions of traveling plans based on the client’s criteria, such as total budget. Finally, task $t_8$ sorts all the propositions according to the client’s preferences and returns the results.
The SBA requires service collaboration across enterprise boundaries: each task can be bound either to an internal service (e.g. tasks $t_1, t_2, t_7, t_8$) or to an external one provided by third-party enterprises (e.g. tasks $t_3, t_4, t_5, t_6$). Constituent services are selected on demand at runtime: for example, for premium clients, fast services are selected and bound in order to return the propositions of traveling plans as quickly as possible; on the other hand, for normal clients, cheaper and slower services are used and the execution may take correspondingly longer.
B. Global and Local SLA
As introduced above, an SLA defines both the expected QoS and penalty measures. First of all, the definition and enforcement of penalties is beyond the scope of this paper. Furthermore, we assume that some QoS attributes can be considered deterministic: their values cannot change at runtime once negotiated (e.g. price). Accordingly, an SLA violation can only be caused by non-deterministic QoS attributes, whose actual values are affected by the distributed and dynamic runtime environment, such as response time. For the sake of simplicity, an SLA is therefore modeled as a set of target values for non-deterministic QoS attributes only. As a proof of concept, our discussion focuses on response time in the remainder of this paper.
The SBA plays the role of service provider as well as consumer. For both roles, it negotiates an SLA with each of its counterparts, as shown in Figure 1: the SLA between the SBA and each constituent service is called a local SLA, and the one negotiated with the SBA requester is called the global SLA. A local SLA reflects the expected time consumption for executing the corresponding task $t_i$, denoted $sla(t_i) = \langle q_l(t_i) \rangle$, whereas the global SLA dictates the expected end-to-end execution time of the SBA, denoted $gsla = \langle gc_t \rangle$. Obviously, the definition of the global SLA depends on the local ones. As an example, Table I lists the expected execution times defined in both global and local SLAs for an instance of the illustrative example shown in Figure 1.
C. Prevention of Global SLA Violation
In order to enhance its reputation and to avoid penalties, it is mandatory for a service provider to prevent global SLA violations for every running execution instance. The general solution for runtime prevention of SLA violations is to implement the MAPE control-feedback loop (Monitor-Analyze-Plan-Execute) [8], as depicted in Figure 2: 1) Monitor: first, each execution instance of the SBA is monitored by intercepting communication messages in order to collect a series of events; 2) Analyze: these events are used to evaluate the quality state of a running execution instance and to analyze the need for adaptation; 3) Plan: once an adaptation decision is made, a suitable adaptation plan (e.g. a list of actions to improve the end-to-end quality) is identified; 4) Execute: finally, the corresponding countermeasures are applied to the ongoing SBA instance.
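The four MAPE steps can be sketched as one iteration of a control loop. The function below is a minimal illustration only; the callables `monitor`, `analyze`, `plan`, and `execute` are hypothetical stand-ins for the components of Figure 2, not an API defined in the paper:

```python
# Minimal sketch of one MAPE loop iteration (illustrative, not the
# paper's implementation).
def mape_step(monitor, analyze, plan, execute, instance):
    events = monitor(instance)       # 1) Monitor: collect events
    decision = analyze(events)       # 2) Analyze: is adaptation needed?
    if decision:
        actions = plan(decision)     # 3) Plan: pick countermeasures
        execute(instance, actions)   # 4) Execute: apply them to the instance
    return decision
```

In practice the loop is driven continuously by the Monitor component; each completed task event triggers one such iteration.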
D. Problem Definition
One of the key challenges in efficiently implementing the MAPE loop is to make accurate adaptation decisions (the Analyze step). Our research studies an online prediction approach that analyzes the need for adaptation by forecasting whether the SLA is likely to be violated in the future. An eligible online prediction approach has to meet the following requirements (challenges): 1) Effectiveness. An effective approach successfully predicts as many SLA violations as possible. 2) Precision. An effective approach might not be precise if it raises many false predictions, which lead to unnecessary adaptations and thereby bring additional cost and complexity: on one hand, runtime adaptation is costly since extra resources are required to identify and execute an adaptation plan; on the other hand, the time consumed by a runtime adaptation process may delay the execution of the SBA. 3) Timing. It is desirable to decide as early as possible: late decisions are usually precise but less useful, since the best adaptation opportunities might be missed and the benefit of preventive adaptation is diminished. 4) Efficiency. The decision algorithm must be efficient (fast decisions) in order to meet the tight time constraints at runtime.
III. RELATED WORK AND MOTIVATION
A. Related Work
The existing online prediction approaches in the literature forecast SLA violations caused by either functional failures or non-functional deviations. In the former case, the recovery from functional failures requires extra execution time and additional cost, which can lead to global SLA violations. Some research works use online testing techniques to test all constituent services in parallel with the execution of an SBA instance. By this means, an upcoming functional failure can be forecast before it actually occurs. [9] presents the PROSA framework, which defines key activities to initiate online testing either at the binding level or at the service composition level, and thereby proactively triggers the adaptation process. [10] investigates how to guarantee the functional correctness of conversational services; the authors propose a novel approach that enables proactive adaptations through just-in-time testing. Online testing is helpful for detecting potential functional failures, but it can hardly be aware of deviations of the end-to-end QoS. Furthermore, it requires each constituent service to provide a test mode (e.g. free interfaces for testing).
Our research work belongs to the latter case, which predicts SLA violations by asserting non-functional deviations that might happen by the end of the execution (e.g. delays). Some research works use runtime verification techniques to determine the necessity of adaptation and to trigger preventive adaptation. In [6], the authors introduce the SPADE approach: after the execution of each task, if the local SLA is violated, SPADE uses both monitored data and assumptions to verify whether the global SLA can still be satisfied. If it reveals that the global SLA is likely to be violated, an adaptation is triggered accordingly. However, early verifications are largely based on assumptions rather than monitored data, so they are inaccurate and can lead to many unnecessary adaptations.
Other research works use machine learning techniques in order to provide precise predictions and avoid unnecessary adaptations. In [4], a set of concrete points in the workflow are defined as checkpoints. Each checkpoint is associated with a predictor, implemented by a regression classifier. When the execution of the workflow reaches a checkpoint, the corresponding predictor is activated and uses the knowledge learned from past executions to predict whether the global SLA will be violated. This work is extended in [5] by the PREvent framework, which integrates event-based monitoring, runtime prediction of SLA violations, and automated runtime adaptation. Such a checkpoint-based prediction approach has some limitations: firstly, misbehavior (e.g. a huge delay) between two checkpoints cannot be handled in time, and meanwhile the best adaptation opportunity might be lost. Furthermore, poorly selected checkpoints may lead to undesirable results, such as unnecessary adaptations, yet the selection of optimal checkpoints is complicated and challenging, especially for complex workflows.
B. Our Approach: Two-Phase Decision Approach
In order to provide both accurate and timely predictions of SLA violations, we propose a two-phase online prediction approach. As shown in Figure 3, an adaptation decision is reached through the following steps: 1) listen to the events emitted by the Monitor component (refer to Figure 2); once the execution of a task $t_i$ is completed, go to step 2. 2) If the execution of the workflow has not yet finished, go to step 3; otherwise, the algorithm terminates in the state silence, meaning that during the entire execution no SLA violation was predicted (no adaptation decision was made). 3) Estimate the values of the QoS attributes defined in the global SLA (e.g. execution time) based on the monitored data and an estimation of the remaining execution. 4) Compare the estimated value with the target value defined in the global SLA; if a violation is likely to happen, a suspicion of SLA violation is reported and we go to step 5; otherwise, return to step 1. 5) Evaluate the trustworthiness of the suspicion in order to decide whether to accept or neglect it. 6) If the suspicion is accepted, our approach terminates in the state warning, predicting an upcoming SLA violation and making the adaptation decision; otherwise, the suspicion is neglected and we go back to step 1.
The core of our approach is the two-phase evaluation: the estimation phase (steps 3, 4) evaluates whether the global SLA is likely to be violated, and the decision phase (steps 5, 6) evaluates how likely the suspected violation is to actually happen. The additional evaluation yields more precise adaptation decisions, since inaccurate early suspicions can be neglected in the decision phase. Additionally, without the limitation of predefined checkpoints, it is possible to react to any misbehavior in time. In the following, Sections IV and V use the execution time as an example to detail these two phases respectively.
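Steps 1-6 above can be condensed into a small loop. The sketch below is illustrative only; `estimate_qos` and `suspicion_trusted` are hypothetical stand-ins for the estimation phase (steps 3-4) and the decision phase (steps 5-6), not functions from the paper:

```python
# Hedged sketch of the two-phase decision loop (steps 1-6).
def two_phase_decision(task_completions, gsla_target,
                       estimate_qos, suspicion_trusted):
    for completed in task_completions:        # step 1: a task just finished
        estimated = estimate_qos(completed)   # step 3: end-to-end estimate
        if estimated > gsla_target:           # step 4: suspicion raised
            if suspicion_trusted(completed):  # step 5: assess the suspicion
                return "warning"              # step 6: trigger adaptation
    return "silence"  # step 2: execution ended, no violation predicted
```

For example, with a toy estimator that sums the measured task durations and a trust function that accepts suspicions only after at least two completed tasks, the loop returns `"warning"` as soon as the running estimate exceeds the global target.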
IV. ESTIMATION PHASE
To estimate the end-to-end execution time of a running SBA instance, two kinds of information are required: 1) the measured execution time of the tasks whose executions have already completed; 2) the probable time consumption of uncompleted tasks, including the tasks that are currently being executed as well as those whose executions have not started yet. Based on monitoring techniques [5], we assume that the former information is known at the time of estimation and is accessible from an internal database. In this section, we first show how to estimate the local execution time of uncompleted tasks, and then introduce an efficient tool for rapid estimation of the global execution time.
A. Estimation of Local Execution Time
The local execution time of a task $t_i$ depends on how fast the corresponding constituent service $S_B(t_i)$ responds. As a result, some research works [5] propose to use the arithmetic mean of the last $n$ measured response times of $S_B(t_i)$ as the estimation of the local execution time of $t_i$, denoted $q_E(t_i)$. However, this method is sensitive to outliers. Suppose that the last 10 measured response times are: 940ms, 1,020ms, 1,050ms, 1,000ms, 970ms, 1,100ms, 1,020ms, 24,060ms, 960ms, 980ms. The arithmetic mean is 3,310ms, which does not properly reflect the probable response time of $S_B(t_i)$. In addition, this method cannot be used when there is no (or insufficient) historical information: for example, in the context of on-demand SBA execution, a task may be bound to a service that has never been invoked before.
We provide both dynamic and static methods for local estimation. To estimate the response time of $S_B(t_i)$, the SBA provider is required to record the response times of all constituent services invoked so far. The dynamic method looks into the past records for information about $S_B(t_i)$. If enough historical information is found, the approaches presented in [11] can be used to first detect and remove outliers before computing the arithmetic mean. A more efficient alternative is to directly use the median of the last $n$ measures. In the previous example, the median is 1,000ms: five measures are less than or equal to it and five are greater, including the outlier. When there is no (or insufficient) historical information about $S_B(t_i)$, the static method is activated instead, which uses the target value defined in the local SLA as the estimation ($q_E(t_i) = q_l(t_i)$); in this case, it is natural to trust the service provider.
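The robustness of the median on the ten measures above can be checked in a few lines with Python's standard library; `median_low` picks the lower of the two middle values, which matches the 1,000ms value used in the text:

```python
from statistics import mean, median_low

# The ten measured response times (in ms) from the example, including
# the 24,060 ms outlier.
samples = [940, 1020, 1050, 1000, 970, 1100, 1020, 24060, 960, 980]

naive = mean(samples)         # 3310.0: heavily skewed by the single outlier
robust = median_low(samples)  # 1000: five measures <= it, five greater
```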
B. Estimation of Global Execution Time
Having the runtime knowledge of the local execution time of each task $t_i$, denoted $q_L(t_i)$, the global execution time of a running SBA instance can be estimated by using aggregation functions [12] ($q_L(t_i)$ equals the measured execution time for completed tasks and equals $q_E(t_i)$ for uncompleted ones). However, runtime aggregation is costly and time-consuming, especially for complex and unstructured workflows. We introduce the Program Evaluation and Review Technique (PERT) [13] as an efficient tool for rapid runtime estimation of the global execution time. PERT was originally developed for planning, monitoring and managing the progress of complex projects. In our approach, PERT is used to manage the workflow execution and to facilitate decision making. It maintains some additional information for each task $t_i$:
- The Expected Start Time (EST): $T_E(t_i)$. This is the expected time by which the execution of task $t_i$ can start: $T_E(t_i) = \max \{ T_E(t_j) + q_L(t_j) \mid t_j \text{ is a direct predecessor of } t_i \}$. For the first task (start) $t_s$, $T_E(t_s) = 0$.
- The Latest Finish Time (LFT): $T_L(t_i)$. This is the latest time by which task $t_i$ can finish without delaying the ongoing execution instance: $T_L(t_i) = \min \{ T_L(t_k) - q_L(t_k) \mid t_k \text{ is a direct successor of } t_i \}$. For the last task (end) $t_e$, $T_L(t_e) = gc_t$, which is the target value defined in the global SLA.
- The slack time: $S(t_i)$. This is the maximum tolerable delay for executing task $t_i$: $S(t_i) = T_L(t_i) - T_E(t_i) - q_L(t_i)$.
- The critical path $CP$. A path is a sequential execution of tasks from the beginning to the end of a workflow. $CP$ is the path whose tasks all have the least slack time.
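The forward pass for EST and backward pass for LFT can be sketched in a few lines of Python (the diamond workflow, its durations, and the target time below are illustrative, not the paper's Table I example; the paper's implementation is in Java, but we use Python for brevity):

```python
def pert(tasks, edges, durations, gc_t):
    """Compute EST, LFT and slack for a DAG workflow.

    tasks: task names in topological order; edges: (pred, succ) pairs;
    durations: q_L per task; gc_t: target global execution time from
    the global SLA. Returns (EST, LFT, slack) dictionaries.
    """
    preds = {t: [] for t in tasks}
    succs = {t: [] for t in tasks}
    for a, b in edges:
        preds[b].append(a)
        succs[a].append(b)

    # Forward pass: T_E(t_i) = max over direct predecessors (0 at start).
    est = {}
    for t in tasks:
        est[t] = max((est[p] + durations[p] for p in preds[t]), default=0)

    # Backward pass: T_L(t_i) = min over direct successors (gc_t at end).
    lft = {}
    for t in reversed(tasks):
        lft[t] = min((lft[s] - durations[s] for s in succs[t]), default=gc_t)

    slack = {t: lft[t] - est[t] - durations[t] for t in tasks}
    return est, lft, slack

# Diamond workflow t1 -> (t2 || t3) -> t4 with gc_t = 8.
est, lft, slack = pert(
    ['t1', 't2', 't3', 't4'],
    [('t1', 't2'), ('t1', 't3'), ('t2', 't4'), ('t3', 't4')],
    {'t1': 2, 't2': 3, 't3': 1, 't4': 2},
    gc_t=8)
```

Here the tasks with the least slack (t1, t2, t4, each with slack 1) form the critical path, while t3 on the faster branch can tolerate a longer delay.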
The PERT chart is initially constructed based on both the global and local SLAs ($q_L(t_i)=q_l(t_i)$); all the information is then presented on an X-Y chart. As an example, for the execution instance of the illustrative example with all local and global SLAs given in Table I, the corresponding PERT chart is computed and depicted in Figure 4. The X-axis represents the accumulated execution time, while the Y-axis lists the tasks composed in the workflow. Each task $t_i$ is represented by a single horizontal bar which starts at $T_E(t_i)$ and ends at $T_L(t_i)$. The length of the bar corresponds to the maximum acceptable duration for executing the task, under the assumption that all the other tasks complete on schedule.
In order to reflect the most up-to-date runtime execution state, the PERT chart can be updated whenever new local knowledge $q_L(t_i)$ becomes available (e.g., the newly measured execution time of a just-completed task). Note that the PERT chart only needs to be partially reconstructed by recomputing some of the above-mentioned information (e.g., completed tasks do not need to be recomputed), which can be seen as adjusting some bars along the X-axis. The update can be triggered either periodically, or based on checkpoints or events (e.g., after each “invoke” event). In any case, the reconstruction of the chart can be performed in parallel with the service invocations, so it incurs no extra time cost.
By using the PERT chart, the estimation of the global execution time is simplified. When a task $t_i$ finishes, the real accumulated execution time at this moment can be measured, denoted $AT(t_i)$. If $AT(t_i) > T_L(t_i)$, the global execution time is estimated to be exceeded by $AT(t_i) - T_L(t_i)$, and a global SLA violation is accordingly suspected. A suspicion of violation is represented by a pair, denoted $S = \langle T_D, t_i \rangle$, which means that after executing task $t_i$, a delay of $T_D$ is estimated ($T_D = AT(t_i) - T_L(t_i)$). In this way, instead of running a complex aggregation function, the estimation of the global SLA status requires only one comparison operation.
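As a sketch of this one-comparison check (the helper name and the dictionary-based chart representation are our own), a suspicion is raised exactly when the measured accumulated time overruns the task's latest finish time:

```python
def check_suspicion(task, at, lft):
    """Return a suspicion S = (T_D, t_i) if AT(t_i) > T_L(t_i), else None.

    task: the just-completed task t_i; at: measured AT(t_i);
    lft: mapping from task to its latest finish time T_L from the PERT chart.
    """
    if at > lft[task]:
        return (at - lft[task], task)  # estimated delay T_D and the task
    return None

check_suspicion('t4', at=8.5, lft={'t4': 8})  # (0.5, 't4'): violation suspected
check_suspicion('t4', at=7.0, lft={'t4': 8})  # None: on schedule
```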
V. DECISION PHASE
In this section, both static and adaptive decision strategies are introduced to evaluate the trustworthiness of a suspicion in order to identify the need for preventive adaptation.
A. Weight of a Suspicion
In the context of on-demand SBA execution, two suspicions with the same attributes ($T_D$ and $t_i$) reported by two distinct SBA instances may have different significance. With different configurations, “after task $t_i$” corresponds to different degrees of completion of the workflow execution. Using our illustrative example, $S_1$ and $S_2$ are both reported after task $t_7$ with the same estimated delay of 350ms; but for $S_1$, 90% of the expected global execution time ($gc_t$) has been consumed after executing $t_7$, whereas only 60% for $S_2$. Obviously, $S_1$ is stronger than $S_2$, since a delay detected near the end of the execution is more likely to materialize.
In order to model the significance of a suspicion $S$ for on-demand executions of SBA, we introduce the concept of weight, denoted $\tau_s$. $\tau_s$ is a decimal value between 0 and 1: a greater value indicates a stronger suspicion. An earlier suspicion is often less accurate, because it is based more on estimated values than on measured values; accordingly, it has a lower weight. As the execution proceeds, more and more local estimates are replaced by measured values, so a later suspicion becomes more accurate and receives a correspondingly higher weight. In our approach, the weight of a suspicion $S$ is expressed as the percentage of the execution that is supposed to be completed at that moment, defined as $\tau_s(S) = T_L(t_i)/gc_t$, where $T_L(t_i)$ denotes the LFT of task $t_i$ and $gc_t$ is the expected global execution time defined in the global SLA.
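The weight formula is a one-liner; a tiny sketch (names are ours, and the 2,000ms target is an illustrative value, not from the paper):

```python
def suspicion_weight(lft_ti, gc_t):
    """tau_s(S) = T_L(t_i) / gc_t: the fraction of the expected global
    execution time supposed to be consumed when the suspicion is raised."""
    return lft_ti / gc_t

suspicion_weight(1800, 2000)  # 0.9: a late, hence stronger, suspicion
suspicion_weight(1200, 2000)  # 0.6: an earlier, weaker one
```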
We provide three static strategies and an adaptive strategy to evaluate a suspicion $S$ using the estimated delay $T_D$ and its weight $\tau_s(S)$. Static strategies compute the maximum allowed delay $\text{max}_d$ with respect to $\tau_s(S)$ using predefined evaluation functions. If $T_D$ is greater than $\text{max}_d$, the suspicion is considered strong enough and an adaptation decision is made; otherwise, it is neglected. By contrast, the adaptive strategy uses machine learning techniques to build a classifier that learns from past suspicions in order to predict whether or not the current suspicion will actually come true.
B. Static Decision Strategies
1) Qualitative strategy: The idea is to neglect all early suspicions, since they are considered inaccurate. By specifying a weight threshold $\rho_w$, only the suspicions with $\tau_s > \rho_w$ are accepted to report a warning of violation. Thus, its evaluation function is a step function, as depicted by $f_1$ in Figure 5: $\text{max}_d$ equals $+\infty$ if $\tau_s$ is smaller than $\rho_w$ and equals 0 otherwise. Figure 5 also gives three sample suspicions with different weights and estimated delays: using the qualitative strategy, $S_1$ will be neglected whereas both $S_2$ and $S_3$ will be accepted.
2) Quantitative strategy: The main limitation of the qualitative strategy is that the adaptation cannot be triggered until a certain percentage of the workflow has been executed, even though a huge delay may arise at the beginning of the execution (e.g., $S_1$). In order to react to such problems as early as possible, the quantitative strategy evaluates a suspicion by considering both $\tau_s(S)$ and $T_D$. In this case, the SBA provider specifies an evaluation function based on his/her own experience, such as $f_2$ in Figure 5. Using $f_2$, $S_1$ and $S_2$ will lead to adaptation decisions while $S_3$ will not.
3) Hybrid strategy: The quantitative strategy may fail to detect slight delays when the execution is approaching the end, such as $S_3$. In such cases, a slight delay might still lead to a large penalty due to an SLA violation. The hybrid strategy is more critical: by specifying $\rho_w$, if $\tau_s(S) > \rho_w$, the suspicion is unconditionally accepted; otherwise, the quantitative strategy is applied. Thus, its evaluation function follows $f_2$ when $\tau_s < \rho_w$ and follows $f_3$ ($\text{max}_d = 0$) otherwise. By using the hybrid strategy, all three suspicions in Figure 5 will be accepted.
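The three static strategies can be sketched as evaluation functions returning $\text{max}_d$ (a suspicion is accepted when $T_D > \text{max}_d$). The quantitative shape follows the function later used in the experiments, $(1-\tau_s) \cdot gc_t \cdot \rho$; the threshold $\rho_w = 0.65$ and target $gc_t = 2000$ in the usage lines are illustrative assumptions:

```python
INF = float("inf")

def max_delay_qualitative(tau, rho_w):
    # f1: tolerate any delay before the weight threshold, none after.
    return INF if tau < rho_w else 0.0

def max_delay_quantitative(tau, gc_t, rho=0.05):
    # f2 (shape from the experiments): tolerate larger delays early on.
    return (1 - tau) * gc_t * rho

def max_delay_hybrid(tau, rho_w, gc_t, rho=0.05):
    # Quantitative below the threshold; above it, max_d = 0 (f3):
    # any positive delay is accepted.
    return max_delay_quantitative(tau, gc_t, rho) if tau < rho_w else 0.0

def accept(delay, max_d):
    """A suspicion is strong enough iff its estimated delay exceeds max_d."""
    return delay > max_d

# With rho_w = 0.65 and gc_t = 2000ms (illustrative values):
accept(350, max_delay_qualitative(0.3, 0.65))   # False: early suspicion neglected
accept(350, max_delay_quantitative(0.3, 2000))  # True: max_d = 70 < 350
accept(5, max_delay_hybrid(0.9, 0.65, 2000))    # True: past the threshold, max_d = 0
```

This reproduces the behaviour described for Figure 5: an early large delay is ignored by the qualitative strategy but caught by the quantitative one, and a late slight delay is caught only once the hybrid strategy switches to unconditional acceptance.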
C. Adaptive Decision Strategy
Static decision strategies are useful when insufficient historical information is available. However, from a long-run perspective, they have the following limitations. First of all, it is challenging for the SBA provider to manually identify a suitable evaluation function based on his/her past experience: such experience is sometimes hard to express as a regular function. Additionally, once defined, the evaluation function cannot adjust itself (it can only be modified manually by the SBA provider) in order to improve the quality of decisions.
The adaptive strategy models the adaptation decision as a classification problem. The correctness of a suspicion ($C_S$) can be evaluated when the corresponding execution terminates (assuming no adaptation action is actually executed): if the global SLA is really violated, the suspicion is confirmed as correct ($C_S=\text{true}$); otherwise, it is marked as a false one ($C_S=\text{false}$). As a result, at the end of each execution, a set of suspicion records can be created based on all reported suspicions and their correctness. A suspicion record is described by two numeric attributes (the estimated delay and its corresponding weight) and a categorical attribute defined as the class (true or false),
denoted as $S_R = \langle T_D, \tau_s(S), C_S \rangle$. All historical suspicion records are organized into a training dataset, as illustrated in Figure 6. The dataset is often depicted as a table, with each row representing a suspicion record. Based on machine learning techniques [14], a classifier is built to progressively learn from past experience. The learned knowledge is a specific algorithm that determines the class of a new suspicion based on its attributes. Once a suspicion is reported and classified as correct ($C_S$=true), the adaptation decision is made accordingly; otherwise, the suspicion is neglected.
Since space is limited, we do not provide the details of data cleaning or of how the knowledge is learned and represented; interested readers can refer to [14], [15]. The classifier needs to be retrained regularly to improve the prediction quality. The retraining can be carried out in one of the following ways: 1) after every $N$ predictions, 2) periodically (after a fixed duration), or 3) on demand by the SBA provider.
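To illustrate the data flow (not the paper's actual classifier, which is built with WEKA), here is a toy 1-nearest-neighbour stand-in over suspicion records $\langle T_D, \tau_s, C_S \rangle$; the records, the distance metric, and the weight-scaling factor are all illustrative assumptions:

```python
def classify(record, training):
    """Predict C_S for a new suspicion (T_D, tau_s) from labelled
    records [(T_D, tau_s, C_S), ...] via 1-nearest-neighbour."""
    t_d, tau = record

    def dist(r):
        # Scale the weight axis so both attributes contribute comparably
        # (delays are in ms, weights in [0, 1]); the factor is arbitrary.
        return (r[0] - t_d) ** 2 + (1000 * (r[1] - tau)) ** 2

    return min(training, key=dist)[2]

# Four hand-made records: large delays proved correct, small ones false.
history = [(350, 0.9, True), (20, 0.2, False),
           (400, 0.5, True), (5, 0.8, False)]
classify((380, 0.6), history)   # True: nearest record is (400, 0.5, True)
classify((10, 0.75), history)   # False: nearest record is (5, 0.8, False)
```

In the real system the classifier would be retrained on the growing dataset as described above; this sketch only shows how a new suspicion's attributes map to an accept/neglect decision.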
VI. EXPERIMENTAL RESULTS
A. Experimental Setup
Our approach is evaluated and validated by a set of experiments built on a realistic simulation model, since a real implementation is costly: it requires implementing the entire MAPE control loop as well as dealing with other challenging problems of on-demand SBA execution, such as service selection or interface mismatches.
1) Realistic simulation model: For each constituent service, we are only interested in how it responds rather than what it responds. Therefore, instead of invoking real-world Web services, each task is bound to a virtual service, which only simulates the non-functional aspects of a service invocation (e.g., response time). In our experiments, 100 virtual services are created based on the realistic QoS datasets provided by [16], which record the non-functional performance (such as response time, throughput, etc.) of a large number of real-world service invocations. A virtual service collects all the invocation records from the same requester to the same Web service. Using these records, each virtual service defines two methods: 1) simulate() randomly selects one of the past records to simulate the non-functional aspects of an invocation; 2) getExpectedRT() determines the expected response time given a percentage threshold $\phi$, which indicates the percentage of past invocations that respond within the expected value. In our experiments, in order to create a scenario with a high violation rate, $\phi$ is set to 0.6 (a 40% probability of a local SLA violation for each virtual service).
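A virtual service can be sketched as follows (the class, the empirical-quantile implementation of getExpectedRT(), and the synthetic records are our own; the paper's services are built from the real QoS dataset of [16]):

```python
import random

class VirtualService:
    """Replays past invocation records instead of calling a real service."""

    def __init__(self, past_response_times):
        self.records = past_response_times

    def simulate(self):
        # simulate(): replay a randomly chosen past response time.
        return random.choice(self.records)

    def get_expected_rt(self, phi=0.6):
        # getExpectedRT(): smallest recorded value such that a fraction
        # phi of past invocations responded within it.
        ordered = sorted(self.records)
        k = max(0, int(phi * len(ordered)) - 1)
        return ordered[k]

# Ten synthetic records, 100ms..1000ms.
vs = VirtualService(list(range(100, 1100, 100)))
vs.get_expected_rt(0.6)  # 600: 60% of past invocations responded within 600ms
```

With $\phi = 0.6$, roughly 40% of replayed invocations exceed the expected value, which is exactly the high-violation setup the experiments aim for.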
2) Simulate an execution of SBA: An execution of an SBA is simulated in three stages. In the first stage, an SBA instance is created by binding each task $t_i$ to a randomly selected virtual service, denoted $vs(t_i)$, and the corresponding local SLA is generated by completing a template with the expected response time of $vs(t_i)$. Then, the global SLA is generated by computing the expected end-to-end execution time using aggregation functions [12]. Finally, based on both the local and global SLAs, a PERT chart is constructed.
The second stage simulates the execution of this SBA instance. First, each selected virtual service $vs(t_i)$ simulates a response time, which is taken as the real execution time of task $t_i$. Then, by running the aggregation functions along all execution paths, the real accumulated execution time by which the execution of task $t_i$ completes can be computed, denoted $AT(t_i)$. Finally, a collection of “receive” events is created with the corresponding timestamps, denoted $Recv = \langle t_i, AT(t_i) \rangle$.
Finally, in the third stage, these “receive” events are sorted by timestamp and then processed sequentially: if $AT(t_i) > T_L(t_i)$, a set of predictors is activated to make adaptation decisions based on different strategies. A predictor is a Java object that implements a specific decision strategy. After the prediction, the PERT chart is updated, and the static method is used for the estimation of local execution times.
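The third stage can be sketched as an event-processing loop (names are ours; in the paper the predictors are Java objects, here modelled as callables from a suspicion's delay and task to an accept/reject decision):

```python
def process_events(events, lft, predictors):
    """Replay "receive" events in timestamp order and collect decisions.

    events: list of (task, AT) pairs; lft: T_L per task from the PERT
    chart; predictors: mapping name -> decide(delay, task) -> bool.
    """
    decisions = []
    for task, at in sorted(events, key=lambda e: e[1]):
        if at > lft[task]:                 # suspicion: AT(t_i) > T_L(t_i)
            delay = at - lft[task]
            for name, decide in predictors.items():
                if decide(delay, task):
                    decisions.append((name, task, delay))
    return decisions

# t1 finishes on schedule; t2 overruns its LFT by 2.0 time units.
process_events([('t2', 12.0), ('t1', 5.0)],
               {'t1': 6.0, 't2': 10.0},
               {'always': lambda delay, task: True})
```

In the full simulation the PERT chart would also be updated after each event; the sketch omits that step to keep the decision flow visible.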
B. Evaluation Metrics
Contingency table metrics [17] are used to investigate how accurately a decision approach works. An adaptation decision approach can terminate in two possible states: warning or silence (refer to Figure 3). Using the contingency table, as shown in Table II, a warning is defined as a positive decision (P), which asserts that the global SLA will be violated in the near future; by contrast, a silence is formally a negative decision (N), which concludes that no adaptation is needed for the entire duration of the execution. In order to evaluate the quality of decisions, no adaptation plan is actually identified or executed. For a positive decision, if a violation really occurs in the end, it is proved to be a true positive (TP); otherwise, it is a false positive (FP). Similarly, a negative decision can be either a true negative (TN), if no violation happens at the end of the execution, or else a false negative (FN). Based on the contingency table, different evaluation metrics are defined as follows:
- **Accuracy ($a$)**. It is the ratio of all correct decisions to the number of all decisions: $a = \frac{TP + TN}{TP + FN + FP + TN}$.
- **Precision ($p$)**. It is the ratio of all correct warnings to the number of all warnings, $p = \frac{TP}{TP + FP}$.
- **Effectiveness ($e$)**. It is the ratio of all correct silences to the number of all silences: $e = \frac{TN}{TN + FN}$.
- **Decision Time ($dt$)**. Only positive decisions have $dt$. It is measured by the maximum number of tasks that have already been completed on different execution paths.
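The three ratio metrics follow directly from the contingency-table counts; a small sketch (function name and the sample counts are illustrative):

```python
def decision_metrics(tp, fp, tn, fn):
    """Accuracy, precision and effectiveness from contingency counts.

    accuracy      = correct decisions / all decisions
    precision     = correct warnings  / all warnings (TP + FP)
    effectiveness = correct silences  / all silences (TN + FN)
    """
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    precision = tp / (tp + fp)
    effectiveness = tn / (tn + fn)
    return accuracy, precision, effectiveness

decision_metrics(8, 2, 6, 4)  # (0.7, 0.8, 0.6)
```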
TABLE II
CONTINGENCY TABLE
<table>
<thead>
<tr>
<th></th>
<th>Real: violated</th>
<th>Real: not violated</th>
</tr>
</thead>
<tbody>
<tr>
<td>Prediction: violated (warning)</td>
<td>True Positive (TP) (correct warning)</td>
<td>False Positive (FP) (false warning)</td>
</tr>
<tr>
<td>Prediction: not violated (silence)</td>
<td>False Negative (FN) (false silence)</td>
<td>True Negative (TN) (correct silence)</td>
</tr>
<tr>
<td>Sum</td>
<td>Violations (V)</td>
<td>Non-violations</td>
</tr>
</tbody>
</table>
TABLE III
EXPERIMENTAL RESULTS: EVALUATION OF DECISION TIMING
<table>
<thead>
<tr>
<th>Metrics</th>
<th>P1</th>
<th>P2</th>
<th>P3</th>
<th>P4</th>
<th>P5</th>
<th>P6</th>
<th>P7</th>
<th>P8</th>
<th>P9</th>
<th>P10</th>
</tr>
</thead>
<tbody>
<tr>
<td>TP</td>
<td>191</td>
<td>243</td>
<td>171</td>
<td>211</td>
<td>165</td>
<td>191</td>
<td>397</td>
<td>441</td>
<td>406</td>
<td>359</td>
</tr>
<tr>
<td>FN</td>
<td>250</td>
<td>198</td>
<td>270</td>
<td>230</td>
<td>276</td>
<td>250</td>
<td>44</td>
<td>35</td>
<td>38</td>
<td>79</td>
</tr>
<tr>
<td>P</td>
<td>0.59</td>
<td>0.66</td>
<td>0.67</td>
<td>0.73</td>
<td>0.66</td>
<td>0.71</td>
<td>0.92</td>
<td>1.0</td>
<td>0.8</td>
<td>0.84</td>
</tr>
<tr>
<td>e</td>
<td>0.43</td>
<td>0.55</td>
<td>0.39</td>
<td>0.48</td>
<td>0.37</td>
<td>0.43</td>
<td>0.90</td>
<td>1.0</td>
<td>0.92</td>
<td>0.81</td>
</tr>
<tr>
<td>p</td>
<td>0.72</td>
<td>0.74</td>
<td>0.90</td>
<td>0.92</td>
<td>0.89</td>
<td>0.93</td>
<td>1.0</td>
<td>0.71</td>
<td>0.86</td>
<td>0.40</td>
</tr>
<tr>
<td>dt</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>3</td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>2.95</td>
<td>4.0</td>
</tr>
</tbody>
</table>
C. Experiment 1: Evaluation of the Best Decision Time
In this experiment, the best decision time is evaluated by disabling the decision phase, so that an adaptation decision is made as soon as an SLA violation is suspected. The evaluation is based on the illustrative example defined in Figure 1: after each task $t_i$ ($1 \leq i \leq 8$), a checkpoint $C_i$ is defined as a possible decision time point. Eight predictors are created, each based on a single checkpoint: predictor $P_i$ ($1 \leq i \leq 8$) can decide only when the execution of the workflow reaches checkpoint $C_i$. Note that the decisions of $P_8$ are always accurate, because it decides after the completion of the workflow execution. In addition, two predictors are defined based on multiple checkpoints: $P_9$ is activated at both $C_2$ and $C_7$, while $P_{10}$ can decide at either $C_4$ or $C_6$. We ran 1,000 simulations; Table III summarizes the performance of all predictors.
1) Decisions based on a single checkpoint: The results reveal that, using a single checkpoint, it is hard to make adaptation decisions that are both early and accurate. First of all, as discussed above, early decisions are less accurate: due to the large number of FP and FN decisions, $P_1$ and $P_2$ yield lower accuracy, precision and effectiveness. By contrast, since most of the workflow has been executed by then, $P_7$ performs much better, but its decisions come too late to carry out effective preventive adaptations. Furthermore, the other four predictors are based on checkpoints located on two parallel execution branches, and the critical path can only be determined at runtime due to on-demand service selection. If a task (checkpoint) is not on the critical path, it has a larger slack time and a longer delay can be tolerated. Therefore, from its local perspective, the execution is considered to be running well and no warning is reported. That explains why $P_3$, $P_4$, $P_5$ and $P_6$ have fewer FP decisions but a tremendous number of FN decisions, which leads to poor performance on average.
2) Decisions based on multiple checkpoints: An additional checkpoint brings another chance to report (either true or false) warnings of SLA violations. Take $P_9$ for example: if a violation fails to be predicted at $C_2$, there is still a chance for it to be alerted at $C_7$; thus, compared to $P_2$, the TP number is greatly improved, while only a few additional FP decisions are produced. On the other side, compared to $P_7$, $P_9$ suffers some quality degradation, but its decision time is improved by two execution steps. The same holds for $P_{10}$: compared to $P_4$ and $P_6$, both accuracy and effectiveness are significantly improved. However, as an extreme case, assume a predictor $P_{all}$ that can make adaptation decisions at all possible checkpoints. It is similar to the runtime verification techniques introduced in Section III: whenever a deviation is estimated, an adaptation decision is made. In this case, whenever any predictor from $P_1$ to $P_8$ decides, $P_{all}$ also decides. Thus, $P_{all}$ cannot have fewer FP decisions than any single-checkpoint predictor, and such a high FP number results in poor precision. From the above discussion, we can see that it is challenging to determine the best time for making adaptation decisions that are both early and accurate.
D. Experiment 2: Evaluation of Our Approaches
In the second experiment, both the static and the adaptive decision strategies presented in this paper are evaluated. The qualitative predictor $P_{ql}$ implements the qualitative strategy with the weight threshold $\rho_w$ set to 0.65. The quantitative predictor $P_{qt}$ defines the evaluation function as $\text{max}_d = (1 - \tau_s) \cdot gc_t \cdot \rho$, where $\tau_s$ is the weight of a suspicion, $gc_t$ is the expected execution time of the SBA defined in the global SLA, and $\rho$ is set to 5%. This function tolerates greater deviations at the beginning of the execution (like $f_2$ in Figure 5). Additionally, the hybrid predictor $P_{hb}$ invokes $P_{qt}$ when the weight of a suspicion is less than 0.65, and uses $P_{ql}$ otherwise. Finally, the adaptive predictor $P_{ad}$ implements a set of classifiers based on the WEKA machine learning toolkit [15]. All the decisions reported in the first experiment are used to generate a set of suspicion records, which are organized in the dataset file.
During the training phase (Step 1 in Figure 6), all classifiers are used to learn from the dataset, and their performance is evaluated by cross-validation. When a new suspicion is reported, the classifier with the best predictive accuracy is used for the prediction (Step 2 in Figure 6). Each classifier requires at least 300 historical suspicion records ($min\_training\_size=300$); if the dataset contains more than 1000 records, it selects the 1000 most recent items ($max\_training\_size=1000$). Retraining is carried out after every 200 predictions.
Another 1000 executions are simulated to evaluate our approach. As a comparison, the three predictors with the best decision quality in the first experiment, namely $P_7$, $P_9$ and $P_{10}$, are also used. The performance of the different predictors is shown in Table IV. First of all, we can see that the quality of all static and adaptive decision strategies is on the same level. Note that: 1) $P_{ql}$ decides later than the other three strategies, since it can decide only after a certain part of the workflow has been executed. 2) As introduced above, $P_{hb}$ is more critical than $P_{ql}$ and $P_{qt}$; thus it accepts more suspicions, which results in a lower precision but a higher effectiveness. 3) The static strategies successfully prevent almost all SLA violations (in this experiment, no SLA violation was missed). Meanwhile, the adaptive strategy has a low rate of false silences (5%).
TABLE IV
EXPERIMENTAL RESULTS: EVALUATION OF DIFFERENT DECISION STRATEGIES
<table>
<thead>
<tr>
<th>Metrics</th>
<th>P1</th>
<th>P2</th>
<th>P3</th>
<th>P4</th>
<th>P5</th>
<th>P6</th>
<th>P7</th>
<th>P8</th>
<th>P9</th>
<th>P10</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>0.91</td>
<td>0.82</td>
<td>0.84</td>
<td>0.94</td>
<td>0.95</td>
<td>0.92</td>
<td>0.94</td>
<td>0.95</td>
<td>0.95</td>
<td>0.94</td>
</tr>
<tr>
<td>e</td>
<td>0.93</td>
<td>0.93</td>
<td>0.81</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
<td>0.95</td>
<td>0.95</td>
</tr>
<tr>
<td>p</td>
<td>0.91</td>
<td>0.74</td>
<td>0.87</td>
<td>0.89</td>
<td>0.91</td>
<td>0.86</td>
<td>0.93</td>
<td>0.86</td>
<td>0.86</td>
<td>0.93</td>
</tr>
<tr>
<td>dt</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
</tr>
</tbody>
</table>
Secondly, compared to the checkpoint-based predictors, the results show that our approach makes more accurate adaptation decisions as early as possible. First, the decision quality of our approach is on the same level as $P_7$, but our approach improves the decision time by more than one execution step. Second, $P_9$ decides slightly earlier than our approaches (by less than one execution step), but our approach achieves a remarkable improvement in accuracy (over 10%), effectiveness (5% higher on average) and precision (15% better). Finally, compared to $P_{10}$, our approaches have almost the same decision time but better accuracy ($\approx 10\%$) as well as higher effectiveness ($\approx 15\%$).
E. Experiment 3: Evaluations over Different Workflows
In order to evaluate the performance of our approach over different workflows, we create three fictitious workflows without real-world semantics: 1) a linear workflow with 9 tasks (a single execution path); 2) a medium workflow with 17 tasks and 6 execution paths; 3) a complex workflow with 30 tasks and 6 execution paths. For each workflow, we first simulate 300 executions to initialize the dataset of suspicion records, and then 1000 simulations are carried out using $P_{ql}$, $P_{qt}$, $P_{hb}$ and $P_{ad}$. Table V and Table VI summarize respectively the accuracy and the precision of the different predictors. From the experimental results, we make the following observations: 1) our approach is not limited to a specific workflow and performs well for different kinds of workflows; 2) the adaptive strategy makes decisions based on knowledge learned from past executions, so its performance is maintained at a high level across different workflows; 3) the performance of the static strategies depends on the predefined evaluation function: as the second experiment demonstrates, a suitable evaluation function can perform as well as the adaptive strategy; otherwise, it may suffer a slight performance degradation but still remains fairly good (compared to the other approaches).
VII. CONCLUSION AND FUTURE WORK
This paper discusses an important problem in preventing SLA violations: how to make correct runtime decisions to accurately trigger preventive adaptation for on-demand SBA execution. We have presented an online prediction approach which makes adaptation decisions through a two-phase evaluation. Based on a series of realistic simulations, our approach exhibits the following desirable properties: 1) it makes accurate adaptation decisions for different workflows: almost all violations can be successfully predicted and alerted, while only a few unnecessary adaptations are triggered; 2) at the same accuracy level, our approach decides as early as possible; 3) by using the static strategies, our approach can still make accurate and timely adaptation decisions when no (or insufficient) historical information is available.
Our future work will concentrate on providing more flexible runtime management of SBA. First of all, the PERT technique can be used for runtime optimization of SBA instances. For example, using PERT charts, it is easy to detect a large slack time for a task, which can then be rebound to a slower but cheaper service in order to reduce the cost. Furthermore, we are integrating branch prediction techniques into PERT charts, which can help to further reduce unnecessary adaptations: the reconstruction of the PERT chart will take into account the paths that are most likely to be executed. Finally, having obtained positive results on realistic simulations, we are going to implement and integrate our approach into an existing business process management system and evaluate its performance.
ACKNOWLEDGMENT
The research leading to these results has received funding from the European Community’s Seventh Framework Programme [FP7/2007-2013] under grant agreement 215483 (S-CUBE).
REFERENCES
ZhongYin Zhang, MS
University of Nebraska, 2014
Adviser: Gregg Rothermel and Witty Srisa-an
As growth in microprocessor clock speeds tails off, utilizing multiple processing cores per chip is becoming a common way for developers to achieve higher performance. However, writing concurrent programs can be a big challenge because of common concurrency faults. Because concurrency faults are hard to detect and reproduce, traditional testing techniques are not suitable; new techniques are needed, and these must be assessed. A typical method for assessing testing techniques is to embed faults in programs using mutation tools, and assess the ability of the techniques to detect these. Although mutation testing techniques can be used to represent common faults, approaches for representing concurrency faults have not been created. In this thesis, we introduce a methodology for injecting mutations related to concurrency faults, focusing on four common concurrency fault patterns as mutant operators. We implement the approach in the Eclipse IDE. We empirically study our approach’s effectiveness by using it to seed various types of concurrency faults based on the four fault patterns in a set of programs. This approach generates many times more mutants than can be seeded by hand. We then execute the original programs and these mutants, and characterize the mutants in terms of detectability as part of our study. The results show that the proposed concurrent fault injection tool (CFIT) is feasible and efficient.
I would like to thank my two advisors. Dr. Gregg Rothermel and Dr. Witty Srisa-an for their invaluable guidance, support and encouragement over the past few years.
Dr. Gregg Rothermel led me into a wonderful research area and taught me how to do rigorous research. As a good mentor, he provided me with valuable advice all the time and gave me advice and support when I ran into trouble in my life. I would like to thank him for everything he did for me.
Dr. Witty Srisa-an has also had a great influence on me. He led me into the gorgeous system’s area and taught me how to be a good programmer. I would like to thank him for his infinite encouragement and patience. Without him, I can not imagine how I could achieve the goal that I previously thought was not possible. All his advising and mentoring are valuable to me and I will remember it now and forever. I do not know how I can possibly thank him enough. I hope I can repay him by making him proud of me in the future.
I would like to thank Dr. Anita Sarma for offering time to serve as my committee member, reviewing my thesis and delivering me valuable feedback and suggestion.
I would like to thank all my friends in the Esquared lab and UNL, Tingting Yu, Pingyu Zhang, Jianguo Wang, Jian Hu, Yin Guo, Miao Zhen, Nic Colgrove, Thammasak Thianniwet, Yalan Liang, etc. I can not imagine how I could survive without you through all these years. Especially, I would like to thank Jianguo Wang, Jian Hu, Yin Guo and Pingyu Zhang for huge help when I had trouble. I appreciate everything they did for me.
Finally, I would like to thank my family, Mom, Dad, and my loving spouse - Zhen Hu, for their infinite love and encouragement.
Contents

List of Figures
List of Tables
1 Introduction
2 Background
2.1 Types of Concurrency Faults
2.2 Mutation Testing
2.3 Mutation Testing Tool
2.4 Eclipse
2.4.1 Architecture of Eclipse Plug-ins
2.4.1.1 Extension Points
2.4.1.2 Extensions
2.4.2 C/C++ Development Tooling (CDT)
2.4.2.1 Visitor Pattern API for ASTs
3 Design and Implementation
3.1 Fault Patterns
3.1.1 Remove Unlock
3.1.2 Remove Lock
3.1.3 Remove Paired Lock and Unlock (Critical Section Violation)
3.1.4 Switch Lock Order
3.2 Implementation of a Concurrency Fault Injection Tool
3.2.1 CFIT Architecture
3.2.1.1 Injection Action Extension
3.2.1.2 Mutation System
3.2.1.3 Mutant Property
3.2.1.4 Database
3.2.1.5 Hibernate
3.2.1.6 CFIT Working Process
4 Empirical Study
4.1 Purpose of Study
4.2 Objects of Study
4.3 Study Operation
4.4 Result
4.5 Discussion
5 Conclusion and Future Work
Bibliography
List of Figures

2.1 Deadlock circular wait
2.2 Eclipse plug-ins
2.3 Extensions and extension points
3.1 Bug patterns
3.2 Concurrent fault types
3.3 Class ASTVisitor in DOM AST
3.4 Class LockManagementVisitor
3.5 CFIT procedure
3.6 Snapshot of programs after modifications made by CFIT
4.1 Experiment procedure
List of Tables

3.1 Concurrency Fault Taxonomy
4.1 Mutant Data
4.2 Remove Unlock
4.3 Remove Lock
4.4 Remove Paired Lock and Unlock
4.5 Switch Lock Order
4.6 The total numbers of detected mutants based on all 4 mutant operators and the number of detected mutants based on the percentage of test cases
4.7 Deadlocks for Base Program
Chapter 1
Introduction
As growth in microprocessor clock speeds tails off, utilizing multiple processing cores per chip is becoming a common way for developers to achieve higher performance. To do this, developers shift from writing sequential code to employing thread-level parallelism. Writing dependable concurrent programs can, however, be challenging, because improper synchronization of access to shared resources can lead to runtime errors such as deadlocks, critical section violations, livelock, and starvation, which are difficult to detect, isolate, and correct before deployment.
Typically, a concurrent program consists of two or more processes or threads that cooperate in performing a task[8]. Since there are multiple processes or threads executing simultaneously, shared variables or resources may be accessed concurrently. Without proper protection, these accesses can result in intermittent runtime errors that occur only under specific execution interleavings or upon occurrences of specific events.
Currently, there are many techniques used to detect concurrency faults, such as data race detection[13][28][33], atomicity violation detection[14], pattern analysis[25], and fault localization[26][37][31]. Moreover, common testing techniques such as performance testing and stress testing are often used to deal with concurrency faults. However, performance testing and stress testing are very time consuming, and it can be difficult to reproduce the concurrency faults they detect. Thus, we need better testing techniques to address concurrency issues.
Currently, researchers use mutation testing approaches to represent common but hard to detect faults, in order to make testing more efficient. Mutation testing is a fault-based software testing technique that uses mutants that slightly modify a piece of code in a program to check the quality of a new testing technique and reproduce faults that are hard to detect[16][11]. There are several existing approaches for defining mutation operators for concurrent programs[35][27][15][24][38][22]; however, these approaches still rely on using manually injected mutants and output-based test oracles.
Injecting mutants manually is neither efficient nor complete, especially when it is applied to modern concurrent software systems that tend to have large code bases. In addition, output based oracles are not sufficient because occurrences of concurrency faults do not always lead to erroneous outputs; therefore, they often elude traditional testing approaches that rely on output-based oracles for fault detection. As such, internal test oracles, which detect faults by monitoring aspects of internal program and system states[39] can be more effective for detecting these types of faults.
In previous work[39], Yu et al. empirically investigated the use of internal test oracles based on manually seeded mutants in 5 applications. The results show that internal oracles can be more effective than output-based oracles. However, because manual seeding of mutants is time consuming and inaccurate, an automatic concurrency fault seeding tool is necessary. In this thesis, we introduce an automatic concurrent fault injection tool (CFIT) based on an Eclipse plug-in for C/C++. We use four common concurrent fault patterns as mutant operators. We then empirically study our tool’s effectiveness by using it to seed various types of concurrency faults based on the four fault patterns in the same five programs. This approach generates many times more mutants than can be seeded by hand. We then execute the original programs and these mutants, and characterize these mutants as part of our study. The results show that the proposed tool, CFIT, is feasible and efficient.
The remainder of this thesis is organized as follows. Chapter 2 provides background information relevant to the remainder of the thesis. Chapter 3 describes the design and implementation of our concurrent fault injection tool (CFIT). Chapter 4 presents our empirical study. Conclusions and future work are discussed in Chapter 5.
Chapter 2
Background
In this chapter, we discuss background information related to this work. First, we describe and provide examples of common concurrency faults. We then describe mutation testing approaches and existing tools to support such testing. Last, we provide an overview of the Eclipse plug-in architecture.
2.1 Types of Concurrency Faults
In this section, we describe four types of concurrency faults: critical section violations, deadlock, livelock, and starvation.
Critical section violations occur when two or more processes or threads attempt to access and update a shared resource at the same time. This situation is very common in multi-threaded or multi-process systems. This type of fault occurs when shared resources are not properly protected by lock operations that synchronize concurrent access to those resources. As an example, suppose there are two processes P1 and P2, both of which can perform write operations on a variable \( a \). Initially, \( a \) is set to 0. If \( a \) is not properly protected, both P1 and P2 can concurrently write to \( a \). Thus, the two
processes race to update the shared resource. As such, the final value of \( a \) depends on who has the last access. Code snippet A provides an example of this type of common data race in an application. Function autoIncrement updates global variable \( a \). In a scenario where two threads execute autoIncrement simultaneously, the final value of \( a \) may not be 2.
```c
int a = 0;

void autoIncrement() {
    //lock();
    a++;
    //unlock();
}

int main() {
    autoIncrement();
    return 0;
}
```
**Code snippet A**
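To make the race concrete, here is a minimal runnable sketch of Code snippet A using C++11 threads (our illustration, not code from CFIT); the `use_lock` flag corresponds to uncommenting the lock()/unlock() calls above:

```cpp
#include <mutex>
#include <thread>

// Two threads increment a shared counter. With use_lock == true the
// increment is protected by a mutex; with use_lock == false the code
// mirrors the seeded fault in Code snippet A.
long run_counter(bool use_lock, int iters) {
    long counter = 0;
    std::mutex m;
    auto worker = [&] {
        for (int i = 0; i < iters; i++) {
            if (use_lock) m.lock();
            counter++;                 // unsynchronized when use_lock is false
            if (use_lock) m.unlock();
        }
    };
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    return counter;
}
```

With `use_lock` true the result is always 2 × iters; with it false, concurrent updates can be lost and the final count may fall short of that.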
Deadlock is a situation in which more than one thread or process are blocked permanently because each is waiting to access a shared resource that is blocked by one of the others at that time. There are four conditions that must be met for deadlock to occur: mutual exclusion (only one process or thread can access a shared resource in a critical section at a time), hold-and-wait (a process or thread may hold a shared resource while awaiting assignment of other resources), no preemption (no resource can be released from a process or thread holding it) and circular wait (each process or thread holds at least one shared resource requested by the other processes or threads)[36]. An example is provided in Figure 2.1. There are two shared resources, RS1 and RS2, and two processes, P1 and P2; RS1 is held by P1 and RS2 is held by P2. There is no preemption and each process has exclusive access to the held resource.
Because P2 needs RS1 which is held by P1, and P1 needs RS2 which is held by P2, a circular wait occurs. Code snippet B indicates a common instance of such a deadlock scenario in an application. If two threads are used in this program, there is a specific interleaving sequence T1(1), T2(6), T1(2), T2(7) that can cause a deadlock to occur.
```c
void RS1() {
...
1. lock1();
2. lock2();
3. // critical section.
4. unlock2();
5. unlock1();
...
}
void RS2() {
...
6. lock2();
7. lock1();
// critical section
8. unlock2();
9. unlock1();
...
}
```
**Code snippet B**
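One standard repair for the circular wait above is a combined, deadlock-avoiding acquisition of both locks. The sketch below (ours, using C++11 `std::lock` rather than the thesis's pseudo-API) lets two threads name the mutexes in opposite orders, as RS1 and RS2 do, yet never deadlock:

```cpp
#include <mutex>
#include <thread>

std::mutex lock1, lock2;
int completed = 0;

// Mirrors RS1: names lock1 first. std::lock acquires both mutexes with a
// deadlock-avoidance algorithm, breaking the circular-wait condition.
void rs1_style() {
    std::lock(lock1, lock2);
    std::lock_guard<std::mutex> g1(lock1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(lock2, std::adopt_lock);
    ++completed;                       // critical section
}

// Mirrors RS2: names lock2 first, which would deadlock with plain
// lock()/lock() calls interleaved against rs1_style.
void rs2_style() {
    std::lock(lock2, lock1);
    std::lock_guard<std::mutex> g1(lock2, std::adopt_lock);
    std::lock_guard<std::mutex> g2(lock1, std::adopt_lock);
    ++completed;
}

// Runs both threads; returns 2 when both completed without deadlock.
int run_both() {
    completed = 0;
    std::thread t1(rs1_style), t2(rs2_style);
    t1.join();
    t2.join();
    return completed;
}
```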
A livelock is similar to a deadlock except that the processes or threads are not blocked permanently by each other. Rather, they are constantly processed by the CPU. An example is when a spinlock, rather than a blocking lock, is used to synchronize a region. Two threads can be spinning on a lock: both are executing on the processor, but without making any progress toward completion.
Starvation is a situation in which a process or thread can never access shared resources. As an example, suppose three processes with three different priority levels need to access a shared resource. If the process with the highest priority keeps using the resource, the other two lower priority processes would not be able to access the resource.
### 2.2 Mutation Testing
Mutation testing is a fault-based software testing technique that uses mutants that slightly modify a piece of code of the program to check the quality of a new testing technique and reproduce faults that are hard to detect[16][11]. Mutation testing has been studied since 1977. Mutation testing is based on the Competent Programmer Hypothesis and the Coupling Effect Hypothesis[11]. The Competent Programmer Hypothesis assumes that programmers are competent and write programs that are close to being correct. A correct program can be created from an incorrect program that includes syntactically small faults and with a few small code modifications. The
Coupling Effect Hypothesis indicates that test cases that distinguish all programs differing from a correct one by only simple errors are so sensitive that they can distinguish programs with more complex differences. So mutation testing can be used to simulate complex real-world bugs, especially for bugs that are hard to detect and reproduce.
2.3 Mutation Testing Tool
Without a fully automated mutation testing tool, creating mutants can be a cumbersome process, especially for large programs. Therefore, the development of mutation testing tools is necessary, and various such tools have been developed. MuJava[27] is a mutation tool for Java that includes class-level operators. MOTHRA[12] is a mutation testing tool for Fortran. MILU[21] is an efficient and flexible mutation testing tool designed for both first order and higher order mutation testing in C. Jester, the first open source mutation testing tool for Java, has two very similar mutation operators: one changes 0 to 1 and the other replaces predicates with TRUE and FALSE[18]. Pester[18] is a Python version of Jester, and Nester[1] is an open source tool for C#. Moreover, several mutation tools, such as INSURE++[30], PLEXTEST[20], and CERTITUDE[9], are available commercially.
2.4 Eclipse
Eclipse is an integrated development environment (IDE). It is written mostly in Java. Typically, it consists of a base workspace and an extensible plug-in system for customizing the environment[2]. Plug-ins can be used to build arbitrary applications in different programming languages under different development environments. In
other words, everyone can contribute plug-ins and Eclipse can use its strong extensible plug-in system to integrate various features in a single working platform.
2.4.1 Architecture of Eclipse Plug-ins
Eclipse is not just a single working platform, but rather a small kernel with a plug-in loader surrounded by thousands of plug-ins. The kernel is based on a container implemented by OSGi R4 that provides the environment to control plug-in execution[10]. Each plug-in contributes itself in a structured manner; it may be based on services provided by other plug-ins, and may in turn provide services on which other plug-ins rely. An Eclipse plug-in typically consists of two kinds of components: extensions and extension points. The concept of extensions and extension points allows functionality to be contributed to plug-ins by other plug-ins (see Figure 2.2).
2.4.1.1 Extension Points
When a plug-in wants to allow other plug-ins to extend portions of its functionality, it declares an extension point. The extension point declares a contract, typically, a combination of XML markup and Java interfaces, that extensions must conform to[2]. Plug-ins must implement that contract in their extension if they want to plug in to that extension point.
2.4.1.2 Extensions
An extension is a contribution that a plug-in makes to an extension point declared elsewhere. Typically, the plug-in provides the extension according to the contract defined by the corresponding extension point. Extensions can be either code or data (see Figure 2.3).
Figure 2.2: Eclipse plug-ins
2.4.2 C/C++ Development Tooling (CDT)
As we mentioned, in Eclipse everything is a contribution (plug-in). Because of its strong extensible plug-in system, Eclipse is not only an IDE for Java programming, but also an IDE for other popular programming languages like C++ and PHP. When Eclipse was used only as a Java programming IDE, the development tooling in Eclipse was the Java Development Tooling (JDT). When Eclipse became a general application platform, each programming language provided its own corresponding development tooling. For C/C++, the C/C++ Development Tooling (CDT) is an Eclipse plug-in that transforms Eclipse into a powerful C/C++ IDE, offering C/C++ developers many of the features Eclipse provides to Java developers. Basically, the core of CDT consists of a preprocessor, parsers (C/C++), an abstract syntax tree (AST), an AST rewrite API, semantic analysis (name resolution), an indexer and an Index API. The tool we create in this work is not development tooling or a compiler, so we rely on only three core parts of CDT: the preprocessor, which converts text into a token stream; the parsers (C/C++), which convert the token stream into an AST; and the AST itself, a representation of the syntactic structure of source code written in C/C++.
2.4.2.1 Visitor Pattern API for ASTs
An abstract syntax tree (AST) is a tree representation of abstract syntactic structure of source code written in a programming language[3]. Basically, an AST is used for semantic analysis where the compiler checks whether the element of the programming language is correctly utilized. However, traversing an AST is not an easy job. The problem here is that the type of each node is different. For example, the AST of $a = b + c$ has three different nodes, an assignment operator, a variable id and an arithmetic operator. Since each node may correspond to a class, the AST traversal may go through all the classes, which makes the program hard to read and maintain. The solution to this problem is to utilize a design pattern called the visitor pattern instead of sifting through all the classes. The visitor pattern lets us traverse the AST using different visitors. More accurately, each node of the AST has an accept method accepting a call from a visitor that performs its custom traversal. So we can use the visitor pattern to traverse a particular block, statement, expression or declaration in a source file.
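The accept/visit protocol described above can be sketched in a few lines. This is our minimal illustration of the visitor pattern, not CDT's actual ASTVisitor API; the node and visitor class names are hypothetical, with NameCollector playing the role the next chapter assigns to LockManagementVisitor:

```cpp
#include <string>
#include <utility>
#include <vector>

struct Visitor;

// Base AST node: every node accepts a visitor.
struct Node {
    virtual void accept(Visitor &v) = 0;
    virtual ~Node() {}
};

struct IdNode;
struct BinOpNode;

// One overloaded visit method per node type; defaults do nothing,
// so a concrete visitor overrides only the node types it cares about.
struct Visitor {
    virtual void visit(IdNode &) {}
    virtual void visit(BinOpNode &) {}
    virtual ~Visitor() {}
};

struct IdNode : Node {
    std::string name;
    explicit IdNode(std::string n) : name(std::move(n)) {}
    void accept(Visitor &v) override { v.visit(*this); }
};

struct BinOpNode : Node {
    Node *lhs, *rhs;
    BinOpNode(Node *l, Node *r) : lhs(l), rhs(r) {}
    void accept(Visitor &v) override {
        lhs->accept(v);   // recurse into children first
        rhs->accept(v);
        v.visit(*this);
    }
};

// Collects every identifier node whose name matches a target,
// e.g. the name of a lock function.
struct NameCollector : Visitor {
    std::string target;
    std::vector<IdNode *> hits;
    void visit(IdNode &n) override {
        if (n.name == target) hits.push_back(&n);
    }
};
```

Traversal is then a single call: build a collector, set its target, and pass it to the root node's accept method.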
Chapter 3
Design and Implementation
3.1 Fault Patterns
In this section, we present a set of fault patterns designed as mutants, with which to seed an otherwise fault-free C/C++ program. First, we created a concurrency fault taxonomy to identify the reasons for the most common concurrency faults. We used the ROS[4] bug repository as a resource to do this. ROS stands for Robot Operating System, and is a flexible framework for writing robotics software. To collect the most common concurrency faults, we used the terms deadlock, synchronization, mutex and race condition as keywords to query for faults related to concurrency. Table 3.1 presents data on keywords and real faults. Figure 3.1 presents the real reasons these faults occur. We can see that most concurrency faults are associated with lock() or unlock() methods. Figure 3.2 represents the most common fault types. We found that most faults can generate deadlocks or race conditions. So, according to the data we collected, we designed four types of mutant operators: Remove Unlock, Remove Lock, Remove Paired Lock and Unlock, and Switch Lock Order.
Table 3.1: Concurrency Fault Taxonomy

| Keywords Containing | Deadlock | Race Condition | Synchronization Mutex | Multiple Thread | Simultaneous | Total |
|---------------------|----------|----------------|-----------------------|-----------------|--------------|-------|
| Keywords            | 91       | 246            | 72                    | 189             | 62           | 55    |
| Related             | 5        | 15             | 3                     | 3               | 6            | 2     |
Figure 3.1: Bug patterns
3.1.1 Remove Unlock
Improper use of unlock or missing unlock faults are very common in concurrent programs. This type of fault occurs when developers do not use unlock() functions properly. For example, an unlock() may not be paired with its lock() in cases where interactions among threads are complicated. Moreover, this type of fault can cause deadlock. The Remove Unlock operator is the mutant used to delete one unlock method in a concurrent program to simulate a fault due to a missing unlock. Program A provides a simple example of this type of fault.
```
P1{
1. Lock(mutex);
2. x++;
3. ...
4. //Unlock(mutex); //fault
}
```
Program A
3.1.2 Remove Lock
Incorrect or missing locks are another very common type of fault in concurrent programs. This type of fault occurs due to improper use of lock operations in a program that may require multiple locks to be managed. The Remove Lock operator is the mutant used to delete locks in concurrent programs to simulate missing lock faults. Program B provides a simple example of this type of fault.
```
P2{
1. //Lock1() //fault
2.
3. Lock2()
4.
5. Lock3()
6.
}
```
Program B
3.1.3 Remove Paired Lock and Unlock (Critical Section Violation)
Critical section violations are a common fault in concurrent programs. This type of fault occurs if a critical section is not protected properly, allowing it to be accessed by multiple threads at one time. Typically, missing paired lock and unlock operations are the main cause of critical section violations. The Remove Paired Lock and Unlock operator is the mutant used to delete paired lock and unlock methods in the same block in concurrent programs to simulate faults due to critical section violations. Program C provides a simple example of this type of fault.
```
P3{
1. //Lock(mutex);   //fault
2. x++;
3. //Unlock(mutex); //fault
}
```
Program C
3.1.4 Switch Lock Order
Incorrect lock order is another cause of concurrency faults in concurrent programs. This type of fault occurs due to improper use of lock operations in programs that require multiple locks to be managed. The Switch Lock Order operator is the mutant used to change the lock order in the same block in a concurrent program, to simulate this class of fault. Program D provides a simple example of this type of fault. $M_{P4}$ represents the program after injecting a mutant.
```
P4{                          MP4{
1. Lock1();                  1. Lock2();  //switched
2. Lock2();                  2. Lock1();  //switched
3. // critical section       3. // critical section
4. Unlock2();                4. Unlock2();
5. Unlock1();                5. Unlock1();
}                            }
```
Program D
3.2 Implementation of a Concurrency Fault Injection Tool
The Concurrent Fault Injection Tool (CFIT) is our concurrency fault mutation system for the C/C++ programming languages. It automatically generates mutants for concurrent mutation testing based on the aforementioned fault patterns. CFIT is developed as an Eclipse plug-in. It can analyze single C/C++ source files or a whole C/C++ project. Mutants of a C/C++ file are generated inside conditional compilation constructs in the original source file and activated via an automatically generated mutant header file.
3.2.1 CFIT Architecture
CFIT consists of four components: Injection Action Extension, Mutation System, Mutant Property, and Database.
3.2.1.1 Injection Action Extension
The Injection Action Extension is a module that performs fault seeding. Its main GUI is in the form of a pop-up menu. It is an extension connecting to a particular extension point, org.eclipse.ui.popupMenus. This extension point is used to add new actions to context menus defined by other plug-ins. To use this plug-in, the user only
needs to right click the project that is the target for injected faults. Next, on the pop-up menu, the user selects the fault injection option. Mutants will be injected automatically and the mutant source file and mutant header file will automatically be generated in a user specified path (see Figure 3.6).
### 3.2.1.2 Mutation System
The Mutation System is the core component of CFIT. It consists of three parts: CDT parser, abstract syntax tree (AST) and mutant property.
The parser and AST we use are implemented in the C/C++ Development Tooling. Because CDT has a full C/C++ parser and AST, we decided to use them directly. The CDT parser is the component used to parse C/C++ source code: it takes a C/C++ program as input and parses the source into a token list, from which an abstract syntax tree is then built. However, because the official CDT does not let the user access the AST, we downloaded a development version of the CDT package which includes a test mode that lets the developer use a DOM AST component and a debugging component.
The main package for the C/C++ AST is called org.eclipse.cdt.ui.tests.DOMAST. It is located in a sub-project of CDT called org.eclipse.cdt.ui.tests. This package is mainly used for traversing an AST through a GUI so that a CDT developer can retrieve ASTNode information during development. Each C/C++ source file is represented by subclasses of the ASTNode class, and each specific AST node provides specific information about the object it represents. To traverse an AST and obtain node-specific information, we use the visitor pattern. This lets us write user-defined plug-ins that process the AST. We built subclasses extending the ASTVisitor class (Figure 3.3), an abstract base class whose subclasses traverse AST nodes by overriding the visit methods for the node types they care about. Moreover, because each of the built-in node classes of the CDT DOM AST has an accept(ASTVisitor) method, we do not need to build these accept methods ourselves. In other words, we only need to create a visitor object extending ASTVisitor and override the overloaded visit methods for each node type of interest, and then we can process the AST in any form we want.
Figure 3.4 provides an example, showing a subclass of ASTVisitor. The LockManagementVisitor class is used to obtain all lock methods in one IASTTranslationUnit and their AST node-specific information in a single C/C++ source file. IASTTranslationUnit is a compilable unit of source. Typically, we consider it to be the root of an AST. It accepts a user defined visitor class (e.g. LockManagementVisitor) and processes a particular traversal based on several overridden visitor methods. Since we can get ASTNode information such as line number, parent ASTNode, children ASTNodes, etc. in a source file, we can operate on any statements, expressions, or variables in any desired manner. For example, if we want to remove one specific lock method in a specific compound statement, we only need to get this specific lock method’s ASTNode information based on a user-defined visit method in a specific subclass that extends the ASTVisitor class. Then according to the ASTNode’s specific information, we can easily locate that lock method in a source file and insert the conditional compilation directives that implement the mutation using specific string operations.
Figure 3.3: Class ASTVisitor in DOM AST

Figure 3.4: Class LockManagementVisitor

3.2.1.3 Mutant Property

The Mutant Property is the component used to retrieve user-specified mutant operators as the input for the mutation system. As described earlier, we currently have four mutant operators: Remove Unlock, Remove Lock, Switch Lock Order and Remove Paired Lock and Unlock. We use the Java properties file format to set up the rules for mutants. To enable a mutant operator, we set its property value to “yes”; otherwise, we set it to “no”. For example, RemoveUnlock=yes tells the mutation system to activate the Remove Unlock pattern at runtime. Each time, we seed only one type of mutant: if one mutant operator is enabled, the other three must be disabled.
The mutant template is another Java properties file, used to obtain lock or unlock information for an application. For example, if we want to seed a Remove Unlock pattern in an application, we need to specify the unlock method name in the mutant template: Unlocker=pthread_mutex_unlock tells the mutation system to seed the mutant only where the unlock method name used by the application is pthread_mutex_unlock.
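As a hedged illustration, the two properties files described above might look as follows; apart from RemoveUnlock and Unlocker, the key names are our guesses at the format, not CFIT's documented keys:

```
# mutant operator properties: exactly one operator enabled at a time
RemoveUnlock=yes
RemoveLock=no
RemovePairedLockUnlock=no
SwitchLockOrder=no

# mutant template: method names used by the target application
Unlocker=pthread_mutex_unlock
```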
3.2.1.4 Database
Due to the large number of mutants generated by CFIT, we use a database to conveniently track each mutant’s specific information, including the name of the injected source file, the fault pattern, and the location of the mutant (line number).
3.2.1.5 Hibernate
Because our database is designed with respect to an object-relational mapping model, we chose the Hibernate ORM as our database framework. Hibernate is open source software providing a framework with which to map an object-oriented model to a traditional relational database [7]. However, due to the way Eclipse RCP (and Eclipse in general) delegates class loading to buddy plug-ins [5], it is necessary to wrap third-party libraries in a plug-in to ensure that the correct context class loading occurs at runtime. Because Hibernate is a third-party library for Eclipse RCP, importing it into a single Eclipse plug-in project will not activate the database. To solve this problem, we built a separate plug-in project just for the database layer, imported all the libraries into it, and used this standalone plug-in as a dependency of CFIT. The database can then be activated at CFIT run time.
3.2.1.6 CFIT Working Process
Figure 3.5 represents the working process of CFIT. The mutation system takes as input the AST from the CDT parser and the mutant properties the user has defined, and generates a mutant in the form of conditional compilation in the source code plus a mutant header file that serves as the mutant switch. Each mutant is named in the form “FaultMutantPattern_MutantId”. For example, if the mutant operator properties activate the Remove Unlock pattern and the mutant template sets the unlock method to pthread_mutex_unlock, Fault_Remove_Unlock_m0 will be generated in the form of conditional compilation; the name indicates that the mutant removes one unlock method from the source and that its id is 0.
As an example, Program E illustrates how a mutant is generated in a source file.
At the same time, a mutant switch corresponding to that mutant is generated in the mutant header file “sourceFileName_mutant.h”. Each mutant header file contains a number of mutant definitions, each prefixed with two slashes so that it is treated as a comment in a regular program. Each mutant is represented as a #define directive that defines a constant, creating a macro. To activate a mutant, we simply remove the two slashes, turning the comment into a macro.
Program F is a simple example showing how a mutant header file works; we combine programs E and F to show how a mutant is activated. In program F, when we remove the two slashes from the first line, #define FAULT_unlock_remove_m0 0 changes from comment to macro: FAULT_unlock_remove_m0 is now defined, with the constant value 0. Returning to program E, line 1 tests whether FAULT_unlock_remove_m0 is defined, so execution takes line 2, which does nothing, omitting the call to pthread_mutex_unlock(mutex), and then continues at line 5. If we turn the first line of the mutant header file back into a comment, the mutant FAULT_unlock_remove_m0 is closed, and rerunning program E executes line 4. Thus, when one type of mutant is selected, all feasible mutants are seeded in the source file as conditional compilation and listed in the mutant header file; a mutant is opened or closed simply by deleting the two slashes or adding them back.
```
P5{
...
1. #ifdef FAULT_unlock_remove_m0
2.
3. #else
4. pthread_mutex_unlock(mutex);
5. #endif
...
}
```
Program E
```
//#define FAULT_unlock_remove_m0 0
//#define FAULT_unlock_remove_m1 0
//#define FAULT_unlock_remove_m2 0
...
```
Program F
Figure 3.6: Snapshot of programs after modifications made by CFIT
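Because the switch mechanism relies only on standard C preprocessor semantics, its effect can be checked with a small self-contained probe. The sketch below is our own illustration, not CFIT output: the #define at the top plays the role of the uncommented line in the mutant header file, and pthread_mutex_trylock observes whether the unlock call was compiled out.

```c
#include <pthread.h>

/* Hypothetical mutant switch: this #define stands in for the
 * uncommented line in the mutant header file. */
#define FAULT_unlock_remove_m0 0

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

/* Returns 1 if the mutant left the mutex held (unlock removed),
 * 0 if the base-program unlock ran. */
int critical_section(void) {
    pthread_mutex_lock(&mutex);
#ifdef FAULT_unlock_remove_m0
    /* Mutant active: the unlock call is compiled out. */
#else
    pthread_mutex_unlock(&mutex);
#endif
    /* trylock on a default (non-recursive) mutex succeeds only if
     * the mutex was actually released above. */
    if (pthread_mutex_trylock(&mutex) == 0) {
        pthread_mutex_unlock(&mutex);
        return 0;
    }
    return 1;
}
```

With the macro defined as above, the probe reports that the mutex is still held, which is exactly the behavior the Remove Unlock mutant is meant to introduce.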
Chapter 4
Empirical Study
In this chapter, we provide an empirical evaluation of the proposed framework. We focus on its efficiency and its ability to generate challenging mutants that can be helpful in studying techniques for uncovering difficult-to-detect concurrency faults.
4.1 Purpose of Study
The purpose of this study is to evaluate the feasibility of using an automated injection tool in place of manually injecting concurrency faults in studies of testing, and to assess the efficiency of mutant generation and the characteristics of the mutants produced. We consider the following research questions:
**RQ1:** To what extent are mutants generated by CFIT detectable?

**RQ2:** Are the mutants sufficiently difficult to detect?

**RQ3:** Is our tool sufficiently efficient?
4.2 Objects of Study
To evaluate our tool and methodology, we chose five concurrent programs: BBUF, an implementation of the producer-consumer program; AGET, a multithreaded FTP download application; PFSCAN, a parallel file scanner; BZIP [6], a multithreaded compression program; and DININGPHILOSOPHER, an example from the Oracle Thread Analyzer [29]. We selected these programs because they include real-world programs, commonly used concurrency benchmarks, and examples from commercial tools. Furthermore, these applications have been used in prior studies of techniques for testing for concurrency faults [39].
Because our object programs are not distributed with test cases, we needed to generate test cases for them. We consider three factors in generating test cases: test input data, other relevant parameters, and specified thread execution interleavings [23]. Before generating a large number of test cases, we need to account for the four mutant operators used by CFIT. Each injected program contains a corresponding mutant specification in the form of a header file; it lists the mutants that have been injected into the program and supports enabling one particular mutant through a mutant generator program. For example, if one mutant header file includes 8 mutants, running the mutant generator program yields 8 different versions of that program. For each version of the program, we created a set of valid test input values and command options with numbers of threads ranging from 1 to 5.
For each of these test inputs, we assigned a thread interleaving by randomly selecting a set of program locations at the granularity of instructions. We randomly added yield points at these selected locations; this has a high probability of achieving determinism [23]. A yield point makes a thread voluntarily suspend its execution, creating an environment where interleavings happen more frequently and under much greater control by the tester. This is accomplished by injecting sleep calls for a finite amount of time so that the scheduler picks other threads to run, allowing the tester to control thread interleaving.
Program G provides a simple example of how a yield point works. Between lines 2 and 3, a sleep function call is inserted as the yield point to cause the current thread (*thread A*) to suspend execution for one second. Thus, another thread (*thread B*) is scheduled while *thread A* is sleeping, resulting in a controlled interleaving. To further explore different interleaving patterns at runtime, we generated 10 test cases with different yield points for each mutant.
```plaintext
...
1. movl (count), %eax;
2. addl $1, %eax;
sleep(1000); // Yield point
3. movl %eax, (count);
...
```
**Program G**
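The same yield-point idea can be sketched at the source level in C. This is an illustrative example of ours, not the study's instrumentation (which operates at instruction granularity): a brief sleep at a chosen location suspends the calling thread so the scheduler runs its sibling, while the mutex keeps the final count deterministic.

```c
#include <pthread.h>
#include <unistd.h>

#define ITERS 1000
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    usleep(1000);               /* yield point: hand the CPU to the sibling */
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;              /* protected increment stays deterministic */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Runs two workers and returns the final counter value. */
long run_workers(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Moving the usleep call to other locations steers which interleavings are exercised without changing the program's final result.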
The end result of this process is a relatively large number of test cases. For example, if we have 8 mutants for the RemoveUnlock pattern, there will be $8 \times 10 \times 5 = 400$ executions, where 8 is the number of mutants, 10 is the number of test inputs used for each mutant, and 5 is the number of thread counts (1 to 5). Moreover, these 400 executions cover only one mutant operator; if each of the 4 mutant operators yields 400 test cases, there would be $400 \times 4 = 1600$ unique test cases for that program.
4.3 Study Operation
Figure 4.1 illustrates the process we used to generate and execute test cases on all of the faulty versions of each object program, with one mutant activated per execution. We activate only one mutant in each execution to avoid fault interactions and masking effects, and to allow us to accurately determine whether each mutant was indeed detected. The basic procedure is as follows: (1) CFIT generates a number of mutant files, including mutant source files and corresponding mutant header files. (2) A mutant generator opens each mutant header file and generates a new version of the program for each specified mutant; these are then compiled. (3) TC Gen is the test case generation tool. Each test case consists of a yield point file generated by the yield point generator (YP Gen), a set of test inputs, a number of threads ranging from 1 to 5, and command options for each object program based on the mutant type. (4) We use Pin, a dynamic binary instrumentation tool [19], to execute the test cases. (5) We then employ an algorithm based on wait-for graphs [34] to detect deadlocks. If a circular-wait condition is detected, the deadlock detector reports the program, the specific test case, the specific mutant, and the number of threads that produced that particular deadlock. The system also creates an event log after each execution that can be used for further analysis.
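The deadlock check in step (5) amounts to cycle detection in the wait-for graph. The following is a minimal sketch of that check over an adjacency matrix; it is our own illustration, not the detector's actual implementation:

```c
#define MAX_THREADS 8

/* waits[i][j] != 0 means thread i is waiting for a lock held by thread j. */
static int visit(int waits[MAX_THREADS][MAX_THREADS], int n, int v,
                 int *state) {
    state[v] = 1;                                /* 1 = on current DFS path */
    for (int w = 0; w < n; w++) {
        if (!waits[v][w]) continue;
        if (state[w] == 1) return 1;             /* back edge: circular wait */
        if (state[w] == 0 && visit(waits, n, w, state)) return 1;
    }
    state[v] = 2;                                /* 2 = fully explored */
    return 0;
}

/* Returns 1 iff the wait-for graph over n threads contains a cycle. */
int has_deadlock(int waits[MAX_THREADS][MAX_THREADS], int n) {
    int state[MAX_THREADS] = {0};
    for (int v = 0; v < n; v++)
        if (state[v] == 0 && visit(waits, n, v, state))
            return 1;
    return 0;
}
```

A two-thread lock inversion (thread A waits for B while B waits for A) produces a two-node cycle and is flagged; a one-way wait is not.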
4.4 Result
Table 4.1 lists our five concurrent programs and data on their mutants. Column 1 is the name of the program, and Column 2 lists its number of lines of code. Columns 3 to 6 report the numbers of mutants generated by each mutant operator.
<table>
<thead>
<tr>
<th>Program</th>
<th>NLOC</th>
<th>Rm Unlock</th>
<th>Rm lock</th>
<th>Rm paired Lock and Unlock</th>
<th>Switch locks order</th>
</tr>
</thead>
<tbody>
<tr>
<td>BBUF</td>
<td>256</td>
<td>8</td>
<td>6</td>
<td>6</td>
<td>0</td>
</tr>
<tr>
<td>DIN.PHIL</td>
<td>104</td>
<td>5</td>
<td>4</td>
<td>3</td>
<td>0</td>
</tr>
<tr>
<td>AGET</td>
<td>846</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>PFSCAN</td>
<td>752</td>
<td>12</td>
<td>11</td>
<td>10</td>
<td>1</td>
</tr>
<tr>
<td>BZIP</td>
<td>4232</td>
<td>10</td>
<td>11</td>
<td>9</td>
<td>142</td>
</tr>
</tbody>
</table>
Table 4.1: Mutant Data
Tables 4.2–4.5 report results regarding the effectiveness of the proposed framework in creating challenging but detectable mutants; each table covers one fault pattern. In each table, Column 1 gives the name of the object program, Column 2 the number of mutants of that operator generated by CFIT, Column 3 the number of deadlocks detected after executing the mutants, and Column 4 the mutation score for that mutant type. The mutation score is the percentage of injected mutants that were detected (killed).
<table>
<thead>
<tr>
<th>Program</th>
<th>Rm Unlock</th>
<th>DLs Detected</th>
<th>Mutation Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>BBUF</td>
<td>8</td>
<td>6</td>
<td>75%</td>
</tr>
<tr>
<td>DIN.PHILO</td>
<td>5</td>
<td>5</td>
<td>100%</td>
</tr>
<tr>
<td>AGET</td>
<td>2</td>
<td>2</td>
<td>100%</td>
</tr>
<tr>
<td>PFSCAN</td>
<td>12</td>
<td>5</td>
<td>40%</td>
</tr>
<tr>
<td>BZIP</td>
<td>10</td>
<td>4</td>
<td>40%</td>
</tr>
</tbody>
</table>
Table 4.2: Remove Unlock
Table 4.2 shows the results for the Remove Unlock mutant operator. Deadlocks occur in all of the programs (see Column 3). However, except for DIN.PHILO and AGET, not all of the mutants are detected, or killed. The mutation score for BBUF is 75%, and for both PFSCAN and BZIP it is 40%.
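The scores in Column 4 follow directly from Columns 2 and 3: the mutation score is killed mutants over injected mutants, expressed as a percentage (the tables report rounded whole percentages). A small helper of our own illustrating the computation:

```c
/* Mutation score: percentage of injected mutants that were killed
 * (integer percent, truncated). */
int mutation_score(int killed, int injected) {
    return injected > 0 ? (100 * killed) / injected : 0;
}
```

For example, BBUF's 6 killed of 8 injected Remove Unlock mutants gives a score of 75%.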
<table>
<thead>
<tr>
<th>Program</th>
<th>Rm lock</th>
<th>DLs Detected</th>
<th>Mutation Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>BBUF</td>
<td>6</td>
<td>0</td>
<td>0%</td>
</tr>
<tr>
<td>DIN.PHILO</td>
<td>4</td>
<td>2</td>
<td>50%</td>
</tr>
<tr>
<td>AGET</td>
<td>2</td>
<td>0</td>
<td>0%</td>
</tr>
<tr>
<td>PFSCAN</td>
<td>11</td>
<td>3</td>
<td>27%</td>
</tr>
<tr>
<td>BZIP</td>
<td>11</td>
<td>0</td>
<td>0%</td>
</tr>
</tbody>
</table>
Table 4.3: Remove Lock
Table 4.5 reports the results of applying the Switch Lock Order mutant operator. Note that BBUF, DIN.PHILO and AGET have no mutants of this type: in these three applications, there is only one lock statement in each block. For PFSCAN, only one mutant is generated, and it is killed by our test cases. For BZIP, 94 of 142 mutants are killed.
Next we describe the results reported in Table 4.3 and Table 4.4. Both the Remove Lock and Remove Paired Lock and Unlock mutant operators failed to cause deadlock in BBUF, AGET, and BZIP. The results thus show that the Remove Unlock and Switch Lock Order operators cause deadlock more readily than Remove Lock and Remove Paired Lock and Unlock. Removing an unlock can cause a thread to hold a resource exclusively without releasing it, resulting in circular waits, and we also find that switching the order of two locks often results in a circular wait. Although the mutants generated by Remove Lock and Remove Paired Lock and Unlock are hard to kill, deadlocks still occur at run time, because these two operators can easily cause data races, and data races are a potential factor that can lead to deadlock.
<table>
<thead>
<tr>
<th>Program</th>
<th>Rm Paired Lock and Unlock</th>
<th>DLs Detected</th>
<th>Mutation Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>BBUF</td>
<td>6</td>
<td>0</td>
<td>0%</td>
</tr>
<tr>
<td>DIN.PHIL</td>
<td>3</td>
<td>3</td>
<td>100%</td>
</tr>
<tr>
<td>AGET</td>
<td>2</td>
<td>0</td>
<td>0%</td>
</tr>
<tr>
<td>PFSCAN</td>
<td>10</td>
<td>1</td>
<td>10%</td>
</tr>
<tr>
<td>BZIP</td>
<td>9</td>
<td>0</td>
<td>0%</td>
</tr>
</tbody>
</table>
Table 4.4: Remove Paired Lock and Unlock
<table>
<thead>
<tr>
<th>Program</th>
<th>Switch Lock Order</th>
<th>DLs Detected</th>
<th>Mutation Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>BBUF</td>
<td>0</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>DIN.PHIL</td>
<td>0</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>AGET</td>
<td>0</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>PFSCAN</td>
<td>1</td>
<td>1</td>
<td>100%</td>
</tr>
<tr>
<td>BZIP</td>
<td>142</td>
<td>94</td>
<td>66%</td>
</tr>
</tbody>
</table>
Table 4.5: Switch Lock Order
To further evaluate our mutation approach, Table 4.6 lists the total numbers of detected mutants across all 4 mutant operators, broken down by the percentage of test cases that detect them. For example, a mutant detected by all test cases is reported in the last column (80%–100%). To produce meaningful results in experiments on testing techniques, seeded faults should be neither too easy nor too hard to detect [17]. If they are too hard, mutants are unlikely to be killed by any test case and provide no ability to differentiate approaches. (Note, however, that some mutants that cannot be killed may actually be equivalent mutants, i.e., mutants whose behavior is equivalent to the base program’s.) Conversely, if mutants are too easy, almost any test case detects them and any testing technique is likely to find them, again providing no ability to differentiate approaches.
<table>
<thead>
<tr>
<th>Program</th>
<th>NMs</th>
<th>0.1-20%</th>
<th>20-40%</th>
<th>40-60%</th>
<th>60-80%</th>
<th>80-100%</th>
</tr>
</thead>
<tbody>
<tr>
<td>BBUF</td>
<td>6</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>DIN.PHIL</td>
<td>10</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>10</td>
</tr>
<tr>
<td>AGET</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>PFSCAN</td>
<td>10</td>
<td>8</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>BZIP</td>
<td>98</td>
<td>34</td>
<td>3</td>
<td>9</td>
<td>1</td>
<td>51</td>
</tr>
</tbody>
</table>
Table 4.6: The total numbers of detected mutants based on all 4 mutant operators and the number of detected mutants based on the percentage of test cases
We now turn to our research questions. We first consider whether mutants are detectable (RQ1). Based on the results reported in Tables 4.2 through 4.5, only in BBUF, AGET, and BZIP under the Remove Lock and Remove Paired Lock and Unlock patterns did mutants fail to cause deadlock; in the remaining programs, injected mutants did cause deadlocks. Although those mutants are not killed, we have not determined whether they are equivalent mutants or whether the test cases are simply not adequate to reach them; we leave this analysis as future work. In summary, our results show that a large proportion of the mutants generated by CFIT are detectable.
We now consider whether our mutants are too easily detectable (RQ2). Table 4.6 reports that around 34% of all mutants fall in the 0.1–20% detection-ratio category. Mutants in this category are detectable, but only by some test cases; detecting them can therefore be challenging\(^1\). However, on DIN.PHILO and AGET the results are less encouraging, with all mutants being easily detected. We believe this is due to the problem discussed earlier: most of the test cases for these programs are not strong enough to expose more difficult-to-detect mutants. As with RQ1, we will further investigate this issue in future work.
Finally, we consider whether our tool operates efficiently (RQ3). As a preliminary evaluation, we measured the time needed to inject 244 faults across all four fault patterns in all five applications: around 5 minutes. Manual injection would likely take far longer to perform the same task, so we conclude that the proposed CFIT is efficient.
<table>
<thead>
<tr>
<th>Program</th>
<th>DLs Detected</th>
</tr>
</thead>
<tbody>
<tr>
<td>BBUF</td>
<td>F</td>
</tr>
<tr>
<td>DIN.PHILO</td>
<td>T</td>
</tr>
<tr>
<td>AGET</td>
<td>F</td>
</tr>
<tr>
<td>PFSCAN</td>
<td>F</td>
</tr>
<tr>
<td>BZIP</td>
<td>F</td>
</tr>
</tbody>
</table>
Table 4.7: Deadlocks for Base Program
4.5 Discussion
Before discussing our results, we ran another experiment using the same test cases on the base programs. The results are shown in Table 4.7. A deadlock already exists in DININGPHILOSOPHER; the other four applications have no detectable deadlocks prior to applying CFIT.
\(^1\)We used increments of 20%, as previously used in [32].
We now discuss the results of our empirical study. With respect to mutation score, DININGPHILOSOPHER has the highest score across all applicable fault patterns (recall that the Switch Lock Order pattern is not applicable to this program). Its scores for Remove Unlock and Remove Paired Lock and Unlock are 100%; for Remove Lock, the score is only 50%, yet this is still the highest among the five programs. According to the results shown in Table 4.6, all of its killed mutants fall in the 80–100% category, implying that all mutants generated by CFIT for DININGPHILOSOPHER are easily detectable.
DININGPHILOSOPHER was released by Oracle as a test program for its Thread Analyzer tool, which analyzes the execution of multithreaded programs and can detect multithreaded programming errors such as data races and deadlocks in code written using the POSIX thread API, the Solaris thread API, OpenMP directives, or a mix of these [29]. As a test program, it already contained sources of deadlock before we injected mutants. Adding mutants therefore causes deadlock to occur even more easily, which is reflected in its high mutation score.
Next, we analyzed the mutation scores of the remaining four programs. For Remove Lock and Remove Paired Lock and Unlock, only PFSCAN has mutants (4 out of 21) that cause deadlock to occur; the other three programs have none. As mentioned above, the reason most mutants are not killed by our test cases is that some may actually be equivalent mutants, while other, non-equivalent mutants may fail to be killed due to inadequate test cases.
For Remove Lock and Remove Paired Lock and Unlock, we essentially remove protection from critical sections. This can result in data races, and data races can in turn lead to deadlocks. We found 4 mutants that cause deadlocks due to races.
Finally, we analyzed the mutation scores of Remove Unlock and Switch Lock Order patterns. Missing corresponding unlocks or incorrect lock orders are major fault patterns that can cause deadlocks in concurrent programs. This is because missing unlock operations can result in more mutually exclusive resources. Mutual exclusion is an important factor that can lead to deadlocks. Switching lock orders can also lead to more hold and wait instances in nested locking situations. According to Table 4.2 and Table 4.5, the experiment results indicate that mutants based on these two patterns are likely to cause deadlocks.
Chapter 5
Conclusion and Future Work
In this thesis, we have presented a methodology for injecting mutations related to concurrency faults. We built an automatic concurrent fault injection tool (CFIT) as an Eclipse plug-in for C/C++. In an empirical study, we evaluated the tool’s effectiveness by using it to seed various types of concurrency faults based on four fault patterns into five concurrent programs. Our results show that CFIT is feasible as a basis for empirically evaluating testing techniques.
In future work, we intend to incorporate more mutant operators into CFIT such as Shift Critical Section and Modify Mutex. We also intend to extend our study of internal oracles to take other concurrency faults into account such as critical section violations and starvation. Finally, we intend to perform more empirical studies to evaluate the effect of equivalent mutants and non-equivalent mutants that are not killed in our work.
5-20-2005
Adapting the Single-Request/Multiple-Response Message Exchange Pattern to Web Services
Michael Ruth
University of New Orleans
This Thesis is brought to you for free and open access by the Dissertations and Theses at ScholarWorks@UNO. It has been accepted for inclusion in University of New Orleans Theses and Dissertations by an authorized administrator of ScholarWorks@UNO. The author is solely responsible for ensuring compliance with copyright. For more information, please contact scholarworks@uno.edu.
ADAPTING THE SINGLE-REQUEST/MULTIPLE-RESPONSE MESSAGE EXCHANGE PATTERN TO WEB SERVICES
A Thesis
Submitted to the Graduate Faculty of the University of New Orleans in partial fulfillment of the requirements for the degree of
Master of Science in The Department of Computer Science
by
Michael Ruth
B.S. University of New Orleans, 2002
May, 2005
This thesis is dedicated to
Rebecca, Anthony,
my family and friends.
I would like to thank Dr. Shengru Tu, my advisor, for providing me with the guidance needed to see this research project through to its completion, and of course, not allowing me to give up.
I would also like to thank Dr. Vassil Roussev and Dr. Nauman Chaudhry for being a part of my thesis committee.
I would also like to thank Rebecca, my girlfriend, without whom I would still be wearing white socks with dress pants.
I would like to thank Anthony as well, for providing me a constant source of amusement every minute we are together.
Also, my friends have my gratitude for both keeping me safe, and preventing me from doing anything that may keep me out of office in the wild world of New Orleans nightlife.
Lastly, and most importantly, I would like to thank each and every member of my family for their support and understanding throughout. Without them I would not even have survived long enough to finish this work.
# Table of Contents

- List of Figures
- Abstract
- Chapter 1: Introduction
- Chapter 2: Background
- Chapter 3: Related Works
  - 3.1 Multithreading
  - 3.2 Specific Asynchronous Protocols
  - 3.3 Extensional Web Service Sub-Standard Protocols
  - 3.4 Web Service Based Client-Side Listeners
- Chapter 4: Framework
  - 4.1 Ideal Solution Objectives
  - 4.2 The Process: A Detailed Walkthrough
  - 4.3 Architectural Overview
- Chapter 5: Implementation Details
  - 5.1 Clearinghouse to Agent Communication
  - 5.2 Generalization of Return Types
  - 5.3 Generation Utilities
- Chapter 6: A Case Study
- Chapter 7: Performance Considerations
- Chapter 8: Conclusion
- References
- Vita
# List of Figures

- Figure 2.1: Conceptual Web Services Stack
- Figure 3.1: Example of WS-Callback SOAP Message
- Figure 3.2: Example of WS-Addressing SOAP Message
- Figure 4.1: Simplified Overview of Framework
- Figure 4.2: Collaboration Diagram (Callback Agent)
- Figure 4.3: Collaboration Diagram (Polling Agent)
- Figure 5.1: Class Diagram of the CWS and Supporting Classes
- Figure 6.1: Class Diagram of Agent and Supporting Classes
- Figure 6.2: Deployment Diagram of PO System
- Figure 6.3: Schema Definition of PurchaseOrderConfirmation Return Type
- Figure 6.4: Service Provider Marshalling the Object into an XML String
- Figure 6.5: Agent Unmarshalling the XML String into a POC Object
- Figure 7.1: Cost Comparison Diagrams
# Abstract

Single-Request/Multiple-Response (SRMR) is an important message exchange pattern because it can be used to model many real-world problems elegantly. However, SRMR messaging is not directly supported by Web services, and, because it requires Callback to function, it is hampered by current in-practice security schemes such as firewalls and proxy servers. In this thesis, a framework is proposed to support SRMR and Callback in the context of Web services and the realities of network security. The central component of the proposed solution is a Clearinghouse Web service (CWS), which serves as a communication proxy and realizes the correlation of responses with requests. A single CWS is needed per enterprise and can handle any number of SRMR Web services and their respective clients. Using the framework and its related code generation utilities, a non-trivial case study, a Purchase Order System, has been implemented.
# Chapter 1: Introduction
Web services have become the de facto means of enabling business-to-business (B2B) applications. They are interoperable building blocks enabling business process automation and integration across organizational as well as departmental lines [1]. Thanks to many efforts in academia and industry to enhance their functionality, Web services are outgrowing the synchronous request/response model and emerging as a flexible distributed computing platform in which the asynchronous model, as well as more complicated messaging patterns such as Single-Request/Multiple-Response messaging, plays an important role.
In the synchronous model, when the client calls the server, the client is blocked waiting for the result. In contrast, in the asynchronous model the client is not blocked: when the result of the call is produced, it is returned to the client at some later time. The fundamental pattern for realizing such asynchronous, loosely-coupled interactions is Callback.
The Single-Request/Multiple-Response (SRMR) message exchange pattern refers to message passing in a conversational manner. In the request/response message exchange pattern, there is a one-to-one correlation between request and response; in SRMR messaging, each request may result in many responses. SRMR messaging inherently requires the asynchronous model because every response after the first must be asynchronous. Message correlation is also required: since a single request yields multiple responses, each response must be correlated with the request that generated it.
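To make the correlation requirement concrete, the following Python sketch (illustrative only; the class and method names are hypothetical and not part of any Web service standard or of the framework developed in this thesis) demultiplexes interleaved responses from two outstanding requests by a correlation identifier (CID):

```python
from collections import defaultdict

class ResponseCorrelator:
    """Groups incoming partial responses by their correlation identifier (CID)."""

    def __init__(self):
        self._responses = defaultdict(list)

    def on_response(self, cid, payload):
        # Each response carries the CID of the request that generated it,
        # so responses from different requests can arrive interleaved.
        self._responses[cid].append(payload)

    def responses_for(self, cid):
        return list(self._responses[cid])

# Two outstanding requests whose responses arrive interleaved.
correlator = ResponseCorrelator()
for cid, payload in [(1, "deal-A"), (2, "file-X"), (1, "deal-B"), (2, "file-Y")]:
    correlator.on_response(cid, payload)

print(correlator.responses_for(1))  # ['deal-A', 'deal-B']
```

Without the CID, the four interleaved payloads above could not be attributed to their originating requests.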
In B2B applications, the proliferation of ever-increasing security measures has added an extra dimension to distributed computing. To realize the asynchronous model, and Callback in particular, the caller must be reachable from the external service provider. In typical enterprise installations, common security measures such as firewalls and proxy servers often prevent the client from being accessible from the outside. These measures severely handicap any implementation of the asynchronous model.
Recently, Web service composition and choreography, which attempt to support any interaction model [2], have been a strong trend in research and development. Composition and choreography assume that every business application is a Web service. However, the communication between asynchronous Web services and their client applications in the context of network security has been largely overlooked. Many business applications need to utilize available Web services but do not necessarily need to be exposed as Web services themselves. Promoting every application into a Web service may circumvent the firewall barrier, but such a practice is not feasible in many environments due to security concerns: most business processes should never be exposed outside the enterprise.
The focus of this thesis is on SRMR messaging in the context of both Web services and the realities of network security. Its importance lies in its applicability to many real-world problems. For example, a purchase order request may result in multiple sales deals, a document request may obtain multiple files, and a large dataset may have to be broken into more manageable pieces. These problems can be modeled elegantly with the SRMR message exchange pattern. In this thesis, the Web services that provide services using the SRMR model are simply called SRMR Web services.
An application-level framework to facilitate the use of the SRMR message exchange pattern will also be proposed in the context of Web services and the current in-practice security measures. This framework uses software design patterns at the following scalability levels: object, system, enterprise, and global [3]. The patterns used in the design process are Observer, Mediator, Proxy, Memento, Abstract Factory, Flyweight, and Strategy [4]; Half-Object Plus Protocol and Router [5]; and Gateway [3].
The centerpiece of this framework is a Clearinghouse Web service (CWS), which serves two distinct roles: (1) a proxy between the service provider Web service and the client application, and, more importantly, (2) a message manager that realizes the correlation between the responses coming from the service providers and the clients' requests. In this framework, a single CWS is required per enterprise and can handle any number of SRMR Web services and their respective clients. To do so, the proposed CWS is capable of handling any type of response from external Web services. The framework consists of the CWS, a set of client-side helper components, and a suite of code generation utilities. Using this framework and the code generation utilities, a Purchase Order System was implemented to provide an example of the interactions between a client of the framework and the involved Web services.
The remainder of this thesis is organized as follows: Chapter 2 provides background information regarding Web services, their supporting technologies, and network security. Chapter 3 highlights some of the related approaches that support Callback and the SRMR message exchange pattern. Chapter 4 outlines the objectives of an ideal solution, then presents both a detailed walkthrough and an architectural overview of the developed framework, including the design decisions made in its development. Chapter 5 discusses some of the more important design decisions and implementation details in depth. Chapter 6 describes a non-trivial case study, a purchase order system, in detail, while providing further details of the framework's development. Chapter 7 discusses some performance considerations, and Chapter 8 concludes.
# Chapter 2: Background
Broadly, Web services refer to self-contained web applications that are loosely coupled, distributed, capable of performing business activities, and able to engage other web applications in order to complete higher-order business transactions, all programmatically accessible through standard internet protocols such as HTTP, JMS, and SMTP. More specifically, Web services are Web applications built using a stack of emerging standards that together form a service-oriented architecture (SOA), an architectural style whose goal is to achieve loose coupling among interacting software components through the use of simple, well-defined interfaces [6]. The stack of emerging standards on which Web services are built is described in [6] and summarized here. Figure 2.1 shows a conceptual overview of the Web services stack.
*Figure 2.1: Conceptual Web Services Stack*
Extensible Markup Language (XML) provides the basis for most of the standards on which Web services are built. XML is a standard developed by the World Wide Web Consortium (W3C) [7]. It is a text-based meta-language for describing data that is extensible and can therefore be used to define additional markup languages. The mechanism with which a markup language is defined in XML is termed a schema definition: a set of rules that define the structure and content of an XML document. Because XML is text-based and extensible, it provides the foundation on which the other standards in the realm of Web services are built.
The lowest layer of the Web services stack, the network layer, exists because a Web service must be network accessible to be invoked by its clients. Although Web services are typically thought of as operating over HTTP, they are capable of operating over many different transport layers, such as HTTPS, Java Message Service (JMS), and even SMTP, providing a great deal of flexibility to application developers. While just about any internet-traversable transport can be used underneath Web services, HTTP is by far the most commonly used.
The next logical layer in the stack is the messaging layer, and its related standard is SOAP [8]. SOAP defines a common message format for use by all Web services. It is designed to be a lightweight protocol for information interchange among disparate systems in a distributed environment. The format consists of an envelope that defines what the message contains and how to process it; inside the envelope are a number of standard headers and a body. SOAP is entirely encoded in XML. The bare minimum required of a provider or consumer of Web services is the ability to build, process, and send (over the network layer) these SOAP messages.
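As a rough illustration of the envelope structure, the following Python sketch assembles a minimal SOAP 1.1 envelope with the standard library's `xml.etree.ElementTree`. The `build_soap_envelope` helper is a hypothetical name for this sketch, not part of any SOAP toolkit:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_envelope(body_tag, body_text):
    """Builds a minimal SOAP envelope: an Envelope containing a Header and a Body."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")   # empty header for this sketch
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    payload = ET.SubElement(body, body_tag)           # application-specific payload
    payload.text = body_text
    return envelope

env = build_soap_envelope("getQuote", "UNO")
print(ET.tostring(env, encoding="unicode"))
```

A real toolkit such as Axis generates and parses these envelopes automatically; the point here is only the envelope/header/body nesting.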
The layer above the messaging layer is the description layer, and its specification is the Web Services Description Language (WSDL) [9]. WSDL provides a mechanism for describing Web services in a standard way: the description provides an interface for using the Web service in terms of its available operations, their names, parameters, and return types. The description binds abstract endpoints, the service described abstractly, to concrete endpoints by fixing a concrete network protocol and message format. This description is represented in XML as well. This layer is the key element that gives Web services their loose coupling, allowing for a new level of interoperability and platform and language neutrality.
The highest layer of the protocol stack is the discovery layer, modeled by the Universal Description, Discovery and Integration (UDDI) specification [10]. UDDI provides a means to locate and use Web services programmatically. Service providers publish high-level descriptions of their Web services into a UDDI repository, through which the services can be looked up and used: when an application wants to use a published service, it downloads what it needs to connect to and consume that service. Together, these standards address the connectivity, messaging, description, and discovery issues for Web services, providing the simple, well-defined interfaces required for the loosely coupled, interoperable building blocks known as Web services.
Not only have these standards led to numerous software tools that aid in the development of Web services, they have also spawned numerous sub-standards. Sub-standards in the context of Web services are typically vendor-contributed extensions to the SOAP protocol. Since most of the sub-standards discussed in this thesis have not yet become actual standards, they (1) exist only as draft specifications, (2) do not yet enjoy industry-wide support, and (3) do not have reference implementations provided by their vendors. The lack of a reference implementation means that a developer who wishes to use the functionality a sub-standard provides must write the code for that functionality. This adds complexity to any Web services project: the developer must maintain not only the service itself but also the code written to provide the sub-standard's functionality. In other words, if a sub-standard changes on its way to becoming a recommendation, the application may have to be adapted to fit the new changes. Additionally, some sub-standards conflict, having been created by different vendors to accomplish the very same goal. In this confusing environment of sub-standard proposals, developers are forced either to choose and implement a sub-standard that may never be adopted, or may remain supported by only a few vendors, or to wait until one of the sub-standards becomes an actual standard.
In the realm of network security, firewalls, proxy servers, and DMZs [11] are commonly used security measures in enterprise networks. Firewalls provide the means by which organizations protect their computer resources from outside networks: they block packets received from untrusted networks, or from an inside source trying to request information from disallowed domains. A DMZ (Demilitarized Zone) is a pair of firewalls working together to form an area that is separated from the internal and external networks but is logically a part of each; publicly accessible servers such as Web servers, servlet engines, and proxy servers are placed there. Proxy servers allow enterprise networks to prevent outsiders from accessing inside computers by making internal applications (Web service clients) anonymous: if a proxy is in use, all calls for specific protocols are routed through it, so when the outside service receives the call, the sender's address is that of the proxy and not of the client.
Regarding design patterns, the architect Christopher Alexander once said: "Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice" [12]. Although Alexander was referring to the architecture of buildings, his central idea, patterns, has been applied to software design as well. Software design patterns describe recurring general software design problems and a proven solution to those problems. They give novice developers access to the best practices of more seasoned developers, and they provide a common vocabulary with which developers can discuss their designs.
The patterns used to develop the framework at the core of this thesis are Observer, Mediator, Proxy, Memento, Abstract Factory, Flyweight, and Strategy [4]; Half-Object Plus Protocol and Router [5]; and Gateway [3]. Observer defines a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically. Mediator defines an object that encapsulates how a set of objects interact; it promotes loose coupling by preventing objects from referencing each other explicitly. Proxy provides a surrogate or placeholder for another object in order to control access to it. Memento defines a manner in which an object's internal state can be captured and externalized without violating encapsulation. Abstract Factory provides an interface for creating families of related objects without specifying their concrete classes. Flyweight is used to support large numbers of fine-grained objects efficiently. Half-Object Plus Protocol provides a mechanism that allows a single entity, or relationship, to exist in two or more address spaces. Router allows multiple sources of information to be decoupled from the targets of that information. Strategy defines a family of algorithms, encapsulates each one, and makes them interchangeable. Gateway provides seamless interoperability between two disparate systems, domains, or object models.
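As a minimal illustration of the Observer pattern described above, the following Python sketch (class names are illustrative, not taken from the framework) shows a subject notifying all of its registered observers when its state changes:

```python
class Subject:
    """Observer pattern: holds a one-to-many dependency; when the subject's
    state changes, every registered observer is notified automatically."""

    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for observer in self._observers:
            observer.update(state)

class LoggingObserver:
    """A trivial observer that records every state it is told about."""

    def __init__(self):
        self.seen = []

    def update(self, state):
        self.seen.append(state)

subject = Subject()
a, b = LoggingObserver(), LoggingObserver()
subject.attach(a)
subject.attach(b)
subject.set_state("response received")
print(a.seen, b.seen)  # both observers were notified of the same state change
```

The framework's Server Observer and Client Observer objects, introduced in Chapter 4, play the observer role in exactly this sense.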
Software design pattern scalability refers to the scope at which a pattern is applied. The scalability model in [3] defines several architectural levels corresponding to the scope of software solutions; by defining such a model, the field of design patterns can be expanded to apply to larger levels of abstraction. At the bottom of the model is the Object level, which refers to the interaction between objects. The next level, MicroArchitecture, refers to interaction between groups of objects solving larger problems. The third level, MacroArchitecture, is focused on the development of software frameworks. The next level, Application architecture, refers to the organization of applications developed to meet a set of user requirements. The System level deals with communication and coordination between applications and sets of applications. The Enterprise level is focused on coordination between groups of systems within a single organization. Finally, the Global level deals with design issues applicable across all systems, inside or outside the organization.
# Chapter 3: Related Works
In the context of Web services, SRMR messaging has not been an active research topic on its own and is usually associated with asynchronous messaging. The WSDL 1.2 specification included two message exchange patterns, Out-Multi-In and In-Multi-Out, that, applied together, constitute SRMR messaging. The W3C working group assigned to WSDL removed these patterns in the WSDL 2.0 specification to prevent confusion with multicast-capable patterns [13]. In the Web services architecture, the role of WSDL is to serve as a low-level description language, just enough to specify the interface of every operation. Logic should not be built into WSDL; keeping it out has been a deliberate tactic of the W3C WSDL working group, in order to keep WSDL a robust standard language. Applications that require this interaction will have to solve the issue at the application level. Unlike the SRMR messaging pattern itself, its prerequisite, Callback, has been an active research topic, and the relevant approaches can be logically divided into the following four categories.
### 3.1 Multithreading
The client-side multithreading approach [14] suggests that for each synchronous call made by the application, a thread is activated to maintain the connection for the call and to wait for the response from the server. The central idea is to defer the waiting on synchronous calls to threads which, after receiving the result, pass it back to the application. While this approach unblocks the calling application's control flow, the threads do not release their connections to the service provider, even for long-duration transactions. Relying heavily on maintaining a connection to the service provider is the major liability of this approach: if the connection is lost, the solution fails. This approach also does not support resumable clients, i.e., clients that wish to shut down, restart, and resume operations.
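The idea behind this approach can be sketched in a few lines of Python under simplified assumptions: a worker thread absorbs the blocking wait of a synchronous call (simulated here with `time.sleep`), so the application's own control flow is not blocked. The function names are hypothetical:

```python
import queue
import threading
import time

def blocking_call(request):
    """Stands in for a synchronous Web service call that holds its
    connection open until the provider produces the result."""
    time.sleep(0.05)
    return f"result-for-{request}"

def call_async(request, results):
    """The multithreading approach: a worker thread waits on the
    synchronous call and drops the result into a queue for the
    application to pick up later."""
    def worker():
        results.put(blocking_call(request))
    threading.Thread(target=worker, daemon=True).start()

results = queue.Queue()
call_async("po-123", results)
# The application is free to do other work here while the thread waits...
print(results.get(timeout=2))  # result-for-po-123
```

Note that the sketch also exhibits the liability described above: the worker holds its connection (here, its thread) for the entire duration of the call, and nothing survives a client restart.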
### 3.2 Specific Asynchronous Protocols
The approach developed by Holt Adams at IBM uses asynchronous transports to perform the needed asynchronous calls, together with threads to achieve the asynchrony [15]. The major drawback of this approach is that it forces the developer to use specific asynchronous protocols such as SMTP or JMS, a requirement that may not be feasible for some applications and environments. A characteristic of this particular approach is the total transparency of the underlying callback to the client. While this makes it easier to implement, it is also a drawback, because it does not release the calling client from being blocked: the client would still have to use multiple threads to unblock its control flow.
Another closely related work is the Web Services Invocation Framework (WSIF), which IBM initially developed and later donated to the Apache XML project [16]. WSIF is a client API that invokes Web services through a local proxy. WSIF can support Web service callbacks but requires JMS as the underlying transport layer, which is a serious limitation for many applications and may not be feasible in others.
### 3.3 Extensional Web Service Sub-Standard Protocols
This approach involves the use of extensional Web service sub-standards, typically SOAP extensions developed for specific problem domains. BEA developed the powerful WS-Callback protocol [17], a SOAP-based solution that defines "standard" new headers in the SOAP messages, with which the requestor can dynamically specify where asynchronous responses to a SOAP request should be sent. WS-Callback does not have built-in support for message correlation, which poses two problems. First, the responses would have to be sent directly to the waiting application, which in a secure environment is impossible. Second, without message correlation, WS-Callback cannot be directly applied to SRMR messaging, because an application that makes a number of similar calls would have no reliable way to decide which partial response resulted from which request. WS-Callback could be extended to handle message correlation, but any such extension would deviate from the sub-standard, and any deviation would have to be supported on both ends of the service provider/service requestor chain, greatly complicating deployment. Ensuring that all parts of the chain support all requirements of the system may not be feasible, especially for external Web services, which are not under the developer's control. Figure 3.1 shows an example of SOAP messaging using WS-Callback extensions.
```
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <wscb:CallBack
        xmlns:wscb="http://www.openuri.org/2003/02/soap/callback/"
        s:mustUnderstand="1">
      <wscb:callbackLocation>
        http://merres1.cs.uno.edu/axis/WSClearinghouse
      </wscb:callbackLocation>
    </wscb:CallBack>
  </s:Header>
  <s:Body>
    ...
  </s:Body>
</s:Envelope>
```
Figure 3.1 Example of WS-Callback SOAP Message
WS-Addressing, currently being proposed, allows a service requestor to pass a "reply-to" address of a callback listener with the operation call [18]. WS-Addressing also supports message correlation. This sub-standard cannot be considered a complete solution, because in order to pass the result from the service provider to the application, the service provider must know the address of the final recipient and be able to reach it directly. Some entity may act as a proxy for the return path, but since no routing information is described in the WS-Addressing specification, that entity would have to be an application-level gateway, very similar to the one proposed in this thesis. In other words, WS-Addressing may be used in conjunction with the proposed framework, but not instead of it. Figure 3.2 shows an example of SOAP messaging using the extensions provided by WS-Addressing.
```
<S:Envelope xmlns:S="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:mrms="http://merres1.uno.edu/">
  <S:Header>
    <wsa:MessageID>1</wsa:MessageID>
    <wsa:ReplyTo>
      <wsa:Address>
        http://merres1.uno.edu:8080/axis/CWS
      </wsa:Address>
    </wsa:ReplyTo>
  </S:Header>
  <S:Body>
    ...
  </S:Body>
</S:Envelope>
```
Figure 3.2 Example of WS-Addressing SOAP Message
More ambitious specifications, such as BPEL4WS [19], WS-CDL [20], and BTP [21], have also been proposed to handle not only callback, but choreography and orchestration for Web services as well. These specifications describe the behavior and relationships of business processes and their partner processes by defining standard ways to compose basic Web services into larger composite business processes. The major drawback of using any of these sub-standards is that for any message the service provider wants to send to the client directly, the client must be accessible from that service, which in a secure environment is impossible. Also, since most of these are experimental or draft specifications, using them would require hand-coding.
### 3.4 Web Service Based Client-Side Listeners
The central idea of this approach is to have the client either create a listener, or become a listener itself. If one were to use a listener application accessible from the outside for each client application, then callback would truly be supported. This approach is described in [22]. While it is necessary to have a listener Web service for accepting the callback messages from the servers, deployment and management on a one listener service per application basis would be excessively costly, considering the client-side applications are bound to be numerous and volatile.
Also related to this approach is the Faux Implementation pattern [23]. It suggests that the client application pretend to be a Web service, receiving and processing SOAP messages on the client side; the client itself would receive the callback response from the server. This too would be a true callback pattern if and only if the listener is reachable from the service provider, but in a secure enterprise environment it will not be accessible.
# Chapter 4: Framework
The proposed client-side framework for utilizing SRMR Web services is composed of two major components. As mentioned earlier, the key component of the framework is the Clearinghouse Web service (CWS); the other component is an agent, used directly by any application that wishes to consume SRMR Web services. The agent component is instantiated by the application and therefore runs locally within each application. Figure 4.1 illustrates a simplified overview of the architecture of the framework. The diagram emphasizes that many applications in a given secure enterprise, through their corresponding agents, may interact with different Web services while sharing the same CWS.
*Figure 4.1: Simplified Overview of Framework*
This chapter first highlights the objectives of an ideal framework that supports SRMR messaging, then gives a detailed walkthrough of the system, and finally discusses each of the major components of the developed framework. In these discussions, particular attention is paid to optimizing the design by applying design patterns where appropriate.
### 4.1 Ideal Solution Objectives
An implementation of SRMR messaging by itself is trivial, but in the context of Web services and enterprise network security it is anything but. Using the related works as a guide, an ideal framework that allows client applications to utilize SRMR Web services should support the following features:
• Unblock clients after making a successful call.
• Release the server immediately after the service accepts the initial call.
• Minimize the management of listener services for all applications using the framework.
• Avoid the inherent complexity involved when using Web service sub-standards.
• Allow any underlying communication protocols instead of requiring any specific one.
• Support resumable clients for long duration transactions.
• Shield the complexity of using SRMR messaging from client applications.
• Maintain the level of interoperability provided by Web services, making the solution both platform and language neutral.
### 4.2 The Process: A Detailed Walkthrough
This section describes, with brief explanations along the way, the process that takes place when an application makes a call to a supported Web service through the framework. In this walkthrough, an SRMR Web service is denoted simply as a server. An application must first instantiate an agent object, if one does not already exist, before it attempts to invoke an operation of the server. This agent object is used by the application as a proxy to the involved Web services: the application makes calls through the agent and receives results through the agent, thus shielding the complexity of the framework from the application developer. This is an application of the Proxy pattern. The collaboration diagram shown in Figure 4.2 describes the process, which is carried out by the actors in the collaboration in the following four multipart steps:
In step 1.1, the application passes the call to the agent along with all the parameters needed to make the call and a notification style for that request. The notification style sets the strategy the framework will use to notify the application. For instance, an application may only want to know when all responses have been received for local pickup, or it may want to know about each individual response.
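The notification style fits naturally into the Strategy pattern. The two style classes below are an illustrative Python sketch, not the framework's actual API:

```python
class NotifyEachResponse:
    """Strategy: the application wants to hear about every individual response."""
    def should_notify(self, received, expected):
        return True

class NotifyOnCompletion:
    """Strategy: the application only wants to know once all responses are in."""
    def should_notify(self, received, expected):
        return received == expected

def deliver(style, expected):
    """Counts how many notifications a given style would produce while
    `expected` responses arrive one by one."""
    notifications = 0
    for received in range(1, expected + 1):
        if style.should_notify(received, expected):
            notifications += 1
    return notifications

print(deliver(NotifyEachResponse(), 3))  # 3
print(deliver(NotifyOnCompletion(), 3))  # 1
```

Because the styles are interchangeable objects behind one interface, new styles (e.g. "notify every k responses") could be added without touching the delivery logic.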
In step 1.2, the agent calls the corresponding operation on the server on behalf of the client. This call takes as parameters the parameters that were just passed to the agent by the application, plus a Server Observer object. This object holds the address information (the URI) of the CWS where the responses should be sent. An important point to note is that at no time is the client's address information sent outside the firewall. This is done not only to avoid giving outsiders more information than they need, but also because the outside service could not use that information to send the callback directly anyway, because of the firewalls in place.
In step 1.3, the server receives the call and performs a validity check of the parameter list, ensuring that the parameters meet the requirements of the service contract. If the call is valid, the server returns the correlation identifier, or CID, and the number of responses the server will eventually send. Based on this response count, the agent maintains a counter to track which messages have and have not yet been received. This two-part response is conveyed using a MultiResponse object that simply carries the two values.
In step 1.4a, the agent registers at the clearinghouse, a call which takes four parameters:
the CID and response count, which were just received from the server, a Client Observer object,
and the notification style it received from the application in step 1.1. The Client Observer object
holds the address information of the agent object, in the form of hostname and port. In step 1.4b,
at some point, the server finishes processing the request and begins returning responses to the
CWS. These calls to the CWS from the server are synchronous, so that if calls are not received
properly, they may be resent. The number of calls that the server will make to the CWS will be
given by response count, and a response counter is kept at the CWS to manage multiple response
correlation. Each response in step 1.4b consists of a partial payload and the CID that the server sent to the agent in step 1.3.
In step 1.5a, the agent returns the CID to the application. Step 1.5b occurs when an agent is registered for a result with a certain CID, a response is received for that CID, and, according to the notification style strategy, it is time for a notification. In that case the CWS informs the agent that registered for that CID; this notification takes the form of a string passed over a TCP/IP socket. The connection is possible because the CWS exists in the client-side DMZ and can therefore reach the agent. The agent may be notified anywhere between once and response count times, depending on the notification style in place for that CID.
In step 2, the agent queries the CWS for the results, and receives all responses that have been received up to that point by the CWS. In step 3, the agent notifies the application, using the object level observer pattern, that results are ready at the Agent.
Finally, in step 4, the application queries the agent for results, and receives all responses for that CID that the agent has received up to that point. Note that steps 1.4a and 1.4b may happen concurrently, as well as steps 1.5a and 1.5b. Note also, that in steps 3 and 4, every time the agent is notified of results by the clearinghouse, it notifies the application, queries, and receives the responses, so these steps also follow the pattern set forth by the notification style, which results in between one and response count notifications.
In the above process, the agent and application are related via the Observer pattern. The application may also poll the agent for specific results. The process involved when an application polls the agent is somewhat different from the process used when the agent and application are related via the Observer/Observable pattern; it is described by the collaboration diagram shown in Figure 4.3.

When the application uses the polling process shown in Figure 4.3, steps 2, 3, and 4 are different. Just after receiving the CID from the agent, in step 2a, the application starts polling the agent. When one of those polls returns true, the application gets the results as it did in step 4. This interaction was included in the framework to give additional flexibility to applications that wish to consume these types of services through the framework. The interaction style is chosen when the application instantiates the agent: the application instantiates a polling agent using a constructor with zero arguments, or a callback agent using a constructor that takes an Observer as an argument.
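The two interaction styles can be sketched as follows; the class and method names are hypothetical stand-ins for the generated agent, not the framework's actual code:

```java
// Stand-in for the application-side Observer role.
class ResultListener {
    String lastCid;
    void resultsReady(String cid) { lastCid = cid; }
}

// Illustrative sketch of the two agent constructors described above.
class SketchAgent {
    private final ResultListener listener;   // null => polling mode
    private boolean resultsAvailable = false;

    SketchAgent() { this.listener = null; }              // polling agent
    SketchAgent(ResultListener l) { this.listener = l; } // callback agent

    // Called (conceptually) when the CWS notifies the agent of results.
    void onNotification(String cid) {
        resultsAvailable = true;
        if (listener != null) listener.resultsReady(cid); // push to the app
    }

    // Polling interface used when constructed with zero arguments.
    boolean poll() { return resultsAvailable; }
}
```

With the zero-argument constructor the application drives the interaction by polling; with the one-argument constructor the agent pushes a notification to the application.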
4.3 Architectural Overview
The framework consists of two major components and several related helper objects. The components and their helper objects will be discussed in terms of their responsibilities, functionality, and the design patterns used to construct them. The components and their helper objects are as follows:
**Clearinghouse Web service (CWS)** As mentioned earlier, this is the key component of the developed framework. The use of a clearinghouse for centralized correlation processing was proposed for CSP-like communication in [24]. The clearinghouse in this framework is similar; it is used to centralize the correlation of messages, but it also handles message distribution. The decision to use a centralized listener Web service (the clearinghouse) rather than having a single listener Web service per application, as in [22], is based on a number of considerations. First, the central listener service removes the timing coupling between the calling applications and the callback services. The CWS accepts response messages for agents, and therefore applications, that may or may not be active. This timing decoupling allows the framework to support resumable clients. Second, managing a single CWS per enterprise is much simpler than creating and managing a listener Web service for each application and agent pair, which as mentioned earlier can be prohibitively costly.
The CWS component consists of four logical operations: registration, deregistration, send-result, and fetch-result. The agents use the registration operation to inform the CWS which message identifiers, or CIDs, they are interested in. The agents use the deregistration operation to inform the CWS that they are no longer interested in receiving notifications for a message identifier until they re-register. The fetch-result operation is used by the agents to actually get the results from the CWS. Finally, the send-result operation is used by the server to return the results of the clients' calls.
When an agent registers at the CWS by calling the registration operation, the CWS stores the response count and the notification style, as well as the correlation information (the agent's address and the CID it expects), in the clearinghouse database. Note that, to prevent collisions between identical CIDs produced by different service providers, each stored CID is concatenated with the service provider's URI. When the service providers call the send-result operation to deliver response messages, the CWS stores the payload according to its CID. After both of these operations, the CWS checks for matching registrations and payloads based on their CID. If a match is found, the CWS uses the notification style to determine whether a notification is necessary; if it is, it immediately notifies the client about the arrival of the results. Once informed, the client is free to pick up the messages from the CWS by calling the fetch-result operation. The CWS will return all the correlated messages that it has received thus far for the CID. Once the response count for a CID drops to zero and the response messages for this CID have all been fetched by the corresponding agent, the CWS purges the corresponding registration entry from the clearinghouse database.
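The registration, matching, and purging behavior described above can be sketched as a small in-memory correlation table. The class and method names are illustrative; the real CWS additionally handles notification styles, persistence, and distribution:

```java
import java.util.*;

// Simplified sketch of the CWS correlation logic.
class CorrelationTable {
    private final Map<String, Integer> expected = new HashMap<>();      // key -> remaining count
    private final Map<String, List<String>> payloads = new HashMap<>(); // key -> stored payloads

    // CIDs from different providers are disambiguated by prefixing the
    // provider URI, as the text describes.
    static String key(String providerUri, String cid) { return providerUri + "#" + cid; }

    void register(String providerUri, String cid, int responseCount) {
        expected.put(key(providerUri, cid), responseCount);
    }

    // Store an incoming payload; returns true when a matching registration
    // exists, i.e. a notification check should follow.
    boolean storePayload(String providerUri, String cid, String payload) {
        String k = key(providerUri, cid);
        payloads.computeIfAbsent(k, x -> new ArrayList<>()).add(payload);
        return expected.containsKey(k);
    }

    // fetch-result: return everything received so far, decrement the count,
    // and purge the registration entry once the count reaches zero.
    List<String> fetchResults(String providerUri, String cid) {
        String k = key(providerUri, cid);
        List<String> out = payloads.getOrDefault(k, new ArrayList<>());
        payloads.remove(k);
        int remaining = expected.getOrDefault(k, 0) - out.size();
        if (remaining <= 0) expected.remove(k);
        else expected.put(k, remaining);
        return out;
    }
}
```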
The final operation, the deregistration operation, is provided to allow the applications to disconnect from the CWS temporarily. No notification will occur for those CIDs that the agent initially registered for, until the client resumes and re-registers. This functionality is provided simply by removing the agent’s registration entry from the correlation table used by the CWS. This prevents matches, and thus notifications, from occurring, but does not interrupt the flow of messages containing payloads from service providers. These payloads are stored until a match occurs, and that will not occur until the agent reregisters.
A number of design patterns were applied in the design of the CWS component, including Observer, Router, Proxy, Gateway, Abstract Factory, Half-Object Plus Protocol, and Strategy, at different levels of scale. The CWS uses the Strategy pattern at the object level to implement the handling of notifications based on their notification style. The Abstract Factory pattern is also applied at the object level to create the handler strategies. The agent and the CWS use the Observer pattern at the enterprise level. How the CWS operates is best described using the Router, Proxy, and Gateway patterns.

The Router pattern is applied in the sense that the CWS routes messages to the appropriate agent based on CID, which is clearly content-based routing. This is necessary because many clients and many servers all interact through the same CWS. The Proxy pattern is also used, in that messages are returned from the server to the appropriate agent through the CWS. The CWS also follows the Gateway pattern in transmitting messages, since the server cannot call the agent directly due to the disparate domains. The Router, Proxy, and Gateway patterns are applied at the global level.
The CWS utilizes two helper classes:
- **PayloadHandler** – This object is used by the CWS to determine what action to take upon receiving a set of messages based on notification style for the CID in question.
- **ClearinghouseObserver** – This object allows the CWS to seamlessly use the Observer pattern. It acts as a half-bridge between the agent and CWS to actually send the notifications to the agent. The CWS uses this as an object level Observer pattern, and when the CWS needs to notify the client, it notifies this object, which in turn serves as a proxy to notify the agent's helper. This is how the “Half Object Plus Protocol” pattern is applied. To eliminate unnecessary notifications, and to reduce overall network traffic, the CWS will not notify the
same agent about the same CID again until the agent takes an action upon receiving the previous notification.
**Agent** The purpose of the agent component is to shield the application from the complexity of the framework by serving as a proxy. Every application that uses our framework instantiates an agent object in order to interact with the framework. If the application prefers the Observer pattern, the application passes a reference to itself in the agent's constructor. If it would prefer to poll the agent, it uses a zero-argument constructor. In either case, the agent still supports applications that may need to shut down temporarily. The decision to support multiple types of agents was made to build in flexibility regarding how the application will consume the results of the Web services.
The patterns used in the development of the agent are Observer, Proxy, Half-Object Plus Protocol, and Memento. The Proxy pattern is applied to model the end to end nature of the agent handling the call, and finally returning the results. The Proxy pattern would then be applied at a global level. The Observer pattern is applied to model the interaction with the CWS at the enterprise level. As mentioned earlier, they are related via the Observer/Observable relationship, with the agent performing the Observer role. The agent can be related to the application using the Observer pattern at object level as well, in which case it becomes the observable part of the interaction. Lastly, the agent also takes part in the Memento pattern at object level, when it deregisters, saves its state persistently, and shuts down, so that at some later point, it can be re-instantiated using the saved state, reregister at the clearinghouse, and, therefore resume operating where it left off.
Once the agent registers at the clearinghouse, the agent informs its helper to start listening for notifications from the clearinghouse. Once a notification is received in the form of a string, the agent parses it to get the CID. The agent then calls the fetch-result operation of the CWS using this CID. This call returns all the responses associated with that CID that have arrived at the CWS since the last retrieval by this agent. As mentioned earlier, depending on the relationship between application and agent, there are two paths this interaction can take. Using the Observer/Observable relationship, the agent notifies the application that results were received by issuing an object-level notification carrying the CID. Using the other relationship, the agent's poll would return true. The application then gets the results from the agent, and the agent updates its count of the returns. Upon returning all messages to the application for a given CID, the agent frees all resources associated with that CID.
The agent utilizes only one helper class:
- **AgentHelper** – This helper object is created upon instantiation of the agent. It serves as the other half of the bridge that allows the agent and the CWS to be related via the Observer pattern. The "Half-Object Plus Protocol" pattern is applied at the enterprise level to provide the illusion that the agent is a local object of the CWS. The agent informs the helper when listening is necessary, and the helper informs the agent when notifications arrive from the CWS. The helper and agent are also related via an Observer/Observable relationship at the object level. The helper performs its duties by managing the sockets for the agent. Once its server socket accepts a connection from the clearinghouse, the helper receives the message (the CID string) and passes the string to the agent.
**Shared Helper Classes** The following helper objects are used by multiple components of the framework. The pattern they observe is the Flyweight pattern. These objects are shared in the sense that they are passed from one component to another and provide no functionality other than the information that they contain within. The three shared helper classes are:
- **ClientObserver** – models the hostname and port number of the agent’s helper. It is passed from agents to Clearinghouse, and is used by the CWS to create a ClearinghouseObserver.
- **ServerObserver** – models the URI at which the CWS is listening for incoming responses from remote servers. It is passed from agents to the service provider.
- **MultiResponse** – models the response count and CID of a valid request. It is passed from the service provider to the agent as the result of a valid call.
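As a rough sketch, the three shared data holders might look like the following; the field names are inferred from the descriptions above and may differ from the real source:

```java
// Immutable data holders passed between framework components.
class ClientObserver {
    final String hostname; final int port;   // address of the agent's helper
    ClientObserver(String hostname, int port) { this.hostname = hostname; this.port = port; }
}

class ServerObserver {
    final String cwsUri;                     // URI where the CWS listens for responses
    ServerObserver(String cwsUri) { this.cwsUri = cwsUri; }
}

class MultiResponse {
    final String cid; final int responseCount;  // returned for a valid request
    MultiResponse(String cid, int responseCount) { this.cid = cid; this.responseCount = responseCount; }
}
```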
Chapter 5: Implementation Details
In this chapter, implementation details important to the design of the framework shall be discussed along with an overview of the generation utilities. The two major design decisions that relate to the design process that will be discussed are the communication between the CWS and agent, and the generalization of the return type to allow for only one CWS to handle any type of return. The discussion of the framework generation utilities will follow the discussion of the two major design decisions.
5.1 Clearinghouse to Agent Communication
Since the CWS and the agents exist in different environments, the communication between them must be carefully considered. When the CWS receives a result for which a notification should be sent, depending on the notification style in force, it should notify the agent as soon as possible. This notification is performed using a callback mechanism to minimize latency: the CWS sends the CID to the agent.
Furthermore, the communication means the CWS uses to notify the agents is socket-based. Sockets were chosen for both their simplicity and their interoperability: sockets exist in every modern operating system and are supported by every modern programming language. When a CWS wishes to notify an agent, the CWS, by way of a helper, opens a socket and passes the CID, in the form of a string, to the agent. The CWS will not notify the same agent regarding the same CID again until the agent takes an action upon the previous notification. This is done to eliminate unnecessary notifications and reduce overall network traffic.
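The socket exchange described above can be sketched in a few lines of Java. The method names are illustrative, and the real framework runs the two halves in separate processes; here a thread stands in for the CWS side:

```java
import java.io.*;
import java.net.*;

// Sketch of the socket-based notification path: the CWS side connects to the
// agent's helper and writes the CID as a single line of text.
class NotificationSketch {
    // Agent-helper side: accept one connection and read the CID string.
    static String receiveOneCid(ServerSocket server) throws IOException {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            return in.readLine();
        }
    }

    // CWS side: connect to the agent's helper and send the CID.
    static void sendCid(String host, int port, String cid) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(cid);
        }
    }

    // Demonstration round-trip on the loopback interface.
    static String roundTrip(String cid) {
        try (ServerSocket server = new ServerSocket(0)) {   // ephemeral port
            int port = server.getLocalPort();
            Thread cws = new Thread(() -> {
                try { sendCid("127.0.0.1", port, cid); } catch (IOException ignored) {}
            });
            cws.start();
            String received = receiveOneCid(server);
            cws.join();
            return received;
        } catch (Exception e) {
            return null;
        }
    }
}
```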
5.2 Generalization of Return Types
In order to support any number of service providers, the CWS must be able to handle any type of response. More specifically, the return type should be generalized into an acceptable form so that only one CWS will ever be needed per enterprise. This generalization process should be language neutral, and it should be unambiguous in the manner in which it specifies return types and objects. XML provides the vehicle by which all of these objectives can be achieved. Using XML and schemas, the response type is generalized into a schema definition, thus providing a language-neutral type definition. Using this definition, the service provider serializes the response into an XML string and passes the string to the CWS. The CWS can handle the return type String, because String is a simple type in any schema definition. When the agent obtains the String as a result of a call to the CWS, the agent uses the same definition the service provider used to deserialize the XML string into an object the agent's language can understand. This definition could be retrieved by the application from the UDDI repository entry for the service provider, or sent on request by the service provider. A specific example of how this return type generalization is performed will be discussed in detail in Chapter 6 along with the case study.
5.3 Generation Utilities
The code generation utilities, along with the generated reference implementation, were developed in Java using only the following non-standard API libraries: Apache Axis [25], Java API for XML Binding (JAXB) [26], and JavaDoc [27]. Axis provides client-side interfaces for connecting to and consuming related Web services. JAXB provides the mechanism with which
the result types are generalized using XML schemas. JavaDoc is used by the generation utilities to gather information from Java interfaces to create the necessary Web services.
The generation of the individual pieces of the framework is done in two parts: first the client-side CWS is created, and then the agent is generated. It is performed in two parts because each enterprise needs only one CWS regardless of how many different Web services it may use. The remote services are not generated because they are assumed to already exist. The generation toolkit consists of two separate utilities: one to create a CWS, and another to create the agent and helper classes that an application may use to interact with a service provider that uses the SRMR message exchange pattern.
The CWS has two groups of operations: three operations for the agents, and one for the service providers. In the class diagram of the CWS and supporting classes shown in Figure 5.1, the client-side connection code generated by Axis has been left out for clarity.

Figure 5.1 A Class Diagram of the CWS and Supporting Classes.
The generation utility that builds the CWS requires only the URI of its eventual deployment as a parameter. It uses a static interface to generate the CWS by passing it to Axis' Java2WSDL tool, then passing the newly generated WSDL file through its WSDL2Java tool, and finally replacing the final implementation classes with an implementation already created. Once the utility has finished, deploying the CWS is as simple as copying the files into the Web service container and using the container-provided deployment mechanism, which takes as a parameter the newly generated Web deployment descriptor.
The agent is generated with the following parameters: (1) the WSDL of the Web service from the service provider, (2) the URI of the client-side CWS, and (3) a schema definition of the return type. The procedure begins by generating a Java interface from the WSDL document using the Axis utility WSDL2Java. From this Java interface, useful information is extracted using a Java doclet and used to generate the client-side framework. This information includes method names, parameter lists, and the class name. The client-side connection code that was generated by WSDL2Java is copied to the shared folder of the destination directory, since it will be used by the agent to actually connect to the remote service. The client-side connection code generated by Axis can be thought of as an adapter: it uses Java classes and interfaces to call the Web services on behalf of the caller, through the framework, thus shielding the complexity of using Web services from the caller. This client-side connection code includes not only the classes used by the agent to connect to the remote Web service, but any parameter objects as well. Then, the JAXB libraries and the XJC command are used to create, in the destination directory, the classes that marshal and unmarshal the return type; this will be discussed in more detail in Chapter 6, along with the case study. The XJC command takes as parameters a destination directory and a schema definition file. Next, the files that do not need customization, such as helper objects and the client-side code to connect to the CWS, are copied to the shared folder of the destination directory. Finally, a Java Template class is used to customize the agent based on the collected information. The Template class contains a template, a file containing a parameterized version of the agent, and uses regular expressions to perform pattern-matched replacements on it. The code is then compiled and ready for use by the application.
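The final template-substitution step can be sketched as a regular-expression replacement. The `@NAME@` placeholder syntax below is an assumption; the real Template class may use a different convention:

```java
// Minimal sketch of template-based code customization: each @KEY@ placeholder
// in the parameterized agent source is replaced by its collected value.
class TemplateSketch {
    static String fill(String template, java.util.Map<String, String> params) {
        String out = template;
        for (java.util.Map.Entry<String, String> e : params.entrySet()) {
            // quoteReplacement protects values containing $ or \ characters.
            out = out.replaceAll("@" + e.getKey() + "@",
                                 java.util.regex.Matcher.quoteReplacement(e.getValue()));
        }
        return out;
    }
}
```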
Chapter 6: A Case Study
In this chapter, an example, a purchase order system, will be discussed as well as some of the internal details of the framework not discussed earlier. A class diagram for the Agent and supporting classes is shown in figure 6.1.
The purchase order system creates purchase orders (POs) and submits them to an "order accepting" operation of a Web service provider. Someone submitting purchase orders may have
many orders to submit and waiting for each submission's fulfillment is simply not feasible, especially since the orders may take a very long time to fulfill considering things like availability, human interactions, etc. Instead of waiting for the vendor to complete each order, the user would prefer that the vendor simply accept the order and notify the user of the results of the submission later. Also, since some items are handled by different departments within the vendor company, the departments handling each part of the PO send their partial results directly back to the user. This example is representative of existing purchase order systems and an example of how a request may lead to multiple responses. The following diagram, figure 6.2, shows a deployment diagram of the resulting system.

**Figure 6.2 Deployment Diagram of PO System**
In this system, the application and agent are related via the Observer pattern. The system follows the process outlined in Chapter 4. The agent and the CWS were generated using the developed code generation utilities. All the objectives outlined in Chapter 4 were achieved with minimal coding effort.
The specific implementation detail that will be discussed using the example provided by the case study is the mechanism used to generalize the return type using the JAXB libraries. In order to use the JAXB libraries, an XML schema must be provided. Figure 6.3 shows the XML schema that defines the result type of a PO submission, a PurchaseOrderConfirmation (POC) object, which is passed to the XJC command along with a destination directory and package name:
```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<xsd:element name="PurchaseOrderConfirmation" type="PurchaseOrderConfirmationType"/>
<xsd:complexType name="PurchaseOrderConfirmationType">
<xsd:sequence>
<xsd:element name="cost" type="xsd:double"/>
<xsd:element name="shippingCost" type="xsd:double"/>
<xsd:element name="manifest" type="xsd:string"/>
<xsd:element name="shippingInfo" type="xsd:string"/>
</xsd:sequence>
</xsd:complexType>
</xsd:schema>
```
Figure 6.3 Schema Definition of PurchaseOrderConfirmation Return Type
In our reference implementation, the service provider was also implemented in Java and used the shared folder with the generated code for unpacking/packing the return type. The code used by the service provider to marshal the return type into an XML string is shown in Figure 6.4.
```java
// Imports assumed: javax.xml.bind.JAXBContext, javax.xml.bind.Marshaller,
// and java.io.ByteArrayOutputStream.
JAXBContext jc = JAXBContext.newInstance( "shared" );   // context for the "shared" package
ObjectFactory objFactory = new ObjectFactory();
PurchaseOrderConfirmation poc = objFactory.createPurchaseOrderConfirmation();
Marshaller m = jc.createMarshaller();
m.setProperty( Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE );  // produce indented XML
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
m.marshal(poc, byteStream);                             // marshal into memory
String payload = new String(byteStream.toByteArray());  // the XML payload string
```
Figure 6.4 Service Provider Marshalling the Object into an XML String
The process shown in Figure 6.4 above is as follows: First, a JAXBContext is created using the package that the JAXB code generator used earlier, in this case the "shared" package. It then creates and uses a JAXB ObjectFactory to create an empty POC. A marshaller is then created, and configured. A ByteArrayOutputStream is instantiated to marshal the output to. The object is then marshalled into the byte stream and the payload string is created from the byte stream.
Figure 6.5 shows the code used by the agent to unmarshal the XML string into a POC object.
```java
// Imports assumed: javax.xml.bind.JAXBContext, javax.xml.bind.Unmarshaller,
// and java.io.ByteArrayInputStream.
JAXBContext jc = JAXBContext.newInstance( "shared" );
Unmarshaller u = jc.createUnmarshaller();
// Wrap the payload string in a byte stream and unmarshal it into a POC.
ByteArrayInputStream inputStream = new ByteArrayInputStream(payload.getBytes());
PurchaseOrderConfirmation pocIn = (PurchaseOrderConfirmation)u.unmarshal(inputStream);
```
Figure 6.5 Agent Unmarshalling the XML String into a POC object
The process outlined by Figure 6.5 is as follows: a JAXBContext object is created using the package "shared" and is then used to instantiate an Unmarshaller object. A ByteArrayInputStream is then instantiated by passing the string, in byte-array form, into its constructor. Finally, a POC is unmarshalled from the byte stream and is ready for use by the agent.
Chapter 7: Performance Considerations
In this chapter, some performance considerations regarding the use of the proposed framework, compared to a synchronous framework, will be discussed. Compared to a synchronous Web service call, which goes through a direct communication between the application and the Web service provider as shown in Figure 7.1(a), the proposed clearinghouse approach requires going through a sort of communication triangle as shown in Figure 7.1(b).

(a) Synchronous call (b) Asynchronous call
Figure 7.1 Cost Comparison Diagrams
Suppose a request results in \( n \) responses. In a synchronous fashion, the client would make \( n \) calls, each call comprising a request and its response. Thus the total communication cost of the synchronous approach is \( a \cdot n \), where \( a \) represents the cost of one call (including the send time, the response time, and the return send time); each such call crosses the firewall. In the fashion described by the developed framework, the client's initial call of cost \( a \) is followed by \( n \) callbacks of cost \( b \) each, plus a cost \( c^* \) that models the communication between the agent and the CWS (the notifications being sent as well as the retrieval of responses from the CWS). The cost \( b \) of a callback is logically the same kind of cost as \( a \). Thus, the total communication cost of the proposed framework is \( a + b \cdot n + c^* \).

Since both the initial call and the responses cross the same logical distance (the firewall), \( a \approx b \), so the total is approximately \( a + a \cdot n + c^* \). Also, since the CWS and the agent are in the same local area network, the communication cost \( c^* \) between them is negligible, leaving approximately \( a + a \cdot n = a(n+1) \), compared to \( a \cdot n \) for the synchronous approach. Thus, the overhead of the proposed approach amounts to a single extra call of cost \( a \). Since the approach carries such a trivial cost, any developer using the framework may benefit from the enhanced functionality and ease of use without significant performance loss.
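The comparison can be checked with a trivial cost model, treating each firewall-crossing call as having cost a and ignoring the negligible LAN cost c*:

```java
// Worked check of the cost comparison: synchronous cost is a*n, while the
// asynchronous framework costs one initial call plus n callbacks (c* ~ 0).
class CostModel {
    static double syncCost(double a, int n)  { return a * n; }      // n request/response calls
    static double asyncCost(double a, int n) { return a + a * n; }  // one call + n callbacks
}
```

For any n, the difference between the two models is exactly one call of cost a, confirming the single-extra-call overhead claimed above.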
Chapter 8: Conclusion
In the context of secured enterprise environments, this thesis has addressed only the aspect of accessibility, that is, enabling multiple components to communicate. End-to-end security measures for Web services were not addressed because of the extensive amount of research in that area; ready-to-apply approaches are available, such as those provided by the WS-Security family of Web service sub-standards [28]. This group of SOAP extensions has established the means to provide quality of protection through message integrity, message confidentiality, and single-message authentication. The main objective of the framework was to provide a means to enable client applications to call remote Web services and have the responses received in an asynchronous manner, without conflicting with any network-level security measures commonly deployed in enterprise networks, such as firewalls and proxy servers. The core of the framework, the CWS, is just a Web service. Any necessary layer of security should be added on top of the framework, as one would do to secure any developed Web service. For example, XACML [29] and the Service View [30] describe permitting or denying access at a very fine-grained level. These solutions can be used to apply different security measures for internal and external clients of the CWS, preventing outsiders from calling the operations meant for the internal agents.
As mentioned in the related works, WS-Addressing supports return addresses and message correlation, which could be used to enhance the framework, but not to replace it. The major reason WS-Addressing was not utilized by the framework is that it exists only as a reference specification: WS-Addressing lacks both vendor support and reference implementations. If WS-Addressing had had such support and implementations, the proposed framework would have been simpler, but not obsolete. The framework would have used WS-Addressing to correlate and address responses, but it would still use the CWS to handle the content-based routing as well as to serve as a proxy for the return path of the responses. The agent would still register for responses based on CID, and the CWS would still need to notify the agent when results are ready.
The framework that has been developed to implement Web services that support the SRMR message exchange pattern in the context of secure enterprise environments has accomplished all the objectives set forth earlier in Chapter 4. The framework leaves clients unblocked after making successful calls, releases the server immediately after the server accepts the initial call, minimizes the management of listener services throughout the enterprise, avoids inherent complexity of using Web service sub-standards, avoids using specific underlying communication protocols, supports resumable clients, shields the complexity of using SRMR messaging, and maintains the interoperability of Web services, leaving the framework platform and language neutral.
References
Vita
Michael Edward Ruth was born in New Orleans, Louisiana and received his B.S. from the University of New Orleans in December of 2002. In July 2004, he was awarded the Crescent City Doctoral Scholarship and began working as a Research Assistant under Dr. Shengru Tu. In April 2005, Michael submitted a paper for publication at the International Computer Software and Application Conference 2005, and the paper was accepted for publication.
A rendering system pipeline includes a memory storing shape and shade attributes of a surface of the object. The attributes are arranged as an octree in the memory. The octree includes a plurality of nodes arranged at a plurality of levels, each node storing a plurality of zero-dimensional n-tuples, each n-tuple locally approximating the shape and shade attributes of a portion of the surface of the graphic object, and the n-tuples having a sampling resolution of an image space. A plurality of parallel processing pipelines are connected to the memory. The pipelines project the shape and shade attributes of the octree to an image plane having a selected orientation by traversing the n-tuples of the nodes of the octree from a lowest resolution level to a highest resolution level.
17 Claims, 13 Drawing Sheets
FIG. 9c
951: Initialize Z-Buffer
952: Project Surfaces Depths
953: Write Over Only If $S_d < P_d$
954: Construct Tangential Disk $r > r_{\text{max2n}}$
955: Project Tangential Disk
956: Write Over Only If $f_d < P_d$
RENDERING PIPELINE FOR SURFACE ELEMENTS
FIELD OF THE INVENTION
This invention relates generally to graphic rendering, and more particularly to rendering zero-dimensional surface elements of graphic objects using a pipelined rendering engine.
Introduction to Computer Graphics
Three-dimensional computer graphics have become ubiquitous at the consumer level. There is a proliferation of affordable 3D graphics hardware accelerators, from high-end PC workstations to low-priced game stations. However, interactive computer graphics have still not reached the level of realism that allows a true immersion into a virtual world. For example, typical foreground characters in real-time games are extremely minimalist polygon models that often exhibit amputee artifacts, such as angular silhouettes.
Various sophisticated modeling techniques, such as implicit surfaces, or subdivision surfaces, allow the creation of 3D graphics models with increasingly complex shapes. Higher order modeling primitives, however, are eventually decomposed into triangles before being rendered by the graphics subsystem. The triangle as a rendering primitive seems to meet the right balance between descriptive power and computational burden. To render realistic, organic-looking models requires highly complex shapes with even more triangles, or, as stated by Smith in "Smooth Operator," The Economist, pp. 73–74, Mar. 6, 1999, "reality is 80 million polygons."
Processing many small triangles leads to bandwidth bottlenecks and excessive floating point number calculations and rasterization requirements. To increase the apparent visual complexity of objects, texture mapping has been introduced. Textures convey more detail inside a polygon, thereby allowing larger and fewer triangles to be used. Today's graphics engines are highly tailored for high texture mapping performance. However, texture maps have to follow the underlying geometry of the polygon model and work best on flat or slightly curved surfaces. Realistic or "organic" surfaces frequently require a large number of textures that have to be applied in multiple passes during rasterization.
Advanced rendering techniques, such as Phong shading, bump mapping, and displacement mapping, are not handled by most current consumer graphics systems. Graphic phenomena such as smoke, fire, or water are difficult to render using textured triangles.
Graphical Representations
In computer graphics, one can represent objects in 3D space in many different ways using various primitive graphic elements. The known representations that are commonly used to represent graphic objects are implicit, geometric, volumetric, and point sample.
Implicit Representation
In an implicit representation, the graphic object can be generated from arbitrary mathematical and/or physical functions. For example, to draw the outline of a hollow sphere one simply supplies the rendering engine with the function (in Cartesian coordinates) $x^2+y^2+z^2=r^2$, and for a solid sphere the function is $x^2+y^2+z^2\leq r^2$. Color and other material properties can similarly be synthetically generated. Functions can be used to describe various geometric shapes, physical objects, and real or imaginary models. Implicit functions are not suitable for synthesizing complex objects, for example, a human figure.
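The two sphere functions above translate directly into membership tests; this short sketch (function names invented for illustration) shows the hollow-surface equality and the solid-sphere inequality.

```python
def on_hollow_sphere(x, y, z, r, eps=1e-9):
    # Surface of a hollow sphere: x^2 + y^2 + z^2 = r^2
    # (tested with a tolerance, since the equality is over reals)
    return abs(x * x + y * y + z * z - r * r) <= eps

def in_solid_sphere(x, y, z, r):
    # Solid sphere: x^2 + y^2 + z^2 <= r^2
    return x * x + y * y + z * z <= r * r
```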
Geometric Representation
Classically, 3D objects have been geometrically modeled as a mesh of polygonal facets. Usually, the polygons are triangles. The size of each facet is made to correspond mostly to the degree of curvature of the object in the region of the facet. Many polygons are needed where the object has a high degree of curvature, fewer for relatively flat regions. Polygon models are used in many applications, such as, virtual training environments, 3D modeling tools, and video games. As a characteristic, geometric representations only deal with the surface features of graphic objects.
However, problems arise when a polygon model is deformed because the size of the facets may no longer correspond to local degrees of curvature in the deformed object, after all, a triangle is flat. Additionally, deformation may change the relative resolution of local regions. In either case, it becomes necessary to re-mesh the object according to the deformed curvature. Because re-meshing (polygonization) is relatively expensive in terms of computational time, it is usually done as a preprocessing step. Consequently, polygon models are not well suited for objects that need to be deformed dynamically.
Volumetric Representation
In an alternative representation, the object is sampled in 3D space to generate a volumetric data set, for example, a MRI or CT scan. Each sample is called a voxel. A typical data set may include millions of voxels. To render a volumetric data set, the object is typically segmented. Iso-surfaces can be identified to focus on specific volumetric regions. For instance, a volumetric data set of the human head may segment the voxels according to material properties, such as bone and soft tissue.
Because of the large number of voxels, physically-based modeling and the deformation of volumetric data sets is still a very computationally expensive operation. Often, one is only interested in surface features, and the interior of the object can effectively be ignored.
Point Sample Representation
A point sample representation of objects is often used to model fluid flows, for example, in wind tunnel simulations. Certain attributes, such as orientation velocity, are given to point samples in order to track individual point samples through the fluid flow, or to visualize the complete flow. Another application of point sample representation is in the visualization of "cloud-like" objects, such as smoke, dust or mist. A shading model can be applied to point samples that emit light to render cloud-like objects. Also, point samples can be constrained to subspaces with the help of energy functions to model surfaces. An advantage of point sample clouds is that the clouds are very deformable. As a disadvantage, the point samples in the cloud are unconnected and behave individually when exposed to forces. Furthermore, prior art point samples are quite unsuitable for representing surfaces of solid objects or models.
Rendering Considerations
The rendering time for these conventional primitives depends on the complexity of the objects modeled. For example, with a geometric representation of a complex object, the polygons are typically very small in size, in the order of a very small number of pixels, and the object is represented by many polygons. The polygons are usually represented with vertices that define a triangle.
To render a polygon, the projection of the triangle is scan-converted (rasterized) to calculate the intensity of each pixel that falls within the projection. This is a relatively time-consuming operation when only a few pixels are covered by each polygon. Replacing the polygons with point samples and projecting the point samples to the image can be a more efficient technique to render objects.
A number of techniques are known for rendering volumes. In general, volume rendering is quite complex. Unless the number of voxels is limited, rendering can be time-consuming, making it impractical for real-time applications.

Discrete Particles
A real-time rendering system, described in U.S. Pat. No. 5,781,194 “Real-time Projection of Voxel-based Objects,” issued to Ponomarov et al. on Jul. 14, 1998, constructs a chain of surface voxels using incremental vectors between surface voxels. That representation succeeds in modeling and displaying objects showing highly detailed surface regions. The modeling of rigid body motion is done with the aid of scripting mechanisms that lack realism because physically-based methods are not used.
The use of points as rendering primitives has a long history in computer graphics. Catmull, in “A Subdivision Algorithm for Computer Display of Curved Surfaces,” Ph.D. thesis, University of Utah, December 1974, observed that geometric subdivision may ultimately lead to points on surfaces. Particles were subsequently used for objects, such as clouds, explosions, and fire, that could not be rendered with other methods, see Reeves in “Particle Systems—A Technique for Modeling a Class of Fuzzy Objects,” SIGGRAPH Proceedings, pp. 359–376. July 1983.
Visually complex objects have been represented by dynamically generated image sprites. Sprites are fast to draw and largely retain the visual characteristics of the object, see Shade et al. in “Layered Depth Images,” SIGGRAPH Proceedings, pp. 231–242. July 1998. A similar approach was used in the Talisman rendering system to maintain high and a constant frame rates, see Torborg et al. in “Talisman: Commodity Real-Time 3D Graphics for the PC,” SIGGRAPH Proceedings, pp. 353–364, August 1996. However, mapping objects onto planar polygons leads to visibility errors and does not allow for parallax and disocclusion effects. To address these problems, several methods add per-pixel depth information to images, variously called layered impostors, sprites with depth, or layered depth images, just to name a few. Still, none of these techniques provides a complete object model that can be illuminated and rendered from arbitrary points of view. All these methods use view-dependent, image centered samples to represent an object or scene. However, view-dependent samples are ineffective for dynamic scenes with motion of objects, changes in material properties, and changes in position and intensities of light sources.
Levoy et al. in “The Use of Points as a Display Primitive,” University of North Carolina Technical Report 85-022, 1985, describe a process for converting an object to a point representation. There, each point has a position and a color. They also describe a process to render the points as a smooth surface. The points are modeled as zero-dimensional samples, and are rendered using an object-order projection. When rendering, multiple points can project to the same pixel and the intensities of these points may need to be filtered to obtain a final intensity for the pixel under consideration. This filtering is done by weighting the intensity proportional to the distance from the projected point position in the image to the corresponding pixel-center, whereas the weights are normalized according to the partial coverage of a pixel by a surface. The coverage is estimated by calculating the density of the projected points in image space and then the weighting is modeled with a Gaussian filter. An enhanced depth-buffer (z-buffer) allows for depth comparisons with a tolerance that enables the blending of points in a small region of depth-values. Their point representation allows one to render the object from any point of view.
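The blending scheme Levoy et al. describe can be sketched roughly as follows. This is a deliberate simplification, with all names invented: each point contributes only to its nearest pixel, the Gaussian weight depends on the distance to that pixel's center, and the enhanced z-buffer is modeled as a plain depth tolerance within which points are blended rather than occluded.

```python
import math

def blend_points(points, width, height, sigma=0.5, z_tol=0.05):
    """Blend projected points (px, py, depth, intensity) into an image.
    Points within z_tol of the closest depth seen at a pixel are
    blended with Gaussian weights; farther points are discarded."""
    inf = float("inf")
    zbuf = [[inf] * width for _ in range(height)]
    accum = [[0.0] * width for _ in range(height)]   # weighted intensity
    wsum = [[0.0] * width for _ in range(height)]    # weight normalizer
    for px, py, depth, intensity in points:
        ix, iy = int(px), int(py)
        if not (0 <= ix < width and 0 <= iy < height):
            continue
        if depth < zbuf[iy][ix] - z_tol:
            # clearly closer: restart blending at this depth
            zbuf[iy][ix] = depth
            accum[iy][ix] = 0.0
            wsum[iy][ix] = 0.0
        elif depth > zbuf[iy][ix] + z_tol:
            continue  # occluded: outside the blending region
        dx, dy = px - (ix + 0.5), py - (iy + 0.5)
        w = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
        accum[iy][ix] += w * intensity
        wsum[iy][ix] += w
    return [[accum[y][x] / wsum[y][x] if wsum[y][x] > 0.0 else 0.0
             for x in range(width)] for y in range(height)]
```

The normalization by the summed weights corresponds to Levoy et al.'s partial-coverage weighting; a faithful implementation would also spread each point over several pixels.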
In another technique, as described by Grossman et al. in “Point Sample Rendering,” Proceedings of the Eurographics Workshop ’98, Rendering Techniques 1998, pp. 181–192, July 1998, the point samples are obtained by sampling orthographic projections of an object on an equilateral triangle lattice. The equilateral triangle lattice was preferred to a quadrilateral one because the spacing between adjacent sampling points is more regular.
Dally et al., in "The Delta Tree: An Object-Centered Approach to Image-Based Rendering," Technical Report AM-1604, MIT, May 1996, introduced the delta tree as an object-centered approach to image-based rendering. The movement of the viewpoint in their method, however, is still confined to particular locations.
All of the known representations have some limitations. Therefore, what is needed is an object representation that combines the best features of each and simplifies rendering.
**SUMMARY OF THE INVENTION**
The present invention provides a method for rendering objects with rich shapes and textures at interactive frame rates. The method is based on surface elements (surfels) as rendering primitives. Surfels are point samples of a graphics model. In a preprocessing stage, the surfaces of complex geometric models are sampled along three orthographic views. The invention adaptively samples the object using image space resolution. At the same time, computation-intensive calculations such as texture, bump, or displacement mapping are performed. By moving rasterization and texturing from the core rendering pipeline to the preprocessing step, the rendering cost is dramatically reduced.
From a rendering point of view, the surfel representation according to the invention provides a discretization of the geometry and, hence, reduces the object representation to the essentials needed for rendering. By contrast, triangle primitives implicitly store connectivity information, such as vertex valence or adjacency; such data is not necessarily available or needed for rendering.
Storing normals, prefiltered textures, and other per-surfel data enables one to build high quality rendering processes. Shading and transformations are applied on a per-surfel basis to achieve Phong illumination, bump and displacement mapping, as well as other advanced rendering features.
The rendering also provides environment mapping with a painterly surfel rendering process running at interactive frame rates. A hierarchical forward projection algorithm allows one to estimate the surfel density per output pixel for speed-quality tradeoffs.
A surfel rendering pipeline complements existing graphics pipelines. The pipeline trades memory overhead for rendering performance and quality. The present invention is suitable for interactive 3D applications, particularly for organic objects with high surface details, and for applications where preprocessing is not an issue. These qualities make the present invention ideal for interactive games.
Surfels according to the invention are a powerful paradigm to efficiently render complex geometric objects at interactive frame rates. Unlike classical surface discretizations, i.e., triangle or quadrilateral meshes, surfels are point primitives without explicit connectivity. Surfel attributes comprise depth, texture color, normal, and others. As a preprocess, an octree-based surfel representation of a geometric object is constructed. During sampling, surfel positions and normals are optionally perturbed, and different levels of texture colors are prefiltered and stored per surfel in a view-independent manner.
During rendering, a hierarchical forward warping algorithm projects surfels to a z-buffer (depth buffer). A novel method called visibility splatting determines visible surfels and holes in the z-buffer. Visible surfels are shaded using texture filtering, Phong illumination, and environment mapping using per-surfel normals. Several methods of image reconstruction, including supersampling, offer flexible speed-quality tradeoffs. Due to the simplicity of the operations, the surfel rendering pipeline is amenable to a hardware implementation. Surfel objects offer complex shape, low rendering cost and high image quality, which makes them specifically suited for low-cost, real-time graphics, such as games.
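The visibility-splatting steps shown in FIG. 9c (initialize the z-buffer, project surfel depths, then project a tangential disk around each surfel) can be sketched as follows. This is a simplified illustration with invented names: the patent projects the tangential disk as an ellipse with per-pixel depths, whereas here the disk is approximated by a circular footprint written at the surfel's own depth.

```python
def visibility_splat(surfels, width, height):
    """Sketch of visibility splatting. Surfel: (x, y, depth, radius).
    Returns a z-buffer in which each surfel and its tangential disk
    have been written wherever they are closer than previous entries."""
    inf = float("inf")
    zbuf = [[inf] * width for _ in range(height)]      # 951: initialize
    for x, y, depth, radius in surfels:
        ix, iy = int(x), int(y)
        if 0 <= ix < width and 0 <= iy < height and depth < zbuf[iy][ix]:
            zbuf[iy][ix] = depth                       # 952/953: surfel depth
        r = int(radius) + 1                            # 954: tangential disk
        for dy in range(-r, r + 1):                    # 955: project the disk
            for dx in range(-r, r + 1):
                if dx * dx + dy * dy > radius * radius:
                    continue                           # outside the disk
                px, py = ix + dx, iy + dy
                if (0 <= px < width and 0 <= py < height
                        and depth < zbuf[py][px]):
                    zbuf[py][px] = depth               # 956: write if closer
    return zbuf
```

Pixels left at infinity after splatting are the holes that the image reconstruction stage must fill.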
More particularly, a rendering system includes a memory storing shape and shade attributes of a surface of the object. The attributes are arranged as an octree in the memory. The octree includes a plurality of nodes arranged at a plurality of levels, each node storing a plurality of zero-dimensional n-tuples, each n-tuple locally approximating the shape and shade attributes of a portion of the surface of the graphic object, and the n-tuples having a sampling resolution of an image space. A plurality of parallel processing pipelines are connected to the memory. The pipelines project the shape and shade attributes of the octree to an image plane having a selected orientation by traversing the n-tuples of the nodes of the octree from a lowest resolution level to a highest resolution level.
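The coarse-to-fine traversal order described above amounts to a breadth-first walk of the octree, lowest resolution level first; the node layout used here (a dict with `level` and `children` keys) is invented purely for illustration.

```python
def traverse_coarse_to_fine(root):
    """Visit octree nodes level by level, lowest resolution first, so
    rendering can stop at the level whose surfel density matches the
    output pixel density. Returns the node levels in visit order."""
    frontier = [root]
    order = []
    while frontier:
        next_frontier = []
        for node in frontier:
            order.append(node["level"])          # process this node's surfels
            next_frontier.extend(node["children"])
        frontier = next_frontier
    return order
```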
The graphic object is sampled by casting rays through the object. The rays originate at orthogonal planes surrounding the object. The surface of the object is sampled for shape and shade attributes at points where the rays intersect the surface. The sampled shape and shade attributes of each sampled point are stored in the octree stored in the memory.
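For a concrete picture of this sampling step, the sketch below casts one of the three orthogonal ray families (along +z from a grid on the xy plane) through an implicit sphere and keeps every intersection per ray, which is exactly the layered structure of one LDI. The grid placement and function names are assumptions of this sketch, not the patent's procedure.

```python
import math

def sample_sphere_ldi(radius, resolution):
    """One orthogonal layered depth image (LDI) of a sphere centered at
    the origin: each grid cell stores all ray-surface intersection
    depths (front and back hits), not just the first."""
    ldi = {}
    step = 2.0 * radius / resolution
    for i in range(resolution):
        for j in range(resolution):
            # ray origin on the sampling plane, centered on the sphere
            x = -radius + (i + 0.5) * step
            y = -radius + (j + 0.5) * step
            d2 = radius * radius - x * x - y * y
            if d2 <= 0.0:
                continue                  # ray misses the sphere
            z = math.sqrt(d2)
            ldi[(i, j)] = [-z, z]         # one surfel per intersection
    return ldi
```

Repeating this for rays along +x and +y yields the three LDIs of a layered depth cube.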
Shade attributes of the surface points of a graphic object are filtered by constructing tangential disks at the position of each surface point. The tangential disks have increasingly larger radii. Each tangential disk is projected to an ellipse in texture space. View-independent filter functions are applied at the position of each surface point to generate a texture mipmap for the surface point. The filter functions have an extent equal to the projected tangential disk. The surface point is projected to the pixels in the depth buffer, and a view-dependent filter function is applied to each pixel in the image buffer to determine colors for the pixels.
**BRIEF DESCRIPTION OF THE DRAWINGS**
FIG. 1 is a diagrammatic of a surfel of a graphics object according to the invention;
FIG. 2 is a block diagram of a preprocessing sampling stage;
FIG. 3 is a block diagram of a surfel rendering pipeline;
FIG. 4 is a diagrammatic of layered depth cube sampling methods;
FIGS. 5a–b are diagrammatics of texture prefiltering;
FIGS. 6a–b are diagrammatics of two levels of a LDC tree;
FIGS. 7a–b are diagrammatics of LDC reduction;
FIG. 8 is a diagrammatic of surfel density estimation;
FIGS. 9a–b are diagrammatics of visibility splatting;
FIG. 9c is a flow diagram of a method for storing depth values in a z-buffer;
FIGS. 10a–b are diagrammatics of projected surfel texture mipmap;
FIG. 11 is a diagrammatic of view-dependent texture filtering, and
FIGS. 12a–b are diagrammatics of image reconstruction.
**DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS**
**Conceptual Overview of Surfels**
We describe an image-based adaptive sampling and object-based rendering of graphic objects represented as surface elements, i.e., "surfels." As shown in FIG. 1, we define a surfel 100 as a zero-dimensional n-tuple with shape and shade attributes that locally approximates a portion 101 of a surface 102 of a graphic object 103. The shape attributes can include position, orientation, and depth information of the object. The shade attributes can include texture, material properties, and opacity. As described in greater detail below, we store surfels as a reduced layered depth cube (LDC) tree.
The surfel representation of our invention is a projection of an image space resolution into the object space, resulting in an arbitrary 2D manifold. In other words, our surfel position attributes have object space coordinates with image space resolution. Surfel manifolds can be connected to each other to form a more complex 2D manifold. The manifold "outlines" arbitrary objects, real or imagined.
In contrast with our techniques, prior art rendering primitives are usually sampled with an object space resolution. Our representation combines object space rendering and image space sampling by defining a mapping between object surfels and image plane pixels. Surfels are generated according to the image resolution. Thus, no detail smaller than a pixel is considered when sampling the object. By combining the object space coordinates and image space resolution sampling, we provide rendering that is simple, efficient and fast. We describe an object-order projection process to pixels. Using a technique called visibility splatting, occluded surfels are discarded, and a continuous 2D image is reconstructed using interpolation techniques.
Sampling according to the image space resolution provides a direct correspondence between sampled object space and image space. By defining surfels this way, rendering of objects becomes easier in the sense that resampling of the object is not required during rendering, no matter what the viewing direction. Thus, rendering "surfelized" objects is more efficient. A surfel grid, with image space resolution, allows us to render presampled objects for any viewing direction.
Table A compares prior art polygons, voxels, and point samples with surfels according to our invention. The table shows that our surfels have attributes similar to known prior art representation primitives.
**Table A**
<table>
<thead>
<tr>
<th>Property</th>
<th>Polygons</th>
<th>Voxels</th>
<th>Points</th>
<th>Surfels</th>
</tr>
</thead>
<tbody>
<tr>
<td>Geometry</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Sampling</td>
<td>Object</td>
<td>Object</td>
<td>Object</td>
<td>Screen</td>
</tr>
<tr>
<td>Grid</td>
<td>No</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Connected</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Deformation</td>
<td>Semi-hard</td>
<td>Hard</td>
<td>Easy</td>
<td>Easy</td>
</tr>
</tbody>
</table>
In some ways, a surfel has the attributes of a pixel of a converted polygon when the polygon has a size of about one pixel. A surfel can also be considered as an extracted 8-connected surface voxel, where the cell in which the surfel is located has a dimension of 1×1×1 pixel, and has six adjacent surfels. A surfel object can also be thought of as a mapping of a particle cloud that is defined on the resolution of the image grid.
Surfels also have differences. For example, surfels are unlike voxels and particles in their geometry. Surfels are unlike polygons and particles with respect to a grid. Surfels are unlike voxels and particles in the way that neighboring elements are related. Surfels are unlike points in that they are sampled according to an expected output screen resolution and not according to object space criteria. Surfels also differ from points in that their connectivity is not explicit but implicit, arising from the discrete surfel sampling grid.
Compared with prior art primitives, the most important difference in the way that we define surface elements is that surfels are sampled according to the image space resolution. Voxels and particles are usually sampled according to the object space resolution. Polygons can be sampled at image resolution; however, the sampling must be done just prior to projection and rendering when the object is deformed, because the sampling is view dependent. For surfels, the sampling to image resolution can be done once in a preprocessing step because the sampling is view independent.
In the image space resolution sampling according to our invention, graphic objects include just enough surfels to reconstruct the surface of the object by simple projection of the surfels to an image plane, followed by image reconstruction. For example, a rectangular surfel polygon of 100 by 100 surfels will produce 100 by 100 pixels on the image plane. The image plane is physically expressed as pixels in an image buffer. Normally, the contribution of a surfel to the image will be about one to one.
Preprocessing and Rendering
Our invention deals with graphic objects in two stages, preprocessing and rendering. In a preprocessing stage, we sample a graphic object and then filter the sampled data. We preprocess a particular graphic object only once. The sampling can be done by software programs. Because this is a one time operation, sophisticated techniques can be used to extract as much attribute information from the object as possible, and to render the sampled object to a data structure that is efficient to render for any viewing direction to produce quality images. In the rendering stage, we render the data structure. Here, we use a hardware pipeline. Pragmatically, we do the hard work once, so that the work we have to do many times becomes easy. This makes our pipelined surfel rendering well-suited for animation applications.
Sampling and Prefiltering
FIG. 2 is a high level block diagram of a sampling preprocessing stage 200. An adaptive sampling process 210 converts a graphic object 201 and its texture attributes to surfels 211. During sampling, we use ray casting to arrange the surfels in three orthogonal layered depth images (LDIs). The LDIs store multiple surfels along each ray, one for each ray-surface intersection point. We call this arrangement of three orthogonal LDIs a layered depth cube (LDC) or "block." For example, we can use a sampling resolution of 512 for an expected output resolution of 480. That is, we choose a sampling resolution to provide a predetermined image quality.
A prefiltering step 220 is described in greater detail below. The main purpose of this step is to extract view-independent texture attributes of the blocks. In our data structure, an LDC "block" is attached to each leaf node of an octree 221. Octrees are well known in computer graphics, see, for example, Veenstra et al., "Line drawings of octree-represented objects," ACM Transactions on Graphics, Vol. 7, No. 1, pp. 61–75, January 1988. The octree is used to index three dimensions. Each level of our LDC tree corresponds to a different resolution of the surfel object. In a data reduction step 230, we optionally reduce the LDC tree to a reduced LDC tree 231. Preferably, the reduction is three to one. This reduces storage costs, and further improves rendering performance.
An important and novel aspect of our sampling method is a distinction between sampling shape (geometry) and sampling shade (texture color). A surfel stores shape attributes, such as surface position and orientation, e.g., the surface normal 104 in FIG. 1. In our preferred embodiment, the x-y position is implicitly defined by the location of the block (node) in the LDC tree 221, that is, explicit x-y coordinates are not stored. Depth information (z coordinates) is explicitly stored in the octree. The orientation of the surface is given by the surface normal 104, see FIG. 1. Instead of actually storing a normal, we store an index to a quantized normal table that is used during reflection and environment map shading. As stated above, the shape attributes are based on object space.
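The quantized-normal idea above can be sketched as follows: each surfel stores a small integer index into a shared table of unit normals instead of a full vector. The table construction (latitude/longitude binning) and sizes are assumptions for illustration; the patent only states that an index into a quantized normal table is stored.

```python
# Sketch of a quantized normal table (assumed lat/long binning).
import math

def build_normal_table(n_theta=16, n_phi=32):
    table = []
    for a in range(n_theta):
        theta = math.pi * (a + 0.5) / n_theta
        for b in range(n_phi):
            phi = 2.0 * math.pi * b / n_phi
            table.append((math.sin(theta) * math.cos(phi),
                          math.sin(theta) * math.sin(phi),
                          math.cos(theta)))
    return table

def quantize_normal(n, table):
    # nearest table entry by dot product (all entries are unit length)
    return max(range(len(table)),
               key=lambda i: sum(x * y for x, y in zip(n, table[i])))
```

A 16 bit index (as in Table B) would allow up to 65536 table entries; the 512-entry table here is merely a small example.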
Shade is expressed as multiple levels of prefiltered texture colors. We call this novel hierarchical color information a surfel texture mipmap. During the prefiltering 220, other view-independent methods, such as bump and displacement mapping, can also be performed to extract shape and shade attributes.
Table B gives the minimum storage requirements per surfel.
<table>
<thead>
<tr>
<th>Data Field</th>
<th>Storage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Three Surfel Texture Mipmap Levels</td>
<td>5 × 24 bits</td>
</tr>
<tr>
<td>Index to Normal Table</td>
<td>16 bits</td>
</tr>
<tr>
<td>LDI Depth Value</td>
<td>32 bits</td>
</tr>
<tr>
<td>Index to Material Table</td>
<td>16 bits</td>
</tr>
<tr>
<td>Total Bytes per Surfel</td>
<td>37 bytes</td>
</tr>
</tbody>
</table>
The size of the LDC tree is about a factor of two larger than the sampled data due to overhead, e.g., pointers, in the octree data structure. The LDC tree can be substantially compressed by run length coding or wavelet-based compression techniques.
Rendering Pipeline
FIG. 3 shows a rendering pipeline 300 for our surfels. The pipeline hierarchically projects the blocks (nodes) of the LDC tree to pixels of the image plane 399 using perspective projection. Note, the orientation of the image plane for the purpose of rendering can be arbitrary, and different from the orientation of the three orthogonal depth images used during sampling.
The rendering is accelerated by block culling 310 and fast incremental forward warping 320. We estimate the projected surfel density per output pixel to control rendering speed and quality of the image reconstruction.
A depth-buffer (z-buffer), together with a novel method called visibility splatting 330, resolves visibility. Here, tangential disks at each surfel are scan-converted into the z-buffer in order to detect surface holes and prevent hidden (occluded) surfels from being used in the reconstruction process.
Texture colors of visible surfels are filtered 340 using linear interpolation between appropriate levels of the surfel texture mipmap.
As shown in FIG. 5b, each tangential disk is mapped to an ellipse 503 in texture space using a predefined texture parameterization of the surface. A Gaussian kernel can be used to filter the texture. The resulting color is assigned to the surfel. To enable adequate texture reconstruction, the tangential disks and their elliptical filter footprints (dotted lines) in texture space overlap each other as shown in FIGS. 5a–b. Consequently, we choose $s_{\max}$, the maximum distance between adjacent surfels in object space, as the radius for the tangential disks. This usually guarantees that the tangential disks overlap each other in object space and that their projections in texture space overlap. Because we use a modified z-buffer filling method to resolve visibility, as described below, not all surfels may be available for image reconstruction. This can lead to texture aliasing artifacts. Therefore, we store several, typically at least three, prefiltered texture samples per surfel. The tangential disks have increasingly larger radii. Each of the disks is mapped to texture space and used to compute the prefiltered colors. We call the prefiltered colors a surfel texture mipmap. FIG. 5f shows the elliptical footprints 503–505 of the increasingly larger elliptical tangential disks in texture space.
Data Structure
We use an efficient hierarchical data structure to store the LDCs acquired during sampling. The LDC octree 221 allows us to quickly estimate the number of projected surfels per pixel and to trade rendering speed for higher image quality.
LDC Tree
We avoid resampling and splatting during image reconstruction by storing LDCs at each node (block) in the octree that are subsampled versions of the highest resolution LDC. Our octree is recursively constructed from the bottom up. The highest resolution LDC—acquired during geometry sampling—is stored at the lowest level (n=0) of the LDC tree, and the lowest resolution at the top.
As shown in FIGS. 6a–b for two dimensions, each LDC can be subdivided into blocks with user-specified dimension 601. FIG. 6a shows the highest resolution blocks of the LDC tree using a 2D drawing. Blocks (nodes) on higher levels of the octree, i.e., lower resolution, are constructed dyadically, i.e., by subsampling their children at multiples of some power of two. FIG. 6b shows level n=1 of the LDC tree. Note that surfels at higher levels of the octree 602 reference surfels in the LDC level 0 604, i.e., surfels that appear in several blocks of the hierarchy are stored only once, and are shared between blocks.
If the highest resolution LDC has a pixel spacing of $h$, then the LDC at level n has a pixel spacing of $2^nh$. The height of the LDC tree is selected by the user. Choosing a height of one flattens the hierarchy, storing only the highest resolution LDC. Because the LDC tree naturally stores a level-of-detail representation of the surfel object, its lowest resolution usually determines the height of the octree.
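The dyadic construction just described can be sketched in a few lines: a level-$n$ block references every $2^n$-th surfel of the full-resolution LDI, so its pixel spacing is $2^n h$. The dictionary encoding of an LDI is an illustrative assumption.

```python
# Minimal sketch of dyadic subsampling for higher LDC tree levels.
def subsample_block(ldi, level):
    """ldi: dict mapping (i, j) grid positions to surfel data.
    Keep every 2^level-th surfel; shared surfels are referenced, not copied."""
    step = 2 ** level
    return {(i // step, j // step): s
            for (i, j), s in ldi.items() if i % step == 0 and j % step == 0}

def pixel_spacing(h, level):
    # spacing doubles at each coarser level
    return (2 ** level) * h
```

Because the subsampled block only references existing surfels, storage grows by the octree overhead rather than by duplicated surfel data, matching the sharing described above.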
Empty blocks 603, shown as white squares in FIG. 6a, are not stored in the LDC tree. Consequently, the block dimension 601 is not related to the dimension of the highest resolution LDC, and can be selected arbitrarily. Choosing the block dimension b=1 makes the LDC tree a fully volumetric octree representation.
Three-to-One Reduction
To reduce storage and rendering time, it can be useful to optionally reduce the LDC tree to a layered depth image on a block-by-block basis. Because this typically corresponds to a three-fold increase in rendering speed, we call this step 3-to-1 reduction 230. First, we choose one LDI in the block as the target LDI. We warp and resample the two remaining LDIs to the pixels of the target LDI.
As shown in FIGS. 7a and 7b, surfels 701–702 in FIG. 7a are resampled to grid locations of sampling ray intersections 703–704 as shown in FIG. 7b. We use nearest neighbor interpolation, although more sophisticated filters, e.g. splatting can also be implemented. The resampled surfels are stored in the reduced LDC tree 231.
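The nearest-neighbor resampling of the 3-to-1 reduction can be sketched as below, simplified to one dimension (ray index along the target LDI, plus a depth). Surfels from the two source LDIs are snapped to the nearest target ray and merged, keeping an existing target surfel when rays collide. The function name and the 1-D simplification are assumptions for illustration.

```python
# Hedged sketch of 3-to-1 reduction via nearest-neighbor resampling.
def reduce_to_target(target, sources, h):
    """target/sources: lists of (x, depth) surfels; target rays lie at x = k*h."""
    merged = dict()
    for x, depth in target:
        merged[round(x / h)] = depth       # target surfels keep their rays
    for ldi in sources:
        for x, depth in ldi:
            k = round(x / h)               # snap to nearest target ray
            merged.setdefault(k, depth)    # prefer the target's own surfel
    return sorted((k * h, d) for k, d in merged.items())
```

A real implementation would resample full surfel records (depth, normal index, texture mipmap) and could substitute a splatting filter for the nearest-neighbor snap, as the text notes.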
The reduction and resampling process degrades the quality of the surfel representation, both for shape and for shade. Surfels that are resampled to nearby positions in the target LDI (e.g., within the block bounding box 706) may come from different surfaces and thus have very different texture colors and normals. We could compare distances against a threshold to determine when surfels belong to the same surface. However, surfels from different surfaces may be closer than the threshold, which usually happens for thin structures. This may cause color and shading artifacts that are worsened during object motion. In practice, however, we did not encounter severe artifacts due to 3-to-1 reduction. Because our rendering pipeline handles LDCs and LDIs the same way, we can store blocks with thin structures as LDCs, while all other blocks can be reduced to single LDIs.
We can determine bounds on the surfel density on the surface of the object after 3-to-1 reduction. Given a target LDI with pixel spacing $h$, the maximum distance between adjacent surfels on the surface after reduction is bounded by $\sqrt{3}h$. The maximum distance between surfels increases due to the elimination of redundant surfels, which makes the imaginary Delaunay triangulation on the surface more uniform.
Rendering Pipeline
The rendering pipeline 300 takes the surfel LDC tree 221 or the reduced LDC tree and renders it as an image 399 using hierarchical visibility culling and forward warping of blocks for a particular image plane orientation. Hierarchical rendering also allows us to estimate the number of projected surfels per output pixel. For maximum rendering efficiency, we project approximately one surfel per pixel and use the same resolution for the z-buffer as in the output image. For maximum image quality, we project multiple surfels per pixel, using a finer resolution of the z-buffer and high quality image reconstruction.
Block Culling
We traverse the LDC tree from the top, i.e., the lowest resolution nodes, to the bottom, the highest resolution nodes. For each block (node), we first perform view frustum culling using the block bounding box. Because the bounding box orientation can be arbitrary, different views may reveal different portions of the object. Next, we use visibility cones to perform the equivalent of backface culling of blocks. Using the surfel normals, we pre-compute a visibility cone per block, which gives a fast, conservative visibility test: no surfel in the block is visible from any viewpoint within the cone. In contrast to prior art point sampling rendering, we perform all visibility tests hierarchically in the LDC tree, which makes our tests more efficient.
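One plausible form of the cone-based backface test is sketched below: per block we precompute an average normal (cone axis) and a half-angle bounding all surfel normals; the block is culled when even the most view-facing normal in the cone must point away from the viewer. The exact cone construction is an assumption; the patent only states that visibility cones give a fast, conservative test.

```python
# Hedged sketch of visibility-cone backface culling for a block.
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(a):
    l = math.sqrt(_dot(a, a))
    return tuple(x / l for x in a)

def visibility_cone(normals):
    """Cone (axis, half_angle) bounding all surfel normals of a block."""
    axis = _norm(tuple(sum(c) for c in zip(*normals)))
    half_angle = max(math.acos(max(-1.0, min(1.0, _dot(axis, _norm(n)))))
                     for n in normals)
    return axis, half_angle

def cull_block(axis, half_angle, to_eye):
    """True if no surfel in the cone can face the viewer (conservative)."""
    angle = math.acos(max(-1.0, min(1.0, _dot(axis, _norm(to_eye)))))
    return angle - half_angle >= math.pi / 2
```

The test only ever culls blocks that are certainly invisible; a block straddling the 90-degree boundary is kept and resolved later by the z-buffer.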
Block Warping
As shown in FIG. 8, to choose the octree level to be projected, we conservatively estimate, for each block, the number of surfels per pixel. We can choose one-surfel per pixel for fast rendering, or multiple surfels per pixel for supersampling. The number of surfels per pixel is determined by \( d_{\text{max}} \). The value of \( d_{\text{max}} \) is the maximum distance between adjacent surfels in image space.
We estimate \( d_{\text{max}} \) per block by projecting the four major diagonals 312 of the block bounding box 311. For orthographic projection, their maximum length is an upper bound on \( d_{\text{max}} \). The error introduced by using orthographic projection is small because a block typically projects to a small number of pixels.
During rendering, the LDC tree is traversed top to bottom. At each level, \( d_{\text{max}} \) is compared to the radius \( r \) 802 of the pixel reconstruction filter. If \( d_{\text{max}} \) of the current block is larger than \( r \), then its children are traversed. If \( d_{\text{max}} \) of the block is smaller than \( r \), we project the block, rendering approximately one surfel per pixel. The surfel density per pixel can be increased by choosing a smaller \( r \), e.g., making \( r \) the diagonal of a subpixel. During forward warping, \( d_{\text{max}} \) is stored with each projected surfel for subsequent use in the visibility splatting and the image reconstruction stages.
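The level-selection rule above amounts to a short recursive traversal: descend until a block's estimated image-space surfel spacing $d_{\max}$ drops below the reconstruction-filter radius $r$, then project that block. The nested-dictionary tree encoding is an illustrative assumption.

```python
# Sketch of d_max-driven level selection in the LDC tree.
def traverse(block, r, out):
    """Append the names of blocks to project, refining while d_max > r."""
    if block["d_max"] <= r or not block["children"]:
        out.append(block["name"])          # dense enough (or a leaf): project
    else:
        for child in block["children"]:    # too sparse: descend to finer level
            traverse(child, r, out)
```

A leaf whose $d_{\max}$ still exceeds $r$ is projected anyway, since no finer resolution exists; the resulting undersampling is what visibility splatting and hole filling later compensate for.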
To warp the position attributes of the LDC blocks to image space, we use an optimized incremental block warping. Recall, the positional attributes of our surfels are expressed with object space coordinates. Hence, we warp from object to image space. This warping is highly efficient due to the regularity of our LDCs. The LDIs in each LDC block are warped independently, which allows us to render an LDC tree where some or all blocks have been reduced to single LDIs after 3-to-1 reduction as described above.
Visibility Splatting
Perspective projection, high z-buffer resolution, and magnification or zooming may lead to undersampling or “holes” in the z-buffer. A z-buffer pixel that receives no visible surfel during projection is a hole. Visibility splatting detects these holes and marks them so that they can be filled during image reconstruction.
Depth Buffer
We populate the z-buffer with depth values as shown in the steps 950 of FIG. 9C.
Each pixel of the z-buffer stores a pointer to the nearest surfel, i.e., the surfel which has the smallest depth (\( d \)) value, and a current minimum depth value. Pixels are also marked as “holes,” or not. The pixels of our z-buffer are initialized 951 with maximum depth values, e.g., “infinity” or a background scene, and no holes.
Surfel depths are projected 952 to the z-buffer using nearest neighbor interpolation. Recall from Table B that the surfel depth is stored with each surfel. The z-buffer offers a good tradeoff between quality and speed, and our z-buffer can be integrated with traditional polygon graphics rendering methods, such as OpenGL. The depth value of each projected surfel (\( s \)) is compared to the depth value of the pixel (\( p \)); the surfel's depth and pointer are stored in the pixel only if the depth value of the surfel (\( s \)) is less than the depth value of the pixel (\( p \)). Thus, only surface features lying in front of other surface features are visible. In step 954, tangential disks 501 are constructed for each surfel 502 in object space.
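Steps 951-953 can be sketched as a plain z-buffer with per-pixel surfel pointers: pixels start at infinity, and a projected surfel replaces a pixel's depth and pointer only if it is closer than what the pixel currently holds. The dictionary pixel layout is an illustrative assumption.

```python
# Minimal z-buffer sketch: initialize to infinity, keep the nearest surfel.
INF = float("inf")

def make_zbuffer(w, h):
    return [[{"depth": INF, "surfel": None, "hole": False}
             for _ in range(w)] for _ in range(h)]

def project_surfel(zbuf, x, y, depth, surfel):
    pixel = zbuf[y][x]                     # nearest-neighbor interpolation
    if depth < pixel["depth"]:             # standard less-than depth test
        pixel["depth"] = depth
        pixel["surfel"] = surfel
```

The `hole` flag stays False here; it is set by the visibility splatting pass described next, and consulted again during image reconstruction.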
As shown in FIGS. 9a–b, we scan-convert the projection of the surfel tangential disks 501 into the z-buffer 900 to correctly resolve visibility problems due to holes and back facing surfaces. The tangential disks that are constructed 954 have a radius of \( 2^n s_{\max} \), where \( s_{\max} \) is the maximum distance between adjacent surfels in object space and \( n \) is the level of the block. The disks have an orientation determined by the surfel normal 502.
As shown in FIG. 9b, after projection 955, the tangential disks form an ellipse 901 around the surfel. We approximate the ellipse 901 with a partially axis-aligned bounding box 902. The bounding box parallelogram is scan-converted, and each z-buffer pixel is filled with the appropriate depth, depending on the surfel normal \( N \) 502. That is, if a depth value is less than a previously stored depth value, the stored depth value is overwritten.
We use orthographic projection in step 955 for our visibility splatting to simplify the calculations. The direction of the minor axis \( a_{\min} \) of the projected ellipse is parallel to the projection of the surfel normal. The major axis \( a_{\max} \) 912 is orthogonal to \( a_{\min} \). The length of the major axis is the projection of the disk radius, which is approximated by \( d_{\max} \) 801 of FIG. 8. This approximation takes the orientation and magnification of the LDC tree during projection into account.

Next, we calculate the coordinate axis that is most parallel to \( a_{\min} \), e.g., the y-axis 913 in FIG. 9b. The short side of the bounding box is axis-aligned with this coordinate axis to simplify scan conversion. The height \( h \) 914 of the bounding box is determined by intersecting the ellipse with the coordinate axis. The width \( w \) 915 of the bounding box is determined by projecting the vertex at the intersection of the major axis and the ellipse onto the x-axis.
The values $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$ are the partial derivatives of the surfel depth $z$ with respect to the image x and y directions. These are constant because of the orthographic projection and can be calculated from the unit normal. During scan conversion, the depth at each pixel inside the bounding box is calculated using the partial derivatives $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$. In addition, we add a small threshold $\epsilon$ to each projected z value. The threshold $\epsilon$ prevents the surfels that lie under the disk but still on the foreground surface from accidentally being discarded. In step 956, the depth values of the pixels $(p_i)$ are overwritten with the depth values of the projected tangential disk $(\tau_i)$ only if $\tau_i < p_i$.
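The scan-conversion fill of steps 955-956 can be sketched as follows: inside the bounding box of the projected tangent disk, the depth at each pixel is extrapolated from the surfel depth using the constant partials dz/dx and dz/dy, offset by the small epsilon, and written only if it passes the depth test. The flat-list z-buffer and parameter names are illustrative assumptions; a real implementation scan-converts the parallelogram, not an axis-aligned box.

```python
# Hedged sketch of visibility splatting: fill a projected disk's bounding box.
def splat_disk(zbuf, x0, y0, z0, w, h, dzdx, dzdy, eps=1e-3):
    """Write extrapolated disk depths into zbuf (a 2-D list of floats)."""
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            # planar depth from the surfel, plus epsilon to spare surfels
            # that lie just under the disk on the foreground surface
            z = z0 + (x - x0) * dzdx + (y - y0) * dzdy + eps
            if z < zbuf[y][x]:             # overwrite only if closer
                zbuf[y][x] = z
```

Pixels whose stored depth is overwritten by a disk (and never by a surfel) are exactly the ones later marked as holes for image reconstruction.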
If the surface is extremely curved, then the tangential disks may not cover the surface completely, potentially leaving tears and holes. In addition, extreme perspective projection makes orthographic projection a bad approximation to the actual projected tangential disk. In practice, however, we did not see this as a major problem. If the projected tangential disk is a circle, i.e., the disk is almost parallel to the image plane, then the bounding box parallelogram is a bad approximation. In this case, we use a square bounding box instead.
It should be noted that our method for determining z-buffer depth values can also be used with polygons that are rasterized to pixels, with voxels, and other traditional point representations of objects. Our method can populate any z-buffer, independent of the underlying representation of the graphic object.
Texture Filtering
As described above, each surfel in the LDC tree stores several prefiltered texture colors in the surfel texture mipmap. During rendering, the surfel color is linearly interpolated from the surfel texture mipmap colors depending on the object minification and surface orientation.
FIG. 10a shows all visible surfels of a sampled surface projected to the z-buffer. The ellipses 1001 around the centers of the surfels mark the projection of the footprints of the highest resolution texture prefilter, as described above. Note that during prefiltering, we try to cover the entire surface with footprints. In FIG. 10a, the number of samples per z-buffer pixel is limited to one by applying z-buffer depth tests. A surfel pointer in the z-buffer is replaced with another pointer when another closer surfel is located for the same pixel.
In order to fill the gaps appearing in the coverage of the surface with texture footprints, the footprints of the remaining surfels have to be enlarged. If surfels are discarded in a given z-buffer pixel, then we can assume that the z-buffer pixels in the 3×3 neighborhood around the discarded pixels are not holes. Thus, the gaps can be filled when the texture footprint is defined by a disk at least the area of a z-buffer pixel. Consequently, the ellipse of the projected footprint has to have a minor radius of $\sqrt{2}s$ in the worst case, where $s$ is the z-buffer pixel spacing. We ignore the worst case and use $\frac{\sqrt{2}}{2}s$, implying that surfels are projected to z-buffer pixel centers. FIG. 10b shows the scaled texture footprints 1002 as ellipses around projected surfels.
As shown in FIG. 11, we use view-dependent texture filtering to select the appropriate surfel texture mipmap level. A circle 1101 is projected through an image space pixel onto a tangential plane 1102 of the surface from the direction of the view 1103, producing an ellipse 1104 in the tangent plane. The projection of the pixel is approximated with an orthographic projection. Similar to anisotropic texture mapping, the major axis of the projected tangent space ellipse is used to determine the surfel mipmap level. The surfel color is determined by linear interpolation between the closest two mipmap levels. This is a linear interpolation between two samples, as opposed to interpolating eight samples as in tri-linear mipmapping.
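The level selection and two-sample interpolation above can be sketched as follows: the length of the projected ellipse's major axis picks a fractional position between the stored footprint radii, and the color is linearly interpolated between the two nearest mipmap levels. Storing per-level footprint radii alongside colors is an illustrative assumption about the data layout.

```python
# Sketch of view-dependent surfel texture filtering (two-sample lerp).
def filter_color(mip_colors, mip_radii, major_axis):
    """mip_colors/mip_radii: per-level prefiltered RGB colors and radii,
    sorted from finest to coarsest; major_axis selects the level."""
    if major_axis <= mip_radii[0]:
        return mip_colors[0]               # finer than finest level: clamp
    if major_axis >= mip_radii[-1]:
        return mip_colors[-1]              # coarser than coarsest: clamp
    for lvl in range(len(mip_radii) - 1):
        r0, r1 = mip_radii[lvl], mip_radii[lvl + 1]
        if r0 <= major_axis <= r1:
            t = (major_axis - r0) / (r1 - r0)
            return tuple((1 - t) * a + t * b
                         for a, b in zip(mip_colors[lvl], mip_colors[lvl + 1]))
```

Only two prefiltered samples are touched per surfel, which is what makes runtime texture filtering cheap compared with tri-linear mipmapping's eight samples.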
Shading
In the prior art, an illumination model is typically applied before visibility testing. However, deferred shading after visibility splatting according to the invention avoids unnecessary work. Also, prior art particle shading is usually performed in object space to avoid transformation of normals to image space. However, we have already transformed the normals to image space during our visibility splatting as described above. With the transformed normals at hand, we can use cubic reflectance and environment maps to calculate a per surfel Phong illumination model with global effects.
Shading with per surfel normals results in specular highlights that are of ray tracing quality.
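A minimal sketch of the deferred per-surfel Phong evaluation, assuming the normals have already been transformed to image space by visibility splatting: the diffuse and specular terms are computed per visible surfel. The coefficient values and vector convention are illustrative assumptions; the patent's global effects via reflectance and environment maps are not modeled here.

```python
# Hedged sketch of per-surfel Phong shading (scalar intensity for brevity).
def phong(normal, light_dir, view_dir, kd=0.8, ks=0.2, shininess=16):
    """All vectors unit length, pointing away from the surface point."""
    n_dot_l = max(0.0, sum(a * b for a, b in zip(normal, light_dir)))
    if n_dot_l == 0.0:
        return 0.0                         # light behind the surface
    # reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * n_dot_l * n - l for n, l in zip(normal, light_dir))
    r_dot_v = max(0.0, sum(a * b for a, b in zip(r, view_dir)))
    return kd * n_dot_l + ks * r_dot_v ** shininess
```

Because shading runs after visibility splatting, this evaluation is performed only for surfels that actually survive the depth test, which is the saving the text refers to.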
Image Reconstruction and Antialiasing
To reconstruct a continuous surface from projected surfels is fundamentally a scattered data interpolation problem. In contrast with the prior art techniques such as splatting, we separate visibility calculations from image reconstruction. We mark z-buffer pixels with holes during our innovative visibility splatting as described above. These hole pixels are not used during image reconstruction because they do not contain any visible samples.
FIGS. 12a–b show image reconstruction in the z-buffer according to the invention. In FIG. 12a, the image (frame) buffer has the same resolution as the z-buffer. Surfels are mapped to pixel centers 1201 using nearest neighbor interpolation as shown with cross hatching. Holes 1202 are marked with a black X.
Recall, during forward warping, each surfel stores $d_{\max}$ as an estimate of the maximum distance between adjacent projected surfels of a block. This distance is a good estimate for the minimum radius of a pixel filter that contains at least one surfel. To interpolate the holes, we can use, for example, a radially symmetric Gaussian filter with a radius slightly larger than $d_{\max}$ positioned at hole pixel centers. Alternatively, to fill the holes, we can also adapt a pull-push method as described by Gortler et al. in "The Lumigraph," Computer Graphics, SIGGRAPH Proceedings, pp. 43–54, August 1996.
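The Gaussian hole interpolation can be sketched as a weighted average of the non-hole neighbors within the filter radius, centered at the hole pixel. Grayscale values stand in for RGB, and the choice sigma = radius/2 is an assumption; the patent only calls for a radially symmetric Gaussian slightly larger than $d_{\max}$.

```python
# Hedged sketch of Gaussian hole filling at a single hole pixel (x, y).
import math

def fill_hole(image, holes, x, y, radius):
    sigma = radius / 2.0                   # assumed width; not from the patent
    acc = wsum = 0.0
    r = int(math.ceil(radius))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            nx, ny = x + dx, y + dy
            if (0 <= ny < len(image) and 0 <= nx < len(image[0])
                    and not holes[ny][nx]
                    and dx * dx + dy * dy <= radius ** 2):
                w = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
                acc += w * image[ny][nx]   # only visible (non-hole) samples
                wsum += w
    return acc / wsum if wsum > 0 else 0.0
```

Because $d_{\max}$ bounds the gap between projected surfels, a radius slightly above it guarantees the window contains at least one valid sample in the common case.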
As shown in FIG. 12b, a high-quality alternative uses supersampling. Here, the output image resolution is half, or some other fraction, of the z-buffer resolution. Rendering for supersampling proceeds as before. During image reconstruction, we put a Gaussian filter at the centers of all output pixels to filter the subpixel colors. The radius of the filter is again $d_{\max}$, to cover at least one surfel. The minimum radius is $\sqrt{\frac{2}{\pi}}\,\delta$, where $\delta$ is the sidelength of an output pixel.
In yet another embodiment, we adapt an interactive version of the painterly rendering algorithm as described by Meier in “Painterly Rendering for Animation,” SIGGRAPH Proceedings, pp. 477–484, August 1996. In our adaptation, we render an oriented brush texture with per pixel alpha values at each visible surfel in the z-buffer. The texture is centered at each visible surfel, and its RGBA pixels are multiplied by the surfel color. To orient the “brush,” the surfel normal in image space is orthographically projected to the image plane, and the texture is axis-aligned with the resulting vector. The texture is then mapped to output pixels using image space rasterization, similar to texture splatting. The brush size for each surfel can be the same, or per surfel normal or texture derivatives can be used to scale the textures. Alternatively, each surfel could store an index into a table with brush type, orientation, and size. In contrast to Meier, we do texture splatting after visibility splatting.
It is instructive to describe how the color of an output pixel is determined for regular rendering and for supersampling in the absence of holes. For regular rendering, the pixel color is determined by nearest neighbor interpolation from the closest visible surfel in the z-buffer. The color of that surfel is determined by linear interpolation between two surfel texture mipmap levels. Thus, the output pixel color is determined from two prefiltered texture samples. In the case of supersampling, one output pixel contains the filtered colors of one surfel per z-buffer subpixel. Thus, up to eight prefiltered texture samples may contribute to an output pixel for 2×2 supersampling. This produces image quality similar to tri-linear mipmapping.
Our method, with hierarchical density estimation, visibility splatting, and surfel texture mapping, offers more flexible speed-quality tradeoffs than comparable prior art rendering systems.
A major advantage of our surfel rendering is that any kind of synthetic or scanned object can be converted to surfels. For example, we can sample volume data, point clouds, and LDIs of non-synthetic objects. Using an occlusion compatible traversal of the LDC tree, we enable order-independent transparency and true volume rendering. The hardware design of the surfel rendering pipeline is straightforward. Block warping involves only two conditionals for z-buffer tests. We do not need to perform clipping calculations. All frame buffer operations, such as visibility splatting and image reconstruction, can be implemented using standard rasterization and frame buffer techniques. Our rendering pipeline uses no inverse calculations, such as looking up textures from texture maps. Runtime texture filtering becomes simple with our pipeline. There is a high degree of data locality because shape and shade information can be loaded into the pipeline simultaneously with the surfel positional data. Consequently, caching will further improve performance.
Our surfel rendering is ideal for organic models with very high shape and shade complexity. Because we do rasterization and texture filtering in the preprocessing stage, and not in the pipeline, the rendering cost per pixel is dramatically reduced. Rendering performance is essentially determined by warping, shading, and image reconstruction. These operations can easily exploit vectorization, parallelism, and pipelining. Our surfel rendering pipeline offers several speed-quality trade-offs. By decoupling image reconstruction and texture filtering, we achieve much higher image quality than comparable prior art point sample approaches. We introduce visibility splatting, which is very effective at detecting holes and increases image reconstruction performance. Antialiasing with supersampling is naturally integrated in our system. Our pipeline is capable of high image quality at interactive frame rates.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
We claim:
1. A method for rendering a graphic object, comprising the steps of:
representing shape and shade attributes of a surface of the object in an octree stored in a memory, the octree including a plurality of nodes arranged at a plurality of levels, each node storing a plurality of zero-dimensional n-tuples, each n-tuple locally approximating the shape and shade attributes of a portion of the surface of the graphic object, and the n-tuples having a sampling resolution of an image space; and
projecting the shape and shade attributes of the octree to an image plane having a selected orientation by traversing the n-tuples of the nodes of the octree from a lowest resolution level to a highest resolution level.
2. The method of claim 1 further comprising the steps of:
executing software programs in a uniprocessor to represent the shape and shade attributes of the object in the octree; and
3. The method of claim 1 wherein the graphic object is adaptively sampled according to a resolution of the image plane.
4. The method of claim 1 further comprising the step of:
warping the n-tuples in object space to the image plane having an arbitrary selected orientation.
5. The method of claim 4 wherein each n-tuple octree is warped independently.
6. The method of claim 1 further comprising the steps of:
projecting a particular n-tuple onto a corresponding pixel of a depth buffer;
storing a depth value of the projected n-tuple in the pixel only if the depth value of the projected n-tuple is less than a previously stored depth value of the pixel;
constructing a tangential disk at a position of the corresponding n-tuple, the tangential disk having a radius greater than a maximum distance between the n-tuples;
projecting the tangential disks onto corresponding subsets of the pixels; and
storing depth values of the projected tangential disk in the corresponding subset of pixels only if the depth values of the projected tangential disk are less than the depth values of the corresponding subset of pixels.
7. The method of claim 1 wherein the shade attributes include a plurality of texture mipmap maps, and further comprising the steps of:
interpolating the plurality of texture mipmap maps to determine colors of the n-tuples.
8. The method of claim 7 further comprising the step of:
filtering the colors using a filter having a radius equal to
\[ \frac{\sqrt{s}}{2^s} \]
where \( s \) is the distance between the n-tuples.
9. The method of claim 8 wherein the filtering is dependent on a density of the projected n-tuples.
10. The method of claim 6 wherein a resolution of the depth buffer is greater than the resolution of the image plane.
11. An apparatus for rendering a graphic object, comprising:
a memory storing shape and shade attributes of a surface of the object, the attributes arranged as an octree in the memory, the octree including a plurality of nodes arranged at a plurality of levels, each node storing a plurality of zero-dimensional n-tuples, each n-tuple locally approximating the shape and shade attributes of a portion of the surface of the graphic object, and the n-tuples having a sampling resolution of an image space; and
a plurality of parallel processing pipelines connected to the memory, the pipelines projecting the shape and shade information of the octree to an image plane having a selected orientation by traversing the n-tuples of the nodes of the octree from a lowest resolution level to a highest resolution level.
12. The apparatus of claim 11 wherein each node is stored as a block, and each pipeline further comprises:
a block culling stage;
a forward warping stage;
a visibility splatting stage;
a texture filtering stage;
a view dependent shading stage; and
an image reconstruction and antialiasing stage.
13. The apparatus of claim 12 wherein the block culling stage uses visibility cones.
14. The apparatus of claim 12 wherein the forward warping stage warps the n-tuples in object space to the image plane having an arbitrary selected orientation.
15. The apparatus of claim 12 wherein each n-tuple octree is warped independently.
16. The apparatus of claim 11 further including a depth buffer, and wherein a particular n-tuple is projected onto a corresponding pixel of the depth buffer, and a depth value of the projected n-tuple is stored in the pixel only if the depth value of the projected n-tuple is less than a previously stored depth value of the pixel.
17. The apparatus of claim 12 wherein the shade attributes include a plurality of texture mipmap maps, and wherein the texture filtering stage interpolates a plurality of texture mipmap maps to determine colors of the n-tuples.
* * * * *
57139, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 57139, 0.0459]]}
|
olmocr_science_pdfs
|
2024-12-06
|
2024-12-06
|
863e624ceaf255a326684cd113f34330529d848a
|
Abstract—Establishing credible thresholds is a central challenge for promoting source code metrics as an effective instrument to control the internal quality of software systems. To address this challenge, we propose the concept of relative thresholds for evaluating metrics data following heavy-tailed distributions. The proposed thresholds are relative because they assume that metric thresholds should be followed by most source code entities, but that it is also natural to have a number of entities in the “long-tail” that do not follow the defined limits. In the paper, we describe an empirical method for extracting relative thresholds from real systems. We also report a study on applying this method in a corpus with 106 systems. Based on the results of this study, we argue that the proposed thresholds express a balance between real and idealized design practices.
Index Terms—Source code metrics; Relative thresholds; Software quality; Software measurement.
I. INTRODUCTION
Since the inception of the first programming languages, a variety of metrics have been proposed to measure source code properties such as size, complexity, cohesion, and coupling [1, 2, 3]. However, metrics are rarely used to control the quality of software products in an effective way. To promote the use of metrics as an effective measurement instrument, it is essential to establish credible thresholds [4, 5, 6]. With such thresholds, software quality managers can rely on metrics, for example, to certify new components or to monitor the degradation in quality caused by software aging.
Typically, metric thresholds are defined based on the personal experience of software quality experts. For example, industrial code standards for Java recommend that classes should have no more than 20 methods and that methods should have no more than 75 lines of code [7]. Recently, Alves et al. proposed a more transparent method to derive thresholds from benchmark data [4]. They illustrated the application of the method on a large software corpus and derived, for example, thresholds stating that methods with McCabe complexity above 14 should be considered very high risk.
However, it is well known that source code metric values usually follow heavy-tailed distributions [8, 9]. Therefore, in most systems it is “natural” to have source code entities that do not follow the proposed thresholds, for several reasons, including complex requirements, performance optimizations, machine-generated code, etc. In the particular case of coupling, for example, recent studies show that high coupling is never entirely eliminated from software design and that, in fact, some degree of high coupling might be quite reasonable [10].
Inspired by such findings, we claim in this paper that absolute thresholds should be complemented by a second piece of information, denoting the percentage of entities the upper limit should be applied to. More specifically, we propose the concept of relative thresholds for evaluating source code metrics, which have the following format:
\[ p\% \text{ of the entities should have } M \leq k \]
where \( M \) is a source code metric calculated for a given software entity (method, class, etc), \( k \) is the upper limit, and \( p \) is the minimal percentage of entities that should follow this upper limit. For example, a relative threshold can state that “85% of the methods should have McCabe \( \leq 14 \)”. Essentially, this threshold expresses that high-risk methods may impact the quality of a system when they represent more than 15% of the whole population of methods.
Our central contribution in this paper is the proposal of an empirical method to derive relative thresholds based on a statistical analysis of a software corpus and attempting to balance two forces. First, the derived relative thresholds should reflect real design rules, widely followed by the systems in the considered corpus. Second, the derived relative thresholds should not be based on rather lenient upper limits. For example, a threshold stating that “95% of the classes should have less than 100 attributes” is probably satisfied by most systems, since it is based on a very high number of attributes. For this reason, relative thresholds should also reflect idealized design rules, based on widely accepted quality principles [3].
The paper starts by describing the method proposed to extract relative source code metric thresholds (Section II) and follows by illustrating its application in a small scenario (Section III). Next, we report an extensive study, where we used our method to derive relative thresholds from 106 Java-based systems (Sections IV-A to IV-C). In this study, we also evaluated the following variations regarding the use of the proposed method: (a) its application to a subcorpus of the original corpus, including only systems sharing a common functional domain (Section IV-D); (b) a historical analysis, where we retrospectively evaluated the derived thresholds on previous versions of a subset of the systems in the corpus (Section IV-E); (c) an inequality analysis, where we evaluated the dispersion of the metric values among
the classes that respect the proposed thresholds (Section IV-F). In the paper, we also discuss the main properties and limitations of the proposed method (Section V). Finally, we present related work (Section VI) and the conclusion (Section VII).
II. RELATIVE THRESHOLDS
This section presents in detail the proposed method to extract relative source code metric thresholds. An illustrative example of its usage is presented in the next section.
**Goal:** We target metric values that follow heavy-tailed distributions, measured at the level of classes (although applying the method to other source code entities, such as methods or packages, is straightforward). Basically, the goal is to derive relative thresholds with the following format:
\[ p\% \text{ of the classes should have } M \leq k \]
where \( M \) is a given source code metric and \( p \) is the minimal percentage of classes in each system that should respect the upper limit \( k \). Therefore, this relative threshold tolerates \((100 - p)\%\) of classes with \( M > k \).
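Checking whether a single system follows a relative threshold \([p, k]\) reduces to a simple counting predicate. The sketch below is illustrative only; the function name and the sample values are ours, not part of the paper's tooling.

```python
def follows_relative_threshold(metric_values, p, k):
    """Return True if at least p% of the entities have M <= k.

    metric_values: list of metric values (one per class) for a system.
    p: minimal percentage of entities that must respect the upper limit k.
    """
    if not metric_values:
        return True  # an empty system trivially complies
    compliant = sum(1 for m in metric_values if m <= k)
    return 100.0 * compliant / len(metric_values) >= p

# Example: "80% of the classes should have NOA <= 8"
noa = [1, 2, 3, 3, 4, 5, 6, 8, 12, 40]  # hypothetical per-class NOA values
print(follows_relative_threshold(noa, p=80, k=8))  # 8 of 10 classes comply -> True
```

The \((100 - p)\%\) tolerance is what distinguishes this predicate from a classical absolute threshold, which would fail here because of the two classes in the tail.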
**Input:** First, we assume that the values of \( p \) and \( k \) that characterize a relative threshold for a metric \( M \) should emerge from a curated set of systems, which we call our Corpus.
Furthermore, to derive the values of \( p \) and \( k \), the proposed method relies on two constants, \( \text{Min} \) and \( \text{Tail} \), which give the derived thresholds a degree of quality confidence. More specifically, these constants convey the notions of real and idealized design rules, respectively. On the one hand, we define that real design rules should be followed by at least \( \text{Min\%} \) of the systems in the considered Corpus \((0 < \text{Min} \leq 100)\). On the other hand, to express the notion of idealized design rules, we first consider that the values of \( M \) in the Corpus should follow a heavy-tailed distribution, which is common in the case of source code metrics \([8, 9]\). We also assume that the tail of the distribution starts at the \( \text{Tail}\)-th percentile of the values of \( M \) in each system \( S \) in the Corpus \((0 < \text{Tail} \leq 100)\).
In other words, since the distributions are heavy-tailed, we expect to have classes with very high-values for any metric \( M \) (e.g., classes with more than 100 attributes). Although such classes are “natural”, they do not represent an “ideal” class.
**Method:** Figure 1 defines the functions used to calculate the parameters \( p \) and \( k \) that define the relative threshold for a given metric \( M \). First, the function \( \text{ComplianceRate}[p,k] \) returns the percentage of systems in the Corpus that follow the relative threshold defined by the pair \([p,k]\). The method then aims to find the values of \( p \) and \( k \) that produce a \( \text{ComplianceRate} \) with a minimal penalty. More specifically, we penalize a compliance rate in two situations:
- A \( \text{ComplianceRate}[p,k] \) less than \( \text{Min\%} \) receives a penalty proportional to its distance to this value, as defined by function \( \text{penalty}_1[p,k] \). As mentioned, the proposed thresholds should reflect design practices that are widely common in the Corpus. Therefore, this penalty formalizes this guideline, by fostering the selection of thresholds followed by at least \( \text{Min\%} \) of the systems in the considered Corpus.
- As mentioned, we assume that in each system the classes with high values of \( M \) correspond to \( \text{Tail\%} \) of the classes. Moreover, we define that \( \text{Tail}[S]\) is an array with the \( \text{Tail}\)-th percentile of the values of \( M \) in each system \( S \) in the Corpus; and we call \( \text{MedianTail} \) the median of the values in \( \text{Tail}[S] \). We assume that \( \text{MedianTail} \) is an idealized upper value for \( M \), i.e., a value representing classes that, although present in most systems, have very high values of \( M \). Therefore, a given \( \text{ComplianceRate}[p,k] \) receives a second penalty proportional to the distance between \( k \) and \( \text{MedianTail} \), as defined by function \( \text{penalty}_2[k] \).
As defined in Figure 1, the final penalty of a given threshold is the sum of \( \text{penalty}_1[p,k] \) and \( \text{penalty}_2[k] \), as defined by function \( \text{ComplianceRatePenalty}[p,k] \). Finally, the relative threshold is the one with the lowest \( \text{ComplianceRatePenalty}[p,k] \). In case of ties, we select the result with the highest \( p \) and then the one with the lowest \( k \).
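The steps above can be sketched as a grid search over candidate pairs \([p, k]\). This is a minimal, stdlib-only reconstruction of our reading of Figure 1: the candidate grids, the nearest-rank percentile, and the exact penalty formulas (distances normalized by \( \text{Min} \) and \( \text{MedianTail} \)) are our own assumptions, not the paper's implementation.

```python
import statistics

def compliance_rate(corpus, p, k):
    """Percentage of systems in which at least p% of classes have M <= k."""
    ok = sum(1 for system in corpus
             if 100.0 * sum(1 for m in system if m <= k) / len(system) >= p)
    return 100.0 * ok / len(corpus)

def percentile(values, q):
    """q-th percentile (nearest-rank); a simple stand-in for the paper's definition."""
    s = sorted(values)
    idx = max(0, int(round(q / 100.0 * len(s))) - 1)
    return s[idx]

def derive_relative_threshold(corpus, Min=90, Tail=90):
    """Return the pair (p, k) with the lowest ComplianceRatePenalty.

    Ties are broken by the highest p, then the lowest k.
    """
    median_tail = statistics.median(percentile(system, Tail) for system in corpus)
    best = None
    for p in range(50, 100, 5):      # candidate p values (assumed grid)
        for k in range(1, 101):      # candidate upper limits (assumed grid)
            rate = compliance_rate(corpus, p, k)
            penalty1 = max(0.0, (Min - rate) / Min)           # rate below Min%
            penalty2 = max(0.0, (k - median_tail) / median_tail)  # k above MedianTail
            key = (penalty1 + penalty2, -p, k)
            if best is None or key < best[0]:
                best = (key, (p, k))
    return best[1]
```

The tiebreaker is encoded in the sort key: among pairs with the same penalty, the largest \( p \) (via \(-p\)) and then the smallest \( k \) win.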
III. ILLUSTRATIVE EXAMPLE
This section illustrates our method by reporting the derivation of thresholds for the Number of Attributes (NOA) metric, considering the systems in the Qualitas Corpus [11]. In order to derive the relative threshold for NOA, we will consider the following parameters:
- \( \text{Min} = 90\% \), i.e., we penalize thresholds that are not followed by at least 90\% of the systems in the corpus.
- \( \text{Tail} = 90\% \), i.e., we penalize thresholds whose upper limits are greater than the median of the 90th percentile, regarding the NOA values in each system.
Considering these parameters, Figure 2 plots the values of the \( \text{ComplianceRate} \) function for different values of \( p \) and \( k \). As expected, \( \text{ComplianceRate} \) is monotonically increasing in \( k \). Moreover, as we increase \( p \), the function grows more slowly.

Figure 2 shows the importance of our second penalty. For example, we can check that \( \text{ComplianceRate}[85, 17] = 100\% \), i.e., in 100\% of the systems at least 85\% of the classes have \( \text{NOA} \leq 17 \). However, in this case \( \text{MedianTail} = 9 \), i.e., the median of the 90th percentile of the NOA values in the considered systems is nine attributes. Therefore, the relative threshold defined by the pair \([85, 17]\) relies on a high value for \( k \) \((k = 17)\) to achieve a compliance rate of 100\%. To penalize such a threshold, the value of \( \text{penalty}_2 \) is \((17 - 9) / 9 \approx 0.89 \). Since \( \text{penalty}_1 = 0 \) (due to the 100\% compliance), we have \( \text{ComplianceRatePenalty}[85, 17] = 0.89 \).
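The penalty arithmetic of this example can be replayed directly (the formula for \( \text{penalty}_2 \) below reflects our reading of Figure 1):

```python
k, median_tail = 17, 9      # upper limit and MedianTail for NOA
penalty2 = (k - median_tail) / median_tail
penalty1 = 0.0              # compliance rate is 100%, so no first penalty
print(round(penalty1 + penalty2, 2))  # → 0.89
```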
As can be observed in Figure 3, \( \text{ComplianceRatePenalty} \) returns zero for the following pairs \([p, k]\):
\([75, 7]\), \([75, 8]\), \([75, 9]\), \(\mathbf{[80, 8]}\), \([80, 9]\)
Based on our tiebreaker criteria, we select the result with the highest \( p \) and the lowest \( k \) (in bold above), which leads to the following relative threshold:
\[ 80\% \text{ of the classes should have } \text{NOA} \leq 8 \]
This threshold represents a balance between the two forces the method aims to reconcile. First, it reflects a real design rule, followed by most systems in the considered corpus (in fact, it is followed by 98 out of 106 systems). Second, it is not based on a lenient upper bound. In other words, limiting NOA to eight attributes is compatible with an idealized design rule. For example, there are thresholds proposed by experts that recommend an upper limit of 10 attributes [7].
To illustrate the classes that do not follow the proposed relative threshold, Table I presents the ten classes with the highest number of attributes in our corpus (considering only the 98 systems that follow the proposed threshold, and only the largest class of each system). As we can observe, classes with high NOA values are usually Data Classes [12], used to store global constants, such as error messages in the AspectJ compiler or bytecode opcodes in the case of the Jasml disassembler.

| System | Class | NOA |
| --- | --- | --- |
| GeoTools | gml3.GML | 907 |
| JasperReports | engine.xml.JRXmlConstants | 600 |
| Xalan | templates.Constants | 334 |
| Derby | impl.drda.CodePoint | 324 |
| AspectJ | core.util.Messages | 317 |
| Jasml | classes.Constants | 301 |
| POI | ddf.EscherProperties | 275 |
| DrJava | ui.MainFrame | 266 |
| RSSOwl | internaldialogs.Messages | 225 |
| MegaMek | ui.swing.RandomMapDialog | 216 |
In the corpus, there are eight systems (7.5\%) that do not follow the relative threshold. For example, in the JMoney system, 39.3\% of the classes have more than 8 attributes. In this system, except for a single class, all other classes with NOA > 8 are related to GUI concerns. For example, the AccountEntriesPanel class has 37 attributes, including 25 attributes with types provided by the Swing framework. Another non-compliant system is JTopen, a middleware for accessing applications running on IBM AS/400 hardware platforms. In this case, we counted 414 classes (25.2\%) with NOA > 8, which basically implement the communication protocol with the AS/400 operating system. Therefore, the non-compliant behavior is probably due to the complexity of JTopen’s domain.
IV. EXTENSIVE STUDY
In this section, we report an extensive study, through which we apply our method to extract relative thresholds for seven source code metrics. This study also includes a subcorpus analysis (Section IV-D), a historical analysis (Section IV-E), and an inequality analysis (Section IV-F).
A. Metrics
In this study, we used seven metrics related to distinct factors affecting the internal quality of object-oriented systems: Number of methods (NOM), Number of Lines of Code (LOC), FAN-OUT, Response For a Class (RFC), Weighted Method Count (WMC), Lack of Cohesion in Methods (LCOM), and the ratio between Number of Public Attributes and Number of Attributes (PUBA/NOA).
B. Dataset and Study Setup
We used the Qualitas Corpus (version 20101126r), which is a dataset with 106 open-source Java-based systems, created specifically for empirical research in software engineering [11]. In addition, we used the Moose platform to compute the values of the metrics for each class of each system [13]. In particular, we used VerveineJ—a Moose application—to parse the source code of each system and to generate MSE files, the format supported by Moose to persist source code models. We also implemented a tool that receives as input CSV files with the metric data generated by Moose and computes the relative thresholds, using the method described in Section II.
Although the literature reports that object-oriented metrics usually follow heavy-tailed distributions [8, 10], we decided to check for ourselves whether the metric values we extracted present this behavior. For this purpose, we used the EasyFit tool\(^1\) to identify the distribution that best describes our values. We configured EasyFit to rely on the Kolmogorov-Smirnov test to compare our metrics data against reference probability distributions. Following a classification suggested by Foss et al. [14], we considered the metric values extracted for the classes of a given system as heavy-tailed when the “best-fit” distribution returned by EasyFit was Power Law, Weibull, Lognormal, Cauchy, Pareto, or Exponential. Table II reports the percentage of systems whose metric values were classified as heavy-tailed. The extracted values followed heavy-tailed distributions in at least 94.1% of the systems in our corpus.
| Metric | % Heavy-Tailed |
| --- | --- |
| NOM | 100.0 |
| LOC | 96.2 |
| FAN-OUT | 99.1 |
| RFC | 99.1 |
| WMC | 100.0 |
| PUBA/NOA | 98.1 |
| LCOM | 94.1 |
Figure 4 shows the quantile functions for the considered metric values. In this figure, the x-axis represents the quantiles and the y-axis represents the upper metric values for the classes in the quantile. The figure visually shows that the considered metric values follow heavy-tailed distributions, with most systems having classes with very high metric values in the last quantiles. We can also observe systems with an outlier behavior, due to the presence of high-metrics values even in intermediary quantiles (e.g., 50th or 60th quantiles).
\(^1\)http://www.mathwave.com/products/easyfit.html
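EasyFit is a standalone GUI tool; as a rough, stdlib-only stand-in for the classification step, one can compute the one-sample Kolmogorov-Smirnov statistic of a system's metric values against a fitted reference distribution (here an exponential, one of the heavy-tailed families listed above). This sketch is ours and simplifies the actual procedure; the sample values are hypothetical.

```python
import math

def ks_statistic_exponential(values):
    """One-sample KS statistic against an exponential fitted by its mean."""
    s = sorted(values)
    n = len(s)
    mean = sum(s) / n
    d = 0.0
    for i, x in enumerate(s):
        cdf = 1.0 - math.exp(-x / mean)  # exponential CDF at x
        # Compare the theoretical CDF to the empirical CDF on both sides of x
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

# Hypothetical per-class NOM values for one system
nom = [1, 1, 2, 2, 3, 3, 4, 5, 7, 9, 12, 20, 35, 60]
print(round(ks_statistic_exponential(nom), 3))
```

A small statistic indicates a good fit; in practice one would compare it against the critical value for the sample size, or fit several candidate families and keep the best one, as EasyFit does.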
C. Extracted Relative Thresholds
Table III shows the relative thresholds derived by our method, considering the same parameters as the ones described in Section III. For each metric, the table shows the values of $p$ and $k$ that characterize the relative thresholds. The table also shows the number of outliers generated by the thresholds, i.e., the number of systems that do not conform to the thresholds. Table IV shows the name of such systems.
| Metric | p | k | # Outliers |
| --- | --- | --- | --- |
| NOM | 80 | 16 | 5 |
| LOC | 75 | 222 | 11 |
| FAN-OUT | 80 | 15 | 9 |
| RFC | 80 | 49 | 11 |
| WMC | 80 | 32 | 10 |
| PUBA/NOA | 75 | 0.1 | 8 |
| LCOM | 80 | 36 | 12 |
We claim again that the proposed relative thresholds represent a commitment between real and idealized design rules. They express real design practices because they are widely followed by the systems in our corpus. In fact, the number of systems with an outlier behavior ranged from five systems (NOM) to twelve systems (LCOM). The proposed thresholds seem also to represent idealized design rules, as can be observed by the values suggested to the upper limit $k$. For example, well-known Java code standards recommend that classes should have no more than 20 methods [7] and our method suggested an upper limit of 16 methods.
However, the balance between real and idealized design rules was achieved by accepting that the thresholds are valid for a representative number of classes, but not for all classes in a system. In fact, the suggested upper limits apply to a percentage $p$ of classes ranging from 75% (LOC and PUBA/NOA) to 80% (NOM, FAN-OUT, RFC, WMC, and LCOM).
For each metric, we manually analyzed the classes located in the “tail” of the considered distributions. Regarding NOM, we concluded that there are at least three categories of classes with many methods. First, many systems have classes that are automatically generated by parser generators or similar tools. Second, we found that GUI classes—particularly classes that redefine methods inherited from Swing base classes—also tend to have high NOM values. Finally, there are systems that follow an architecture based on a kernel, whose classes typically have many methods and tend to be classified as God Classes [12]. As expected, we also found that classes with many methods tend to have high values for FAN-OUT, LOC, WMC, RFC, and LCOM. As for PUBA/NOA, the derived threshold defines that 75% of the classes should have at most 10% public attributes. By manually inspecting classes that do not follow this threshold, we concluded that the public fields in such classes generally represent constants, i.e., they are also static and final.
D. Relative Thresholds for a Subcorpus
To evaluate the impact of the corpus on our results, we recalculated the relative thresholds for a subset of the Qualitas Corpus. Basically, this subcorpus contains the 26 systems (24.5%) classified as Tools in the original corpus (the most common domain category in the Qualitas Corpus). Table V presents the relative thresholds and the outlier systems, considering this subcorpus as our benchmark data.
In the subcorpus, with the exception of LCOM, the thresholds rely on higher values for $k$. Likewise, for FAN-OUT and PUBA/NOA, the $p$ parameter also relies on slightly higher values. For example, the original threshold, considering all systems in the corpus, is as follows:
$$80\% \text{ of the classes should have } \text{FAN-OUT} \leq 15$$
The same threshold for Tools states the following:
$$85\% \text{ of the classes should have } \text{FAN-OUT} \leq 20$$
The outlier systems in the subcorpus also appear among the outliers detected for the whole corpus (Table IV). On the other hand, some systems initially considered outliers in the whole corpus were no longer classified as such when we restricted the analysis to the subcorpus. For example, Weka was an outlier for FAN-OUT in the whole corpus, but not in the subcorpus. This observation reinforces the importance of the corpus in methods that extract thresholds empirically from real systems. It also shows that our method was able to reclassify the systems as expected, i.e., when moving from a general to a more homogeneous corpus, some systems were reclassified, but always changing their status from outliers to non-outliers.
### E. Historical Analysis
To evaluate whether the proposed thresholds are valid in different versions of the systems under analysis, we performed a historical analysis, considering previous versions of five systems. In this analysis, we considered only the NOM, FAN-OUT, WMC, and PUBA/NOA metrics. Table VI describes the systems and their versions considered in this analysis. Basically, we selected four systems (Lucene, Hibernate, Spring, and PMD) included both in the Qualitas Corpus and in the COMETS Dataset, which is a dataset for empirical studies on software evolution [15]. Essentially, COMETS provides time series for metrics values in intervals of bi-weeks. We extended this dataset to include time series on a new system (Weka), in order to support the analysis also on an outlier system, regarding the NOM, FAN-OUT, and WMC metrics. In Table VI, the period considered in the extraction ends exactly in the bi-week just before the release of the version available in the Qualitas Corpus, i.e., the version we considered to extract the relative thresholds.
| System | Period | Versions |
| --- | --- | --- |
| Lucene | 01/01/2005–10/04/2008 | 99 |
| Hibernate | 06/13/2007–10/10/2010 | 82 |
| PMD | 06/22/2002–14/08/2009 | 175 |
| Weka | 11/16/2008–07/09/2010 | 45 |
Figure 6 plots, for each version of each system considered in this analysis, the percentage of classes respecting the proposed upper limit (parameter $k$) in the relative thresholds. We can observe that the proposed thresholds seem to capture an enduring design practice in the considered systems. More specifically, the systems not initially considered outliers (PMD, Spring, Lucene, and Hibernate) presented the same behavior since the first considered version, for all four metrics. A similar observation holds for Weka. Along the extracted versions, this system did not change its status, both for the metrics for which it was classified as an outlier (NOM, FAN-OUT, and WMC) and for the metric for which it is not an outlier (PUBA/NOA).
### F. Inequality Analysis
We evaluated the dispersion of the metric values in the systems respecting the proposed thresholds, using the Gini coefficient. Gini is a coefficient widely used by economists to express the inequality of income in a population [16]. The coefficient ranges from 0 (perfect equality, when everyone has exactly the same income) to 1 (perfect inequality, when a single person concentrates all the income). Gini has been applied in the context of software evolution and software metrics [16, 17], although not exactly to evaluate the reduction in inequality achieved by following metric thresholds.
In this analysis, we consider the distributions of NOM values in the original corpus. First, we calculated the Gini coefficient considering the whole population of classes in each system. Next, we recalculated the coefficient for the classes respecting the upper threshold of 16 methods. In both cases, we excluded the systems with an outlier behavior, since our goal is to reveal the degree of inequality in systems respecting our approach. The boxplots in Figure 7 summarize the Gini results for our systems. As we can observe, the median Gini coefficient considering the whole population of classes in each system is 0.52. By considering only classes with 16 or fewer methods, the median coefficient is reduced to 0.46. In fact, this reduction in dispersion is expected, since we removed the high values in the long tail.
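For reference, the Gini coefficient over a system's per-class metric values can be computed in a few lines of stdlib Python; the helper name and the sample values are ours.

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality)."""
    s = sorted(values)
    n = len(s)
    total = sum(s)
    if total == 0:
        return 0.0
    # Standard formula on the ordered sample:
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with i = 1..n
    weighted = sum((i + 1) * x for i, x in enumerate(s))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

nom = [1, 2, 2, 3, 4, 5, 8, 15, 40, 120]   # hypothetical per-class NOM values
print(round(gini(nom), 2))                 # dispersion over all classes
print(round(gini([m for m in nom if m <= 16]), 2))  # after filtering the tail
```

As in the analysis above, filtering out the long tail (here, classes with more than 16 methods) reduces the coefficient.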
We also analyzed the outliers in the sample filtered by the proposed threshold. As observed in the right boxplot in Figure 7,
we have outliers due to very equal distributions (Gini < 0.35) and also outliers due to very unequal distributions (Gini > 0.55). For example, JParser is an example of the first case (Gini = 0.26) and CheckStyle is an example of the second case (Gini = 0.60). Figure 8 shows the quantile functions for these two systems. We can see that most classes in JParser respecting the proposed threshold have between 5 and 10 methods, while in CheckStyle we have a representative number of classes with fewer than five methods, with between 5 and 10 methods, and also with more than 10 methods.
Although JParser and CheckStyle have very different Gini coefficients, we cannot claim they are outliers in terms of software quality. In other words, a system with classes ranging from 5 to 10 methods (JParser) does not seem very different from a system with classes ranging from 1 to 16 methods (CheckStyle), at least in terms of internal software quality.
Therefore, as revealed by the Gini coefficients, the inequality analysis shows that there are different distributions of methods per class among the systems that follow the proposed thresholds. However, such differences do not seem to have a major impact in terms of software quality. More specifically, at least in our corpus, we have not found degenerate distributions, in terms of either equality or inequality, e.g., a system where all classes have exactly one method, or a system where half of the classes have $k-1$ methods and the other half have very few methods. Although such distributions might respect our thresholds, they would certainly reveal serious design problems. On the other hand, it is hard to believe that such distributions occur in practice.
G. Threats to Validity
In this extended study, we have not manually analyzed the systems in the Qualitas Corpus to remove, for example, test classes or classes generated automatically by tools like parser generators, as usually recommended [18]. However, our central goal in this study was not to establish an industrial software quality benchmark, but to illustrate the use of our method on a real software corpus, including a discussion of its main properties, such as its sensitivity to the systems in the corpus and the historical validity of the extracted thresholds. Considering these goals, we regard the removal of test classes and automatically generated classes as less critical.
V. Discussion
In this section, we discuss our method considering four aspects: (a) adherence to requirements proposed to evaluate metrics aggregation techniques; (b) robustness to staircase effects; (c) tolerance to bad smells; and (d) statistical properties.
A. Requirements
Mordal et al. defined a set of requirements to characterize software metrics aggregation techniques [19]. We reused these categories to discuss our method mainly because metric aggregation and metric thresholds ultimately share the same goal, i.e., to support quality assessment at the level of systems.
In the following discussion, we consider the two most important categories in this characterization (must and should requirements).
Must Requirements:
- **Aggregation**: Relative thresholds can be used to aggregate low level metric values (typically in the level of classes) and therefore to evaluate the quality of an entire project.
- **Composition**: In our method, metrics should be first composed and then aggregated in the form of a relative threshold. For example, PUBA/NOA—used in the study in Section IV—is an example of a composed metric.
Should Requirements:
- **Highlight problems**: By their very nature, relative thresholds can indicate design problems under accumulation in the classes of object-oriented systems.
- **Do not hide progress**: The motivation behind this requirement is to reveal typical problems when using aggregation by averaging. More specifically, averages may fail due to a tendency to hide outliers. On the other hand, we argue that our method automatically highlights the presence of outliers above an expected value.
- **Decomposability**: Given a partition of the system under evaluation, it is straightforward to select the partitions that concentrate more classes not respecting the proposed thresholds. Possible partition criteria include package hierarchy, programming language, maintainers, etc.
- **Composition before Aggregation**: As explained before, metrics should be composed first to preserve the intended semantics of the composition.
- **Aggregation Range**: This requirement establishes that the aggregation should work in a continuous scale, preferably left and right-bounded. In fact, our relative thresholds can be viewed as predicates that are followed or not by a given system. Therefore, we do not strictly follow this requirement. We discuss the consequence of this fact in Section V-B.
- **Symmetry**: Our final results do not depend on any specific order, i.e., the classes can be evaluated in any order.
B. Staircase Effects
Staircase effects are a common drawback of aggregation techniques based on thresholds [19]. In our context, these effects denote situations where a small refactoring in a class may change the system's threshold status, while a more substantial one does not. To illustrate this scenario, suppose a system has \( n \) classes not following a given relative threshold, and that by refactoring a single class the system starts to follow the threshold. Although the scenarios before and after the refactoring differ little in terms of the global quality of the system, after the refactoring the system's status changes according to the proposed threshold. Furthermore, when deciding which class to refactor, a maintainer may simply select the class closest to the upper parameter of the relative threshold (i.e., the "easiest" class to refactor).
Although subject to staircase effects, we argue that any evaluation based on metrics (including those based on continuous scales) is to some extent subject to such gaming. In fact, gaming metric values is a common pitfall when using metrics, one that can only be avoided by making developers aware of the goals motivating their adoption [20].
C. Tolerance to Bad Smells
Because the thresholds tolerate a percentage of classes with high metric values, it is possible that they in fact represent bad smells, like God Class, Data Class, etc. [12]. However, when limited to a small number of classes—as required by our relative thresholds—our claim is that bad smells do not constitute a threat to the quality of the entire project nor an indication of an excessive technical debt. Stated otherwise, our goal is to raise quality alerts when bad smells change their status towards a disseminated and recurring design practice.
D. Statistical Properties
In the method to extract relative thresholds, the median of a high percentile (as defined by the \( Tail \) parameter) is used to penalize upper limits that do not reflect the accepted values for a given metric. We acknowledge that the use of the median in this case is not strictly justified, because we never checked whether the \( Tail \)-th percentiles follow a normal distribution. However, our intention was not to compute an expected value for the statistical distribution, but simply to penalize compliance rates based on lenient upper limits, i.e., limits that are not observed in at least half of the systems in our corpus.
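The "lenient upper limit" test described above can be sketched as follows. This is a loose sketch only: it assumes a nearest-rank percentile definition and a toy corpus, and it does not reproduce the exact compliance-rate and penalty formulas of our method.

```python
import math
import statistics

def percentile(values, q):
    """q-th percentile via the nearest-rank (ceiling) definition.
    Other percentile definitions exist; this is just one simple choice."""
    xs = sorted(values)
    rank = max(1, math.ceil(q / 100.0 * len(xs)))
    return xs[rank - 1]

def median_tail(systems, tail=90):
    """Median, over all systems, of each system's tail-th percentile."""
    return statistics.median(percentile(s, tail) for s in systems)

def is_lenient(k, systems, tail=90):
    """A candidate upper limit k is lenient if it exceeds the median
    tail-th percentile, i.e., it is not observed in at least half of
    the corpus -- the condition that triggers a penalty above."""
    return k > median_tail(systems, tail)

# Hypothetical corpus: metric values per class for three systems
corpus = [[2, 4, 6, 8, 10], [1, 3, 5, 7, 20], [2, 2, 4, 9, 12]]
print(median_tail(corpus), is_lenient(15, corpus), is_lenient(10, corpus))
```

With this toy corpus the 90th percentiles are 10, 20, and 12, so their median is 12; a candidate limit of 15 would be penalized as lenient, while 10 would not.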
VI. RELATED WORK
In this section, we discuss work related to our method following a division in two groups: (a) thresholds definitions; and (b) statistical analysis.
A. Thresholds Definitions
Alves et al. proposed an empirical method to derive threshold values for source code metrics from a benchmark of systems [4]. Their ultimate goal was to use the extracted thresholds to build a maintainability assessment model [21, 22]. In the proposed method, metric values for a given program entity are first weighted according to the size of the entities in terms of lines of code (LOC), in order to generate a new distribution where variations in the metric values are clearer. After this step, the method relies on quality profiles to rank entities according to four categories: low risk (0 to 70th percentile), moderate risk (70th to 80th percentile), high risk (80th to 90th percentile), and very high risk (above the 90th percentile). In more recent work, Alves et al. improved their method to include the calibration of mappings from code-level measurements to system-level ratings, using an N-point rating system [23]. In contrast, our method does not use weighting by LOC, and we have only two profile categories (respecting or not respecting the relative thresholds). Moreover, our goal is the extraction of relative thresholds that by construction tolerate high-risk classes, assuming they are natural in heavy-tailed distributions; however, such classes should not exceed a given percentage of the whole population of classes.
Ferreira et al. defined thresholds for six object-oriented metrics using a benchmark of 40 Java systems [5]. Using the EasyFit tool, they also concluded that the metric values, except for DIT, follow heavy-tailed distributions. After this conclusion, the authors relied on their own experience to establish three threshold ranks: i) good, which refers to most common values; ii) regular, which refers to values with low frequency, but that are not irrelevant; and iii) bad, which refers to values with rare occurrences. However, they do not establish the percentage of classes tolerated in these categories.
Chidamber et al. analyzed a set of metrics in order to assess their usefulness for practicing managers [24]. To do so, they relied on empirical data relating the metrics to productivity, rework effort, and design effort in three commercial object-oriented systems. In common with our work, the authors propose that threshold values should not be defined a priori, but should instead come from benchmark data. Following Pareto's 80/20 heuristic, they established that a "high" value for a metric can be defined no lower than the 80th percentile. In contrast, our method does not use a fixed percentile as a threshold, and we generate a single threshold for the entire benchmark.
Herbold et al. used a machine learning algorithm to calculate thresholds [6]. In their approach, classes and methods are classified as respecting or not respecting the computed thresholds. Shatnawi et al. [25] and Catal et al. [26] used Receiver Operating Characteristic (ROC) curves to derive thresholds. Shatnawi et al. derived thresholds to predict the existence of bugs in different error categories using three releases of the Eclipse project. Catal et al. proposed a noise detection algorithm based on software metric threshold values. Yoon et al. [27], in turn, used the K-means clustering algorithm to derive metric thresholds; however, this algorithm requires an input parameter that affects both the performance and the accuracy of the results. In all these cases, the proposed thresholds are absolute. In contrast, our method derives relative thresholds.
B. Statistical Analysis
Wheeldon et al. analyzed 11 coupling metrics in three Java systems and concluded that their values follow a power-law distribution [28]. Baxter et al. [8] analyzed 17 metrics in 56 Java systems to characterize their internal structure; the authors reported that most metrics follow power laws. Louridas et al. analyzed coupling metrics using 11 systems developed in multiple languages (C, Perl, Ruby, and Java) [9] and concluded that most metrics conform to heavy-tailed distributions, independently of the programming language. Studies conducted by Potanin et al. [29] and Taube-Schock et al. [10] confirm such results for coupling metrics. Concas et al. analyzed 10 metrics using three large Java and Smalltalk systems [30]; their findings indicate that large OO systems also follow heavy-tailed distributions.
In summary, the aforementioned studies suggest that non-Gaussian distributions are common in the case of source code metric values. Therefore, our method does not assume a distribution that is rarely observed in real-world systems, specifically in the case of source code metrics. On the other hand, such studies do not propose a clear roadmap to apply their findings in practical software quality assessments.
VII. CONCLUSION
Source code metric values usually follow heavy-tailed distributions [8, 9]. Therefore, it is natural to observe in every system a percentage of classes not respecting a given threshold. In this paper, we proposed the notion of relative thresholds to deal with such metric distributions. Our approach explicitly indicates that thresholds should be valid for most but not for all classes in object-oriented systems. We proposed a method that extracts relative thresholds from benchmark data and we evaluated this method in the Qualitas Corpus. We argued that the extracted thresholds represent an interesting balance between real and idealized design rules.
We envision a scenario where the proposed relative thresholds are used to measure the technical debt in a system [31]. Resembling the notion of financial debt, this metaphor describes the "debt" incurred when developers make quick-and-dirty implementations, targeting a short-term solution. In the long term, however, an accumulated technical debt may cause relevant maintenance costs [32]. On the other hand, it is widely accepted that technical debt cannot be completely avoided, although it should always be made explicit [33]. In this context, we consider that the notion of relative thresholds can be used to control and monitor the technical debt in a system and to raise an alarm when the debt reaches a dangerous level, i.e., when the proposed relative thresholds are violated.
Also as future work, we plan to apply our approach to new software metrics, including metrics not based on source code, such as process metrics. We intend to evaluate our approach with other systems, possibly using the portfolio of a real software development organization. We also intend to extract relative thresholds for different contexts, such as systems of different sizes and systems implemented in different programming languages. Finally, we intend to investigate the impact of not following our thresholds on software quality attributes, like maintainability.
ACKNOWLEDGMENTS
Our research is supported by CAPES, FAPEMIG, and CNPq.
REFERENCES
Evolutionary Fusion: A Customer-Oriented Incremental Life Cycle for Fusion
Creating and maintaining a consistent set of specifications that result in software solutions that match customers' needs is always a challenge. A method is described that breaks the software life cycle into smaller chunks so that customer input is allowed throughout the process.
by Todd Cotton
Fusion provides a thorough and consistent set of models for translating the specification of customer needs into a well-structured software solution. For reasonably small projects, the sequential steps of Fusion map well into the sequential software life cycle commonly known as the waterfall life cycle. For larger projects, those representative of most commercial and IT software projects today, an incremental life cycle such as Evolutionary Development provides a much better structure for managing the risks inherent in complex software development. This paper introduces Evolutionary Fusion, the combination of Fusion, with its advantages provided by object orientation, and the key Evolutionary Development concepts of early, frequent iteration, strong customer orientation, and dynamic plans and processes.
Although based on the best of other object-oriented methods, Fusion is a relatively new method. The Fusion text was published in October 1994, and as a member of the Hewlett-Packard software development community, the author was exposed to preliminary work by Derek Coleman and his team earlier in 1993. The response from the first few teams to apply Fusion to their work was extremely encouraging. As members of the Software Initiative, an internal consulting group focused on further extending Hewlett-Packard's software development competencies, the author and his colleagues have helped facilitate the rapid adoption of Fusion within Hewlett-Packard. Fusion is now used in nearly every part of Hewlett-Packard, contributing to products and services as diverse as network protocol drivers, real-time instrument firmware, printer drivers, internal information systems, and even medical imaging and management products. This paper is based on these collected experiences.
To simplify the presentation of concepts, the paper first discusses experiences gained working with small, collocated development teams. Later sections deal with the extensions that have been made to scale Evolutionary Fusion up for larger teams split across geographic boundaries. See the Sidebar: "What is Fusion?" for an explanation of this software development method.
Need for an Alternative to the Waterfall Life Cycle
The traditional waterfall life cycle for software development has served software developers well. By breaking software projects up into several large sequential phases—typically an investigation or definition phase, a design phase, an implementation phase, and a test phase—project teams could move forward with confidence. System requirements were captured through significant customer interaction during the definition phase. Once these requirements were complete, the other phases could progress with focus and efficiency since few if any changes to the specification would be allowed. With limited competition and with products that would remain viable for years, it was safe to assume that the system requirements captured many months or even years earlier would still be accurate. Unfortunately, this is no longer the environment in which software is developed.
Today, our ability as software engineers and project managers to accommodate all risks and accurately schedule projects that may include tens or even hundreds of engineers over several years of development is seriously challenged. Customers' needs, competitive products, and even the development tools we use can change as often as every few months. We have at least two choices. We can try to further refine our estimation and scheduling skills, fixing more parameters of our projects at very early stages of knowledge and experience, or we can look for an alternative development life cycle that better supports the dynamic and complex nature of our business today.
One alternative to the waterfall life cycle is Barry Boehm’s spiral life cycle. Actually more of a meta life cycle, the spiral life cycle can be instantiated or “unwrapped” in a number of ways. One instantiation is the iterative life cycle, an approach advocated by industry-leading OO (object-oriented) methodologists such as Jim Rumbaugh and Grady Booch. An iterative life cycle replaces the monolithic implementation phase of the waterfall life cycle with much smaller implementation cycles (Fig. 1) that start by building a very small piece of the overall functionality of the system and then add to this base over time until a complete system is delivered. Incremental development “determines user needs and defines the system requirements, then performs the rest of the development in a sequence of builds.”
**Fig. 1. Different models of the software development life cycle.**
Another instantiation of the spiral life cycle is Evolutionary Development, proposed by Tom Gilb. Evolutionary Development adds to the iterative life cycle a much stronger customer orientation that is implemented through an explicit customer feedback loop. Evolutionary Development “differs from the incremental strategy in acknowledging that the user need is not fully understood and all requirements cannot be defined up front ... user needs and system requirements are partially defined up front, then are refined in each succeeding build.” The Evolutionary Development life cycle has been used successfully within Hewlett-Packard since 1985 and was the natural choice to combine with Fusion when we needed an alternative to the waterfall life cycle.
**Evolutionary Development**
Evolutionary Development (EVO) is a software development method and life cycle that replaces traditional waterfall development with small, incremental product releases or builds, frequent delivery of the product to users for feedback, and dynamic planning that can be modified in response to this feedback. As originally presented by Tom Gilb, the method had the following key attributes:
1. Multiobjective-driven
2. Early, frequent iteration
3. Complete analysis, design, build, and test in each step
4. User orientation
5. Systems approach, not merely algorithm orientation
6. Open-ended basic systems architecture
7. Result orientation, not software development process orientation.
Using EVO, a product development team divides the project into small chunks. Ideally, each chunk is less than 5% of the overall effort. The chunks are then ordered so that the most useful and easiest features are implemented first and some useful subset of the overall product can be delivered every one to four weeks. Within each EVO cycle, the software is designed, coded, tested, and then delivered to users. The users give feedback on the product and the team responds, often by changing the product, plans, or process. These cycles continue until the product is shipped.
EVO is thus characterized by early and frequent iteration, starting with an initial implementation and followed by frequent cycles that are short in duration and small in content. Drawing on ongoing user feedback, planning, design, coding, and testing are completed for each cycle, and each release or build meets a minimum quality standard. This method offers opportunities to optimize results by modifying the plan, product, or process at each cycle. The basic product concept or value proposition, however, does not change.
At Hewlett-Packard, we have found that it is possible to relax some of Gilb's ideas regarding EVO. In particular, it is not absolutely necessary to deliver the product to real customers with customer-ready documentation, training, support, and so on, to benefit from EVO. For instance, customers participating in the feedback loop change during the development process. Results from the early cycles of development are typically given to other team members or other project teams for feedback. Less sensitive to the lack of complete documentation and training materials, they can still give valuable feedback. Results from the next several cycles are shared with surrogate customers represented by members of the broader Hewlett-Packard community. The goal is still to get the product into the hands of actual customers as early as possible.
There are two other variations to Tom Gilb's guidelines that we have found useful within Hewlett-Packard. First, the guideline that each cycle represent less than 5% of the overall implementation effort has translated into cycle lengths of one to four weeks, with two weeks being the most common. Second, ordering the content of the cycles is used within Hewlett-Packard as a key risk-management opportunity. Instead of implementing the most useful and easiest features first, many development teams choose to implement in an order that gives the earliest insight into key areas of risk for the project, such as performance, ease of use, or managing dependencies with other teams.
**Benefits of EVO**
The teams within Hewlett-Packard that have adopted Evolutionary Development as a project life cycle have done so with explicit benefits in mind. In addition to better meeting customer needs or hitting market windows, there have been a number of unexpected benefits, such as increased productivity and reduced risk, even the risks associated with changing the development process.
**Better Match to Customer Need and Market Requirements.** The explicit customer feedback loop of Evolutionary Development results in the delivery of products that better meet the customers’ need. The waterfall life cycle provides an investigation or definition phase for eliciting customer needs through focus groups and storyboards, but it does not provide a mechanism for continual validation and refinement of customer needs throughout the long implementation phase. Many customers find it difficult to articulate the full range of what they want from a product until they have actually used the product. Their needs and expectations evolve as they gain experience with the product. Evolutionary Development addresses this by incorporating customer feedback early and often during the implementation phase. The small implementation cycles allow the development team to respond to customer feedback by modifying the plans for future implementation cycles. Existing functionality can be changed, while planned functionality can be redefined.
One Hewlett-Packard project used a variation of Evolutionary Development that also included an evolutionary approach to product definition. During the first month, the development team worked from static visual designs to code a prototype. In focus group meetings, the team discussed users’ needs and the potential features of the product and then demonstrated their prototype. The focus groups expressed strong support for the product concept, so the project proceeded to a second phase of focus group testing incorporating the feedback from the first phase. Once the feedback from the second round of focus groups was incorporated, the feature set was established and the product definition completed.
Implementation consisted of four-to-six-week cycles, with software delivered to customers for use at the end of each cycle. The entire development effort spanned ten months from definition to product release. The result was a world-class product that has won many awards and has been easy to support.
**Hitting Market Windows.** To enhance productivity, many large software projects divide their tasks into independent subsets that can be developed in parallel. With few dependencies between subteams, each team can progress at its own pace. The risk in this approach is the significant effort that must be invested to bring all the work of these subteams together for final integration and system test. When issues are uncovered at this late stage of development, few options are available to the development team. It is difficult if not impossible to prune functionality in a low-risk manner when market windows, technology, or competition change. The only option open to the team is to continue on, finding and removing defects as quickly and as efficiently as possible (see Fig. 2).
With an EVO approach, the team has greater flexibility as the market window approaches. Two attributes of EVO contribute to this flexibility. First, the sequencing of functionality during the implementation phase is such that “must have” features are completed as early as possible, while the “high want” features are delayed until the later EVO cycles. Second, since each cycle of the implementation phase is expected to generate a “complete” release, much of the integration testing has already been completed. Any of the last several EVO cycles can become release candidates after a final round of integration and system test. When an earlier-than-planned release is needed, the last one or two EVO cycles can be skipped as long as a viable product already exists. If a limited number of key features are still needed, an additional EVO cycle or two can be defined and implemented as illustrated in Fig. 3.
* See also Article 5.
**Engineer Motivation and Productivity.** Some of the gains in productivity seen by project teams using EVO have been attributed to higher engineer motivation. The long implementation phase of the waterfall life cycle is often characterized by large variations in engineer motivation. It is difficult for engineers to maintain peak productivity when it may be months before they can integrate their work with that of others to see real results. Engineer motivation can take an even greater hit when the tyranny of the release date prohibits all but the most trivial responses to customer feedback received during the final stages of system test.
EVO has led to higher productivity for development teams by maintaining a higher level of motivation throughout the implementation phase. The short implementation cycles keep everyone focused on a small set of features and tasks. The explicit customer feedback loop and the small implementation cycles also allow the development team more opportunity to respond to customer feedback and thereby deliver a product that they know represents their best work.
**Quality Control.** Although software development is in many ways a manufacturing process, software development teams have struggled to apply quality improvement processes such as Total Quality Control (TQC). Unlike the manufacturing organizations that can measure and refine processes with cycle times of hours, minutes, and even seconds, the waterfall life cycle gave cycle times of months or years before the software development process repeated. With EVO, the software implementation cycle is dramatically reduced and repeated multiple times for each project. All parameters of the implementation process are now available for review and improvement. The impact of changes in processes and tools can be measured and refined throughout the implementation phase.
**Reducing Risk when Changing the Development Process.** Many teams experience considerable anxiety as they make the transition to an object-oriented approach to development. The transition to OO usually entails a number of changes in the way a software engineer works. There are new analysis and design models to apply, new notations to master, and new, occasionally eccentric, tools and compilers to learn. There is also valid concern about adopting a new method at the beginning of the development process. Few teams are willing to make a full commitment to a new method when they have little experience with it. There may even be organizational changes anticipated if the organization is looking for large-scale productivity gains through formalized reuse.
Development teams and managers want some way to manage the risks associated with making so many simultaneous changes to their development environment. EVO can help manage the risks. The repeating cycles during the implementation phase provide for continual review and refinement of each parameter of the development environment. Any aspect of the development environment can be dropped, modified, or strengthened to provide the maximum benefit to the team.
Evolutionary Fusion
Fusion and Evolutionary Development are complementary. One of the primary assumptions of EVO is that one can decompose the functionality of a project into small manageable chunks. It is also expected that these chunks will provide some measurable value to the intended user and can thus be given to the user for feedback. Fusion provides the method of decomposition. At the highest level, Fusion decomposes the functionality of a system into use scenarios. Use scenarios are defined from the perspective of a user or agent of the system and are expected to capture a use of the system that provides some value to the agent.
EVO also presupposes that an architecture capable of accommodating all the expected functionality of the system can be defined prior to implementation. This architecture must be flexible enough to accommodate new or redefined functionality resulting from customer feedback. Fusion helps create this flexible architecture. The object model provides an architecture that encapsulates common functionality into classes and provides flexibility and extensibility through generalization and specialization. Fusion also accommodates large-scale change through the well-defined linkages between models. If necessary, changes to functionality can be rolled all the way up to the use scenarios and then cascaded back down through the appropriate analysis and design models, replacing guesswork in assessing the impact of a change with a more systematic approach.
Evolutionary Fusion divides a project into two major phases: the definition phase and the development phase (Fig. 4). During the definition phase, a project's functionality is specified and its viability as a product or system is first estimated. The Fusion analysis models play a key role in this phase. The use scenarios serve to remodel the specification document, checking it for clarity and completeness. They can also be reviewed with customers to validate the development team's understanding of customer needs. The object model captures the initial architecture for the system and provides additional checks of the specification. The data dictionary captures the team's emerging common vocabulary and understanding of the problem domain. The operation model, through its system operation descriptions, gives an indication of the size and complexity of the project. This information is critical for estimating resource needs and developing the initial plan for the development phase.
The second phase is the development phase, in which code is incrementally designed, implemented, and tested to meet the specification. Each development cycle follows the same pattern. First, the analysis models are reviewed for completeness with respect to the functionality to be implemented during that cycle. Next, the Fusion design models are created or updated to support the functionality. And finally, the code is written and regression tests executed against the code. In parallel with the development activities of the team, selected users or customers of the system are working with and providing feedback on the release from the previous cycle. This feedback is used to adjust the plan for the following cycles. To complete the development phase, a final round of integration and system testing is done. The next two sections discuss these two phases in more detail.
Definition Phase
The definition phase is best characterized as a period of significant communication and thought. Communication must occur between all members of the project team to make sure that everyone shares a common understanding of the project's goals. Thought must be put into the specification document to make sure that it is complete and unambiguous and that it meets the requirements. Communication must also occur between the development team and the intended users of the system to ensure that the system, at least as it can be specified on paper during this early stage of the project, will meet their needs. Thought must go into defining an architecture capable of supporting the intended functionality of the full system. The goal is to identify and resolve as many issues as possible during this phase. Specification errors that are not resolved during this phase can be extremely costly to repair later.
Our experience has shown that the Fusion analysis models are ideal for stimulating the thought and supporting the communication that must occur during the definition phase.
**Analysis Models—First Pass**
Like Fusion, Evolutionary Fusion requires some form of system specification as a starting point, and just about any level of detail in the system specification will do. When the specification is at a high level, the analysis models serve to identify large numbers of issues and questions that need to be resolved before development can begin. When the specification is at a more detailed level, the analysis models serve to remodel and recapture high-level structure and functionality that may be lost in the detail. We have yet to define what level of detail in the system specification yields the most efficient definition phase for Evolutionary Fusion. Regardless of the level of specification detail, the analysis models provide the beginning of a common vocabulary and understanding of the problem domain that will serve the team well throughout the project.
The most critical component of the system specification is the value proposition. The value proposition clearly articulates why the intended customer of the system will choose to use it over the other options available. The functionality defined in the specification is the development team’s initial best estimate as to how to deliver that value proposition. There are usually countless other ways to deliver it. The explicit customer feedback loop of Evolutionary Fusion will validate the best estimate over time and will suggest better ways to deliver the value proposition. The value proposition itself should remain constant throughout the entire development process. If the value proposition changes during the development phase, it will be quite difficult for the team to make all the modifications necessary to implement a new one and still end up with a coherent set of product features.
**Use Scenarios**
The first analysis model to be created is the set of use scenarios. To provide some structure for this activity, it is useful to first generate a list of all the agents that exist in the system’s environment. It can often be a challenge to decide what constitutes an agent. For example, the file system provided by the operating system is clearly part of any system’s environment. It can be expected to provide services to and make demands on the system being defined. Representing the file system as an agent does not add any additional clarity to the team’s understanding of the system under definition. However, representing specific files as agents, such as configuration files, legacy databases, or data input files, does add clarity. In one project, it was useful to model, as an agent, a critical data input file generated externally to the system. A general rule of thumb is that an agent must add to the understanding of the system if it is to be included at this early stage.
Once the list of agents is complete, each agent can be examined with respect to the demands it will make on the system. These demands are captured as use scenarios. As with defining agents, determining an appropriate level of granularity for the use scenarios can be a challenge. Another rule of thumb is that use scenarios should provide complete chunks of value from the perspective of the agent. In the project mentioned above, the system was modeled as providing value to the input file by accepting records of data from the file and translating those records into a format that could be used by the rest of the system. This approach will help avoid the issue of trying to keep all use scenarios at the same level of granularity. It is the agent that defines the appropriate level of granularity, not the system as a whole.
Once the use scenarios have been specified, each is diagrammed to decompose it further into discrete system operations and events. It is also useful to annotate in the margins of the use scenario diagram any time constraints that may exist (see Fig. 5). For systems of reasonable size, it is difficult to define a correct set of use scenarios on the first try. Building the use scenarios is itself an iterative process of refinement.
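The decomposition described above can be sketched as a simple data model. This is an illustrative sketch only: the class names, the sample agent, and the operations are invented for the example, not taken from the article.

```python
# Minimal sketch of the use-scenario decomposition: each use scenario
# belongs to an agent and breaks down into discrete system operations,
# optionally annotated with a time constraint (as in Fig. 5).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SystemOperation:
    name: str
    time_constraint: Optional[str] = None  # e.g. "under 10 ms per record"


@dataclass
class UseScenario:
    agent: str  # the agent that derives value from this scenario
    name: str
    operations: List[SystemOperation] = field(default_factory=list)


# Hypothetical scenario modeled after the data-input-file example above.
scenario = UseScenario(
    agent="data input file",
    name="load records",
    operations=[
        SystemOperation("open_file"),
        SystemOperation("parse_record", time_constraint="under 10 ms per record"),
    ],
)
```

Keeping the time constraints on the operations themselves preserves the margin annotations from the use scenario diagrams in a form the team can query later.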
### Object Model
As Ould states in his text on software engineering strategies,
> "The success of the incremental delivery approach rests on the ability of the designer to create—from the start—an architecture that can support the full functionality of the system so that there is not a point during the sequence of deliveries where the addition of the next increment of functionality requires a massive re-engineering of the system at the architectural level (p. 59)."
The Fusion object model, the next analysis model to be created, serves as that architecture.
Once the use scenarios are complete, the development team has a much clearer understanding of the demands that will be placed on the system. The use scenarios are an excellent source of information for building the object model. The use scenario diagrams can be stepped through, making sure that analysis classes exist to support the need of each system operation. It is also quite common that building the object model will generate further refinements and improvements to the use scenarios.
### Operation Model
The last analysis model to be created during the definition phase is the operation model. It documents in a declarative fashion the change in the state of the system as it responds to a system operation. Each system operation is described using only terms from the use scenarios, object model, and data dictionary.
A complete specification of the system exists when the operation model is completed. The use scenarios capture the intended uses of the system from the agents' point of view. The object model captures the high-level architecture of the system. The operation model documents the effect that each system operation has on the system. The creation of each model has stimulated the thought necessary to identify and resolve issues, while the notation for each model establishes a common communication format for the team.
### Managing the Analysis Process
An appropriate question to ask at this point is how much time should be invested in making a first pass at the analysis models. Although there is no formula that we can offer for Evolutionary Fusion, the application of a progress measurement technique used by many development teams during implementation works surprisingly well at this early stage of development. During the integration and system test phase, many teams compare the rate at which defects are being identified to the rate at which defects are being isolated and repaired. In the early part of this phase, the rate of defect identification exceeds the rate of defect repair. At some later point in this phase, the rate of repair exceeds the rate of identification, and estimates can be made on when the desired defect density will be reached and the product can be released.
A similar approach can be used to track progress during the creation of the analysis models in Evolutionary Fusion’s definition phase. Any issue identified during the creation of the analysis models can be considered a potential defect in the specification of the system. As with testing code, the initial attempts to build the analysis models will generate a large number of potential issues, or defects. As the creation of the analysis models progresses, fewer and fewer issues, or defects, will be found. Once the rate of resolving, or repairing, these issues exceeds the rate of finding new issues, a completion date for the first pass at the analysis models can be estimated.
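The tracking heuristic above can be expressed in a few lines of code. This is a minimal sketch with invented function names and sample weekly counts; the crossover and backlog estimates follow directly from the rates described in the text.

```python
# Compare the weekly rate at which analysis issues are found with the
# rate at which they are resolved; once resolution overtakes
# identification, estimate when the open-issue backlog will be cleared.

def resolution_overtakes_identification(found_per_week, resolved_per_week):
    """Return the first week index where the resolution rate exceeds
    the identification rate, or None if it never does."""
    for week, (found, resolved) in enumerate(zip(found_per_week, resolved_per_week)):
        if resolved > found:
            return week
    return None


def weeks_to_drain_backlog(found_per_week, resolved_per_week):
    """Estimate additional weeks to clear the open-issue backlog,
    assuming the most recent weekly rates continue."""
    open_issues = sum(found_per_week) - sum(resolved_per_week)
    net_burn = resolved_per_week[-1] - found_per_week[-1]
    if net_burn <= 0:
        return None  # no crossover yet; no estimate possible
    return -(-open_issues // net_burn)  # round up: a partial week counts


# Example: identification dominates early, then resolution overtakes it.
found = [12, 10, 6, 3, 2]
resolved = [2, 5, 7, 8, 8]
crossover = resolution_overtakes_identification(found, resolved)
remaining = weeks_to_drain_backlog(found, resolved)
```

In the sample data the crossover occurs in the third week, after which the open backlog can be burned down in roughly one more week at the current rates.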
An additional parameter often assigned to defects is a classification that represents the severity of the defect. Few systems are shipped with known defects that can cause unrecoverable data loss, but many are shipped with known defects that have only limited impact on the system’s use. It can be helpful to apply a similar classification scheme to the issues found during analysis.[10] Many issues identified will be of such impact that they must be resolved before moving on to the development phase. Other issues will be of lesser impact and, as such, resolution can be delayed until the development phase. There is also a third class of issues that relates directly to design or implementation. These must be reclassified as design or implementation issues and marked for resolution during that phase.
There is an expectation that a team must complete all the analysis phase models before moving on to implementation. Our experience has shown that this is not the case. It is only necessary to complete a high-level view of the complete system and to resolve the critical and serious “defects” that have been logged against the analysis models. This approach can also help teams avoid “analysis paralysis,” the malady that afflicts many teams when they try to resolve every known issue before moving on to design and implementation. The analysis models will be revisited as the first step of each implementation cycle, so further additions and refinements can be made then.
It is difficult to accurately estimate the length of the analysis phase, especially if it is the team’s first use of object technology. Fortunately, using the approach described here can provide early indication of progress so that resources can be managed accordingly.
**Building the Plan**
The last task of the Evolutionary Fusion definition phase is to plan the next phase, development. This task consists of three major steps: assigning ownership for the key roles that must be played during this phase, defining the standard EVO cycle, and determining the sequence in which functionality will be developed.[11]
**Key Roles.** For the development phase to progress in a smooth and efficient manner, it is helpful to define and assign ownership for three key roles: project manager, technical lead, and user liaison. On large project teams, these roles may be shared by more than one person. On smaller project teams, a person may play more than one role.
Project manager: Many aspects of the project manager’s role become even more critical with Evolutionary Development. The project manager must work with the marketing team and the customers to establish the project’s value proposition, identify key project risks, document all commitments and dependencies, and articulate how Evolutionary Development will contribute to the project’s success. Agreement on the value proposition is critical, as it will help keep the decision-making process focused. The key project risks will be used to sequence the implementation so that these risks can be characterized and addressed as early as possible. The commitments and dependencies will also be a key consideration when sequencing the implementation cycles. It is also important that the project manager solicit and address any concerns that the project team has with the Evolutionary Development approach.
The project manager must also define and manage the decision-making process. Although this is often an implicit task of the project manager, the large amount of information and the increased number of decisions that must be made using Evolutionary Fusion require that this process be made explicit. Based on the kinds of changes anticipated during the project, the project manager must consider how information will be gathered, how decisions will be made, and how decisions will be communicated. With very short development cycles, delayed decisions can slow progress dramatically.
Working with the technical lead, the project manager may also decide to include explicit design cycles in the schedule. For software architectures and designs that are expected to survive many years, supporting multiple releases or even multiple product lines, it is important to invest in the evolution of the architecture. As the development phase progresses, certain isolated decisions that compromise some aspect of the architecture will be made. There will also be new insights into the architecture and its robustness that could not have been anticipated during the definition phase. Design cycles dedicated to the architecture will deliver no new functionality for the user. By including tasks such as architecture refinement, design development, and design inspections, these cycles will deliver to future EVO cycles an architecture that is better equipped to meet the demands that will be placed on it.
Technical lead: The technical lead is responsible for managing the architecture of the project as well as tracking and helping to resolve technical issues and dependencies that arise between engineers and between subsystems. The technical lead also plays a key part in defining the detailed task plans for each implementation cycle. With a broad view of the system, the technical lead can make sure that tasks scheduled for an implementation cycle are feasible and that they all contribute to the stated deliverable for the cycle.
**Fig. 6.** Sample two-week EVO cycle.
<table>
<thead>
<tr>
<th>Monday</th>
<th>Tuesday</th>
<th>Wednesday</th>
<th>Thursday</th>
<th>Friday</th>
</tr>
</thead>
<tbody>
<tr>
<td>Final Test of Last Week’s Build</td>
<td>Release Last Week’s Build to Users</td>
<td>Create Design Models for New Features</td>
<td>Incremental Build</td>
<td>Weekend Build from Scratch</td>
</tr>
<tr>
<td>Review and Enhance Analysis Models for New Features</td>
<td>Begin Implementation of New Features</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
**Fig. 6 (continued).** Sample two-week EVO cycle (second week).
<table>
<thead>
<tr>
<th>Monday</th>
<th>Tuesday</th>
<th>Wednesday</th>
<th>Thursday</th>
<th>Friday</th>
</tr>
</thead>
<tbody>
<tr>
<td>All User Feedback Collected</td>
<td>Functionality Freeze—No New Features Added Beyond this Point</td>
<td>Incremental Build</td>
<td>Test New Functionality</td>
<td>Test New Functionality</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Incremental Build Overnight</td>
<td>Review Feedback, Determine Changes for Next Release</td>
<td>Weekend Build from Scratch</td>
</tr>
</tbody>
</table>
User liaison: The user liaison manages the team’s interaction with the users, including setting up the user feedback process by defining expectations of the users, locating and qualifying users against these expectations, and coordinating any initial training that the users will need on the system. Once the development phase is underway, the user liaison will be responsible for collecting feedback, tracking user participation and satisfaction with the process, and ensuring that users are kept informed of the development team’s response to their feedback.
It is important to keep in mind that the users providing feedback on the system may change over time. In the early development phase, it may be unrealistic to deliver the system to actual users, since there may simply not be enough functionality in the system. For these releases, other members of the project team or other members of the organization can act as surrogates for actual users.
**Defining the Standard EVO Cycle.** The next step in planning the development phase is to define the standard EVO cycle to be used. This task includes establishing the length of the cycle as well as the milestones within the cycle. The general rule of thumb is to keep the cycle length as short as possible. Within Hewlett-Packard, projects have used a cycle length as short as one week and as long as four weeks. The typical cycle time is two weeks (see Fig. 6). The primary factor in determining the cycle length is how often management wants insight into the project’s progress and how often they want the opportunity to adjust the project plan, product, and process. Since it is more likely that a team will lengthen their cycle time than shorten it, it is best to start with as short a cycle as possible.
**Grouping and Prioritizing Functionality.** With key roles assigned and the standard cycle defined, the last step in planning the development phase is to group and prioritize the functionality into implementation chunks. The chunks must be no larger than can be delivered in the standard cycle time. Prioritization ensures that critical or high-risk features are completed early and that low-risk features are delivered last. Some of the most common criteria used for grouping and prioritizing functionality will be discussed later in this section.
The deliverable from the planning phase is an implementation schedule that maps all functionality for the system into implementation cycles and provides enough detail for the first three or four cycles so that actual implementation can begin. To help develop this schedule and to maintain a user perspective, the Fusion use scenarios and system operations provide a useful grouping of system functionality: each use scenario groups together a set of system operations, and a single system operation may appear in multiple use scenarios.
The first step is to divide the system development into four or five major chunks and to group those use scenarios that include top-priority functionality into the first chunk (Fig. 7). The rest of the use scenarios can then be grouped into the following major chunks, with the use scenarios containing the lowest priority functionality in the last chunk. At this stage each chunk should contain approximately the same number of use scenarios.
The next step is to order the use scenarios within the first chunk using the same criteria as before (Fig. 8). When producing this ordering, it is not uncommon to move scenarios between groups to achieve a better balance and sequence. Since system operations may appear in multiple use scenarios, many of the system operations that are contained in the use scenarios of later groupings will be implemented with use scenarios in earlier groupings. Therefore, it is best to have the fewest use scenarios in the first chunk and the most in the last chunk.
The system operations from the use scenarios in the first group can now be grouped and sequenced into the first few implementation cycles (Fig. 9). Keep in mind that the deliverables from each cycle should be defined in such a way that they can be validated by a user of the system. For these early cycles, the limited functionality may be best validated by another member of the development team. The key concept is that you must be able to validate the success of the cycle in some way.
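The observation that shared system operations need to be implemented only once, in their earliest scenario, can be made concrete with a small sketch. The scenario and operation names below are invented for illustration.

```python
# Walk the use scenarios in priority order and record, for each one, only
# the system operations not already covered by an earlier scenario. This
# shows why later scenario groups contribute less *new* implementation work.

def new_operations_per_scenario(ordered_scenarios):
    """ordered_scenarios: list of (name, set_of_operations) in priority
    order. Returns {name: sorted list of operations that scenario adds}."""
    implemented = set()
    plan = {}
    for name, ops in ordered_scenarios:
        new_ops = set(ops) - implemented
        plan[name] = sorted(new_ops)
        implemented |= new_ops
    return plan


# Hypothetical scenarios sharing operations (names invented).
scenarios = [
    ("load input file", {"open_file", "parse_record", "report_error"}),
    ("translate records", {"parse_record", "translate", "report_error"}),
    ("export results", {"translate", "write_output"}),
]
plan = new_operations_per_scenario(scenarios)
```

Here the second scenario adds only one new operation and the third adds only one, even though each lists two or three: the rest were already implemented with the first scenario.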
When estimating the number of system operations that the development team can implement in a cycle, experience has shown that taking the common wisdom of the team and dividing that number in half yields the best results. Because this approach to development may be new to the team, it is extremely important from a motivational perspective that these first few implementation cycles be successful. Also, keep in mind that there is a fair amount of infrastructure developed and put in place during these first few implementation cycles as well. The tools and the process will undergo significant refinement during these first few cycles. For these reasons, keep the functionality content of the first few implementation cycles to a minimum.
A technique used widely within Hewlett-Packard is to adopt a naming scheme for the implementation cycles. One team used the names of wineries from their local Northern California region. As they completed each cycle, their project manager would buy a bottle of wine from that winery and store it away. Once several cycles were completed, the team would celebrate by taking the wine to a fine restaurant for lunch.
The final step is to estimate the number of cycles needed for the rest of the intended functionality and to project a final implementation completion date (Fig. 10). This is accomplished by counting the new system operations that must be implemented in the rest of the chunks and dividing by the number of system operations that can be completed in each cycle to give the total number of implementation cycles. In the example used to illustrate the planning process, the estimated length of the implementation phase is 32 weeks. To facilitate communication, it is useful to assign themes to each of the implementation chunks. The project team and the users will need both a detailed and a high-level view of the project, but there are typically many members of the organization that prefer to see just the “big picture.” The themes can help convey that big picture.
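The completion-date arithmetic is straightforward and can be sketched as follows. The specific counts here are invented (the article does not give them), but they are chosen so that the method reproduces a 32-week estimate like the one above.

```python
# Estimate remaining EVO cycles and calendar weeks from the count of new
# system operations still to be implemented and the team's (halved)
# per-cycle capacity estimate.
import math


def estimate_cycles(remaining_new_operations, operations_per_cycle):
    """Cycles needed for the remaining system operations; round up,
    since a partial cycle still occupies a full cycle."""
    return math.ceil(remaining_new_operations / operations_per_cycle)


def estimate_weeks(remaining_new_operations, operations_per_cycle, weeks_per_cycle=2):
    return estimate_cycles(remaining_new_operations, operations_per_cycle) * weeks_per_cycle


# Hypothetical figures: 64 new system operations remain and the team's
# halved capacity estimate is 4 operations per two-week cycle.
cycles = estimate_cycles(64, 4)  # 16 cycles
weeks = estimate_weeks(64, 4)    # 32 weeks
```

Only *new* system operations are counted, since operations shared with earlier use scenarios have already been implemented.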
With the deliverables now defined for the first several EVO cycles, the technical lead can prepare the detailed task list for these cycles. This detailed task list should include a clear description of the task, an owner for the task, and any dependencies that the task may have on other tasks within the cycle.
It is not necessary to provide any additional detail for the groupings of use scenarios beyond the first. It is only necessary to make sure that all functionality as it is defined at this early stage is accounted for and that an overall estimate of the effort is calculated. It is expected that experiences from the first few implementation cycles will affect future cycles in many ways. These later implementation cycles will be defined in more detail several cycles before their start date. On small projects with one or two collocated teams, detailing the next three or four implementation cycles is adequate. On larger projects, it may be necessary to maintain detailed schedules that reach further out in time.
**Fig. 9.** First implementation cycles defined.
Some of the criteria commonly used in setting priorities during this initial planning activity are the following:
- **Features with greatest risk.** The most common criterion used for prioritizing the development phase implementation cycles is risk. When adopting object technology, many teams are concerned that the system performance will not be adequate. Ease-of-use is another common risk for a project. The use scenarios that will provide the best insight into areas of greatest risk should be scheduled for implementation as early as possible.
- **Coordination with other teams.** Most software development teams today have commitments to or are dependent on other teams. For example, firmware development depends on some form of hardware development. Reusable software platforms make a strong commitment to the products that are built on them. It may be necessary to adjust the priority assigned to functionality to accommodate these dependencies and commitments.
- **"Must have" versus "want" functionality.** All product features are not created equal. Some features are considered critical to the success of a project, while some features would simply be nice to have. Some development projects must meet well-defined standards and may even have to pass certification tests of their functionality that are defined by governing regulatory agencies. On these projects, it is often best to complete the required or "must have" functionality before the value-added or "want" functionality. Those use scenarios that capture the required functionality should be given higher priority than those that capture only desired functionality.

  This same criterion can also apply to core or fundamental functionality that must be in place before additional functionality can be implemented. It may be necessary to build up in a layered fashion the core functionality that all other functionality will depend on. It is imperative that each cycle contributing to the core functionality be defined so that some validation or feedback can be obtained.
- **Most popular or most useful features first.** If project risks are minor and if project commitments and dependencies are insignificant, then prioritization of use scenarios can be based on value to the intended user. Those use scenarios that are the most popular or will be of the most value to the user should be completed first.
- **Infrastructure development.** A significant amount of development environment infrastructure must be put in place during the first few implementation cycles. The tools that will be used, such as the compiler, debugger, and software asset configuration manager, as well as the processes that are adopted, can be developed in an evolutionary fashion in parallel with the functionality intended for the user. Some teams have found it valuable to make the infrastructure tasks an explicit category in the plan for each implementation cycle.
Development Phase
With both the development phase plan and the detailed plans for the first few EVO cycles in place, the implementation process can begin. Each EVO cycle consists of the same basic steps: refining the analysis models, developing the design models, and writing and validating the code. The customer feedback process is executed in parallel with these tasks. The deliverables from the previous EVO cycle are evaluated by selected users or their surrogates, and decisions are made that shape the content of the subsequent EVO cycles.
**Refining the Analysis Models.** The EVO cycle begins with a review of the existing Fusion analysis models against the functionality or system operations defined as deliverables for that cycle. For each cycle, new functionality may be defined for delivery and existing functionality may be identified for modification.

The process for moving through the Fusion analysis models remains the same. Use scenarios that include the system operations must be reviewed for changes that were the result of feedback and refinement from previous EVO cycles. The object model must be reviewed for similar changes. Additional detail may be required in the object model. The system operation descriptions are reviewed for any changes and to ensure a common understanding by all members of the team.
The technical lead is a key player during the refinement of the analysis models. Because they represent the overall architecture for the system, any extensions or enhancements of the models must be made without serious compromise to the integrity of the architecture. If compromises must be made, they should be logged as defects against the architecture and considered for possible repair in a later EVO cycle.
**Design Models.** Based on the clear understanding of the deliverables for the cycle generated by the review and refinement of the analysis models, the Fusion design models can be created or updated. Object interaction graphs will determine the new classes that will be needed or the new methods that will be added to existing classes. The Fusion design models determine what coding must be done for the cycle.
**Coding and Validation.** In addition to the code that must be generated to implement the design models, any tests needed to validate this work in later cycles must also be completed. Many teams make use of test harnesses to validate their code during the early cycles of development. These test harnesses are software modules or subsystems that can exercise the method interfaces of other software subsystems. They are particularly useful during the early cycles of development when major portions of the architecture have not been implemented. They also provide great value in later EVO cycles as tools for focused and automated regression testing.
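A test harness of the kind described above can be very small. The sketch below is illustrative only: the `Translator` subsystem and its interface are invented stand-ins, not part of Fusion.

```python
# Minimal test-harness sketch: a driver that exercises the public
# interface of one subsystem with known inputs and checks the results,
# doubling as an automated regression test in later EVO cycles.

class Translator:
    """Hypothetical subsystem: translates raw input records into the
    system's internal record format."""

    def translate(self, record):
        fields = record.strip().split(",")
        return {"id": int(fields[0]), "value": fields[1]}


def run_harness(subsystem):
    """Exercise the subsystem's method interface against expected
    results; return (passed, failed) counts for regression reporting."""
    cases = [
        ("1,alpha", {"id": 1, "value": "alpha"}),
        ("42,beta\n", {"id": 42, "value": "beta"}),  # trailing newline tolerated
    ]
    passed = failed = 0
    for raw, expected in cases:
        if subsystem.translate(raw) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed


passed, failed = run_harness(Translator())
```

Because the harness drives only the subsystem's method interface, it can be run before the rest of the architecture exists, and re-run unchanged as a regression test in every later cycle.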
**Customer Feedback.** The customer feedback loop operates simultaneously with the implementation tasks. Beginning with the second cycle and continuing throughout the development phase, some group of users or surrogate users will be validating the product that the team has completed so far. The feedback that they provide must be evaluated against the value proposition of the project for appropriate decision making. It is important that the project manager, technical lead, and user liaison allocate enough time during each cycle to review plans, processes, and architectural documents to assess the impact of each decision.
**System Test Using Use Scenarios.** Although the use scenarios can be helpful in conducting unit and integration testing for each implementation cycle, they can provide the greatest value during system test. Since the use scenarios are not structured along architectural or subsystem boundaries, they tend to provide a broad level of system testing that generates paths of execution through the entire system. They may be augmented to generate boundary and stress-test conditions, and they can also serve as a basis for creating user-level documentation.
Scaling up for Large Projects
In the use of Evolutionary Fusion with large projects, and especially with those that include multiple development teams that may not even be collocated, there are a number of additional issues to consider. It may not be appropriate to integrate the deliverables from all project teams every EVO cycle. It is useful to define a higher-level set of EVO cycles and to integrate all work together at the end of those cycles. To manage these multiple levels of EVO cycles, as well as the broad set of technologies that may be involved, it is also useful to employ multiple technical leads, or architects.
Hierarchical EVO Cycles. As the size of a project team grows, a larger and larger portion of the standard EVO cycle is dedicated to integrating the work of the many project team members. To keep the standard EVO cycle as small and as efficient as possible and to let project teams progress in parallel, it is necessary to introduce hierarchical EVO cycles. These hierarchical cycles are essentially a formalized version of the chunks of functionality or groupings of use scenarios introduced earlier, under “Grouping and Prioritizing Functionality.”
The four or five major chunks or groupings that the use scenarios are initially broken into become the highest-level EVO cycles. As before, the use scenarios for the first chunk or EVO cycle are sequenced and the system operations allocated between multiple teams (Fig. 11). For large teams, it is also useful to add an integration EVO cycle at the end of each major EVO cycle.
Each team is expected to define its own user feedback and validation process for its minor EVO cycles. There will also be a feedback and validation process for each major EVO cycle of the system.
Role of Architects. Since it is difficult to define subsets of functionality that are completely independent of one another, it is important to have an identified individual or group of individuals to manage the dependencies throughout each major EVO cycle. This role is best played by the technical leads of each team, the architects. The architects play a key role in allocating system operations among the various teams during each planning phase, and they are best positioned to resolve any technical issues that emerge as a result of the parallel implementation approach. For large projects within Hewlett-Packard, weekly meetings or conference calls are typical for the architect teams.
Conclusion
Much of Hewlett-Packard’s success is attributable to the fact that it is a diverse company composed of many independent organizations. However, relatively few software development best practices have achieved widespread adoption in this environment of autonomy and diversity. Fusion appears to be an exception to this rule. Fusion’s appeal is largely a result of the respect that its creators have for software development teams. Fusion does not attempt to address every possible nuance of software development with complex notations and model variations. It does provide a reasonably simple, complete set of models that supports a team through most of the development process, acknowledging that software development is a complex and challenging endeavor.
It trusts that engineers are highly educated and talented professionals and that they are best suited to adapt a method to meet their unique project needs and working styles.
Evolutionary Development has been positioned here as a life cycle for software development, but it really has much broader application to any complex system. Fusion, the method, is changing to better meet user needs using an evolutionary approach. Based on user feedback, we merged Evolutionary Development with Fusion as the deliverable from one evolutionary cycle. There have been a number of other changes to the method, as well as to the method of delivery, again all based on user feedback. As our experience with Fusion grows, so will the method. It is our hope that the Fusion user community will continue to share experiences and to evolve the method in a direction that is both respectful and useful to all software development teams. See the sidebar: Fusion in the Real World for a brief synopsis of the book.
Acknowledgments
It is impossible to thank all those that have contributed in some way to the material covered in this paper, but I must try. First, I would like to thank the many Hewlett-Packard development teams that I have had the privilege to work with. Their unwavering dedication to creating innovative products and to adopting innovative ways of working make Hewlett-Packard a very successful company and an extremely rewarding place to work. Next, I would like to thank my colleagues of the Software Initiative who have worked with me to make Fusion and object technology as easy to learn, adopt, and adapt as possible. I would like to offer a very special thanks to the reviewers of this material, Ruth Malan, Reed Letsinger, Elaine May, and Tom Gilb. Their wealth of knowledge and experience generated insights and suggestions that have added significantly to the clarity and presentation of this material. And finally, to Derek Coleman and his team, for providing us all with the very powerful and useful set of models and notation that we call fusion.
There is simply no way around the fact that the performance of any real-time Web application is critical to the success or failure of the product. Most user communities today are very unforgiving of applications with substantial page response times. Time is a valuable commodity in today’s fast-paced Internet world, so performance is an essential aspect of user acceptance for any software product. Thus, it is critical that performance be considered from the beginning of the software development process. Now, there is a lot of common wisdom on this topic, particularly about the dangers of spending too much time up front on optimization. As in many things, the best answer is to take things in moderation and find a middle ground. Performance should be considered first at the architecture level and then at increasingly lower levels of detail as the iterative software development process continues. To begin, this chapter looks at the overall software development process and how performance engineering fits into the picture.
**Overall Performance Approach**
A basic development lifecycle with performance engineering integrated into the process is shown in Figure 10.1. Note that this process itself is often performed in an iterative manner that includes both prototypes and multiple production releases.
It is no surprise to see that the corrective measures after an unacceptable performance test get increasingly more expensive and detrimental as you are required to go farther back in the process. Thus, it is very important to spend some initial time considering performance during the establishment of the overall software architecture. It is much easier to refactor portions of the application code than it is to change the underlying software architecture. As was alluded to earlier, there should still be a balance in terms of how much time and effort is spent on this topic, but the following guidelines usually hold true:
- A scalable, efficient architecture is a must for high-performance applications.
- Lower-level optimizations can always be done later.
As this chapter looks at performance both in the overall software development process and in J2EE technology, more clarity will be brought to these two important points. The next section takes a look at each of the development steps in a bit more detail from the perspective of performance.
**Performance Engineering in the Development Process**
At the beginning of a software development effort, one of the first steps is to determine the high-level objectives and requirements of the project. In addition to identifying the key functionality provided by the system, the project objectives can include such things as the flexibility of the system or the overall performance requirements. During this time, the overall system architecture is also being developed. For performance-intensive applications and projects with demanding requirements, a scalable architecture is an absolute must. Early architecture reviews cannot ignore the performance aspect of the system.
A scalable, efficient architecture is essential for high-performance applications. Initial architecture reviews during the early stages of a project can be used to help benchmark and validate high-level performance requirements.
At this point, you are talking about the high-level software architecture, including such things as component interaction, partitioning of functionality, and the use of frameworks and common design patterns. You do not yet need to spend large amounts of effort on detailed optimizations such as the use of `StringBuffer` versus `String` or data caching down to the entity level. You are, however, still looking at high-level strategies such as the component implementation model, data caching strategies, and possibilities for asynchronous processing.
The creation of the basic software architecture at this point usually includes some kind of narrow but deep prototype, or proof-of-concept, which executes the communication through all of the layers and primary components of the architecture. This could include a user interface that retrieves data from the database and then sends an update all the way back through the architecture. Some basic load testing can occur at this point to obtain a ballpark estimate of transactions per second, page response time, or some other meaningful unit of measure that can help to frame the discussion on performance. This kind of data can be very helpful in terms of determining the validity of any high-level performance requirements that are being agreed upon during the project’s early stages.
Once the individual use cases or scenarios of the system move into the analysis step, specific performance requirements often emerge for different functions and business processes. The analysis step allows you to apply the high-level project objectives against the specific functional requirements in order to derive these lower-level performance requirements for particular functions or pages.
Using the combination of the project objectives, functional requirements, and any case-specific performance requirements, the process moves into the design phase. It is important that performance be considered at this phase because, in many cases, there are trade-offs that must be made between competing objectives on both the business and technical sides of the project. Planning for performance can sometimes require give and take between business requirements, such as the overall flexibility of the system, and technical constraints, such as adherence to pure object-oriented design techniques. Thus, you cannot ignore performance as a consideration during the design phase, yet at the same time you should not let it drive every decision.
In the coding phase, everyday coding best practices become a focal point that leads directly to the resulting quality of the product. At this point, common design patterns have been prototyped, optimized to the extent that they can be in limited-use situations, and are being applied to the application functionality. It is the responsibility of the development team to then follow any guidelines set forth, such as the aforementioned use of StringBuffer when a large amount of string concatenation is being done, to avoid the creation of many small, temporary objects. These are the more minor things that, if done simply out of habit, can all add up to a robust set of application code and the best possible performance results. These types of issues can also be caught during code reviews and used as a way to validate and communicate best practices to a development team.
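The StringBuffer guideline above can be sketched in a few lines. This is an illustrative example rather than code from the text; it uses `StringBuilder`, the unsynchronized counterpart of `StringBuffer` introduced in Java 5, but the allocation argument is identical for both classes.

```java
// Why repeated String concatenation in a loop is costly, and the
// buffered alternative the text recommends.
public class ConcatDemo {
    // Naive version: each += allocates a new intermediate String object,
    // all of which must later be garbage collected.
    public static String concatNaive(String[] parts) {
        String result = "";
        for (String p : parts) {
            result = result + p; // new String created on every iteration
        }
        return result;
    }

    // Buffered version: one mutable buffer, far fewer temporary objects.
    // (Use StringBuffer instead of StringBuilder on pre-Java-5 JVMs.)
    public static String concatBuffered(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = {"a", "b", "c", "d"};
        System.out.println(concatNaive(parts));    // abcd
        System.out.println(concatBuffered(parts)); // abcd
    }
}
```

Both methods produce the same result; the difference shows up only in object-creation counts, which is exactly the kind of habit-level issue a code review can catch.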
In iterative software development, performance tests are typically run after significant intermediate iterations have been completed or before releases go into production. Testing tools are often used to generate a target number of simulated clients, and the results are measured, again resulting in a set of metrics such as average page response time and transactions per second. If the results are not satisfactory and the root causes are not immediately apparent, profiling tools can be used to determine where the trouble spots are in the code.
**NOTE** If your project or organization is on a small budget, there is a nice load-testing tool called OpenSTA available under a GPL (GNU General Public License) license that can be found at http://www.opensta.org. This tool is fairly easy to set up and use to run simulated load tests on Web applications. It may lack all of the features available within some commercial packaged solutions, but it provides almost all of the basic capabilities and reporting functions.
Even at the end of a development cycle, there are still many lower-level code optimizations that can be done, for example, additional data caching or the use of more efficient Java collections classes. However, major changes to the code involving the component implementation and interaction models are difficult to make unless a modular architecture has already been put in place. Likewise, if the architecture itself is not scalable or efficient for its purposes, you have an entire codebase that may be affected by changes sitting on top of it. If a commonly used pattern in the application is redesigned at this point, it likely has many incarnations across the codebase that need to be changed. Alternatively, if you are talking about something like moving components from Entity Beans to regular Java classes, the migration is much more difficult if you do not have a service layer isolating the business logic from the presentation layer. These types of changes can be costly at this point in the game.

Similarly, changes to the application design can have a significant effect. For example, you may have made much of the business logic of the application configurable through database tables in order to meet a project objective of flexibility. A potential resulting effect of this, in terms of performance, is that the application becomes terribly slow due to the extensive database I/O throughout each transaction. A change to this aspect of the design, such as moving more of the logic back into the application code, could very easily affect the overall flexibility. Now, the role of architecture in this project is not only to provide an efficient foundation to implement these designs but also to allow for a mitigation plan. If you have wrapped access to the configuration data and isolated it to a set of objects, you may be able to cache the data in memory and easily speed up the performance of the application. You may also need to build in a refresh mechanism based on the requirements. In terms of implementing this type of change, it is much less painful to go back and recode a wrapper class than it is to update every business component that used the configuration data. In fact, the foundation logic for the business objects followed this same pattern through the use of the `MetadataManager` and `CacheList` components.
As a last resort, there may be a need to go back and review the specific performance requirements and possibly even validate that the project objectives are in line with what can realistically be done to provide the most value to the user community. To avoid having to go through this, the time and effort spent on performance can be spread a bit more evenly throughout the life of the project in order to mitigate, measure, and meet the performance requirements spelled out for your application.
**Measuring Performance**
Fully evaluating the performance capabilities and capacity of an application often requires the use of different metrics and different perspectives. Initially, it is usually best to put the focus at the transaction level and measure the individual transaction time or page response time. As the development process continues, the focus expands to include measurements of the transaction throughput, the ability to support concurrent users, and the overall scalability of the application. One of the main challenges in terms of performance in application development is to try to balance these vantage points.
**Individual Transaction Response Time**
During the early prototyping stages, the first question to ask is, “How fast can I push a single transaction through the system?” This is easy to test, requiring only a simple client program with a timer, yet it provides the basic unit of measure upon which the vast majority of performance metrics will be based. The result of a load test or sizing exercise is usually a multiple of the processing speed of each individual unit of work. Thus, the first area of focus is the basic patterns and architecture components exercised by some basic transactions. If you create efficient patterns going through the core of the user interface and business component models, these basic transactions can be optimized and used as a foundation for the application. Keep in mind, however, that your work does not end here because the next perspective may impact some of the strategies chosen during this first exercise.
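The "simple client program with a timer" described above can be as small as the following sketch. The `Runnable` is a stand-in for a real request against the system under test; everything here is illustrative rather than code from the text.

```java
// Minimal single-transaction response-time measurement.
public class ResponseTimer {
    // Runs one transaction and returns its elapsed wall-clock time in ms.
    public static long timeMillis(Runnable transaction) {
        long start = System.nanoTime();
        transaction.run();
        return (System.nanoTime() - start) / 1000000L;
    }

    public static void main(String[] args) {
        // Stand-in "transaction": sleep 50 ms to simulate a server round trip.
        long elapsed = timeMillis(new Runnable() {
            public void run() {
                try { Thread.sleep(50); } catch (InterruptedException e) { }
            }
        });
        System.out.println("Transaction took " + elapsed + " ms");
    }
}
```

In practice you would run the measured transaction many times and report an average, since a single sample is dominated by warm-up effects such as class loading and JIT compilation.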
Transaction Throughput and Scalability
The second aspect of performance that you want to measure takes the area of focus up a level to the behavior of the application operating under a heavy user load. Scalability is one of the main concerns here that can potentially impact some of the optimizations you want to perform at the individual transaction level. The J2EE component architecture provides a foundation for highly scalable and available applications on which to base your approach. However, there are a couple of things to keep in mind, primarily the memory consumption of the application and the size of the user HttpSession object. As an example, you may have a blazing fast page response time for a single user, but that may have been enabled by storing an entire result set from the database in the HttpSession. Subsequent page requests can then page through the data without having to go back to the database. If you are in this situation with a large data set, however, you may be able to get only a handful of concurrent users on an individual box because of the memory footprint involved with the application components.
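The `HttpSession` trade-off above can be sketched with a plain `Map` standing in for the session, so the example runs outside a servlet container. The attribute names and structure here are illustrative assumptions, not from the text.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Two ways of supporting "next page" navigation, with very different
// per-user memory footprints.
public class SessionFootprint {
    // Anti-pattern: the entire result set lives in every user's session.
    // Fast paging, but memory use grows with (users x result-set size).
    public static void cacheWholeResultSet(Map<String, Object> session,
                                           List<String> rows) {
        session.put("results", new ArrayList<String>(rows));
    }

    // Lighter alternative: keep only the query key and page index, and
    // re-fetch one page from the database on each request.
    public static void cachePagePositionOnly(Map<String, Object> session,
                                             String queryKey, int page) {
        session.put("queryKey", queryKey);
        session.put("page", Integer.valueOf(page));
    }

    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<String, Object>();
        cachePagePositionOnly(session, "openOrders", 3);
        System.out.println("session keys: " + session.keySet());
    }
}
```

The second variant trades extra database I/O for a near-constant session size, which is usually the right trade for large result sets and high concurrent user counts.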
As you look at the transaction throughput with various concurrent user levels, you also want to ask the question, “Does the system performance degrade as I add concurrent users and transactions?” You hope not, as you would like to see a linear response time as you add concurrent users to an application. Once you have hit the maximum number of users by pushing the current hardware to its limit, you would then like to see a linear response time as you add additional hardware. This type of scalability is made possible through the clustering and load balancing of the application components on the Web and EJB tiers. It enables you to add additional hardware and create redundant instances of the application server to meet the demands of your application. The value of the EJB component model is that it provides a standard method of building components to plug into a container and automatically take advantage of these infrastructure services.
Object Instantiation, Garbage Collection, and Scalability
In the Java language, there is also another aspect of code running in a JVM that affects the ideal of linear response time. There are actually two performance hits incurred by the JVM, both associated with instantiating an object in Java:
1. The initial cost of allocating memory to create the object
2. The secondary cost of tracking the object and later running it through the garbage collection (GC) process, potentially multiple times, until it is eventually destroyed and the memory is freed up for other use
Every object that is created in your code must later be checked by the JVM to see if it is being used by another object. This must be done before it can be freed and the memory reallocated for other use. The more objects that are created, the longer this garbage collection process takes, and the less free memory that is available, which then leads to the garbage collection process running more often. You can easily see how this can create a downward spiral that quickly degrades both the transaction throughput and the individual response times.
To quickly see the effects of garbage collection, use the `-verbose:gc` JVM flag. This causes the JVM to write information to the output log showing the time spent in GC, memory used, and memory freed each time GC is run.
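Alongside the `-verbose:gc` log, the `java.lang.Runtime` API gives a programmatic view of heap usage, which can be handy inside a load-test harness. Note that `System.gc()` is only a hint to the JVM, so the exact numbers vary from run to run; this is a sketch, not a precise measurement tool.

```java
// Observe heap usage around a burst of short-lived allocations.
public class GcObserver {
    // Bytes currently in use on the heap.
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedBytes();
        // Allocate ~50 MB of garbage that becomes unreachable immediately.
        for (int i = 0; i < 50; i++) {
            byte[] chunk = new byte[1024 * 1024];
            chunk[0] = 1; // touch it so the allocation is observable
        }
        System.gc(); // request (not force) a collection
        long after = usedBytes();
        System.out.println("used before: " + before + " B, after: " + after + " B");
    }
}
```

Running the same program with `-verbose:gc` shows the collection pauses that correspond to these allocations.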
The problem of the downward spiral is magnified if only one JVM is being used because transactions can continue to become backlogged until they eventually start to time out or reach completely unacceptable levels. Figure 10.2 shows a graph to represent the effects of garbage collection on response time for a single JVM under a heavy transaction load.
The secondary cost of object instantiation also prevents you from simply applying the tempting cure of adding more memory to the heap. With a larger heap size, the garbage collection process can become even more cumbersome to manage and then takes away valuable computing cycles that could be used for processing user transactions. Thus, adding more memory works to an extent, but at some point, it may have a marginally negative effect. Once again, the clustering and load-balancing capabilities of the J2EE application server come to the rescue to provide the scalability you need to help maintain a relatively even response time. Because requests are distributed across a cluster of application server instances, you can typically avoid having to use the JVMs that are garbage collecting to process the current transaction. The load-balancing algorithm, of course, is usually not tied directly into the GC status of the JVM, but it does use the law of averages and probabilities to work in your favor. What the clustering also allows you to do is to use a moderately sized memory heap for each JVM instance so that you can find the optimal setting for your application. Tuning this JVM parameter can often have a meaningful effect on the overall performance of an application. Usually it takes a number of trial-and-error load tests to determine the optimal settings for the heap size, although a few general guidelines include setting the minimum size to half of the maximum size, which usually does not exceed 512 MB.
The net result of all of this is a much more even response time and consistent transaction throughput as concurrent user levels increase. Figure 10.3 shows what an improved response time might be for an application clustered across multiple JVMs. Barring other extraneous factors, some minor blips in the curve still appear due to the occasional time periods when a number of the JVMs happen to be collecting garbage at the same time. This is largely unavoidable, but it has a much smaller effect on the overall response curve than in the scenario with a single JVM.
**ECperf—An EJB Performance Benchmark**
Another performance metric you can use is the ECperf benchmark created through the Java Community Process that is now a part of the J2EE suite of technologies. Its goal is to provide a standard benchmark for the scalability and performance of J2EE application servers and, in particular, the Enterprise JavaBean aspect that serves as the foundation for middle-tier business logic. The focus of the ECperf specification is not the presentation layer or database performance; these aspects are covered by other measures such as the series of TPC benchmarks. The focus of the ECperf tests is to test all aspects of the EJB component architecture including:
- Distributed components and transactions
- High availability and scalability
- Object persistence
- Security and role-based authentication
- Messaging, asynchronous processing, and legacy application integration
The software used for the test is intended to be a nontrivial, real-world example that executes both internal and external business processes, yet it has an understandable workflow that can be consistently executed in a reasonable amount of time. Four business domains are modeled in the ECperf 1.1 specification as part of a worldwide business case for the tests:
- Manufacturing
- Supplier
- Customer
- Corporate
A number of transactions are defined for each of the domains, each of which is given a method signature to be used by an EJB component in the test. These transactions include such things as ScheduleWorkOrder and CreateLargeOrder in the manufacturing domain, as well as NewOrder and GetOrderStatus in the customer domain. Subsequently, two applications are built using these domains. The first is an OrderEntry Application that acts on behalf of customers who enter orders, makes changes to them, and can check on their status. The second is a Manufacturing Application that manages work orders and production output. The throughput benchmarks are then determined by the activity of these two applications on the system being tested. Reference beans are given for the test, and Entity Beans can be run using either BMP or CMP. The only code changes allowed are for porting BMP code according to regulations set forth in the specification. Deployment descriptors for all of the beans must be used as they are given in order to standardize the transactional behavior as well as the rest of the deployment settings. The reference implementation of these transactions uses stateless and stateful Session Beans as a front to Entity Beans, although the ratio of components is fairly heavily weighted toward Entity Beans.
The primary metric used to capture the result is defined using the term BBops/min, which is the standard for benchmark business operations per minute. This definition includes the number of customer domain transactions plus the number of workorders completed in the manufacturing domain over the given time intervals. This metric must be expressed within either a standard or distributed deployment. In the standard, or centralized deployment, the same application server deployment can be used for all of the domains and can talk to a single database instance containing all of the tables. The distributed version requires separate deployments and different database instances. These two measurements are thus reported as BBops/min@std or BBops/min@Dist, respectively. For either of these measurements, there is a very helpful aspect built into the specification for technology decision makers, the measure of performance against price, that is, $ per BBops/min@std, also commonly referred to as Price/BBops.
The ECperf 1.1 specification also announced that it will be repackaged as SPECjAppServer2001 and reported by the Standard Performance Evaluation Corporation (http://www.spec.org). SPECjAppServer2001 will cover J2EE 1.2 application servers while SPECjAppServer2002 will cover J2EE 1.3 application servers. A good “apples-to-apples” comparison of application servers like this has been a long time coming. The Sun Web site currently refers you to http://ecperf.theserverside.com/ecperf/ for published results. To give you a ballpark idea, there are currently a couple posted results over 16,000 BBops/min@std for under $20/BBops.
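The Price/BBops arithmetic is simple division of total system cost by throughput. As a quick sanity check on the ballpark figures quoted above (the $320,000 configuration here is an assumed example consistent with "16,000 BBops/min@std for under $20/BBops", not a published result):

```java
// Price/performance metric from the ECperf specification:
// dollars per benchmark business operation per minute.
public class PricePerf {
    public static double pricePerBBops(double totalSystemCostDollars,
                                       double bbopsPerMin) {
        return totalSystemCostDollars / bbopsPerMin;
    }

    public static void main(String[] args) {
        // A hypothetical $320,000 configuration delivering 16,000 BBops/min@std
        System.out.println(pricePerBBops(320000, 16000)); // 20.0
    }
}
```

Lower values are better on this metric, and it lets decision makers compare configurations of very different absolute cost on an equal footing.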
Performance in J2EE Applications
This section takes a look at various techniques you can use to optimize the architecture, design, and code within your J2EE applications. As a first step, there are key aspects within all Java programs that need to be addressed for their potential impact on application performance. Additionally, there are various performance characteristics associated with J2EE components and technologies that are worth noting. Many solutions involve using enterprise Java services whenever they provide the most benefit, but not as a standard across the board. Using the enterprise components across the board from front to back in the software architecture is a common tendency in building J2EE architectures. A key example of this is the use of Entity Beans. Relatively speaking, Entity Beans are fairly heavyweight components, and thus should not be used to model every business object in an application, particularly if each Entity Bean maps to a row in the database. Doing this can quickly degrade the scalability, and thus the usability, of an application. This goes back to one of the main points, that a scalable architecture is a must for almost any system, and design guidelines must be applied when deciding on the foundation for software components as well as in building the individual components themselves.
Core Aspects of Java Application Performance
Two significant performance aspects to consider for almost all applications are:
- Object instantiation and garbage collection
- Disk and database I/O
**Object Instantiation**
A key point to take away from the earlier discussion regarding object instantiation and garbage collection is that, to some degree, objects should be instantiated wisely. Each new version of the JVM has seen significant gains in the efficiency of the garbage collection process, but if you can reasonably limit or delay the creation of objects, you can help yourself greatly in terms of performance. This is especially true for larger components that usually encompass the instantiation of many objects. Of course, this does not mean you should go back to doing pure procedural software development and put all of your logic in a single main method. This is where performance as a design consideration comes into play. You don’t want to sacrifice the potential for reusability and flexibility through solid object-oriented design; thus, you don’t let performance drive all of your decisions. Nonetheless, keep it in the back of your mind. And if you aren’t quite sure of a potential impact, you can use an abstraction or design pattern to mitigate the concern by providing an escape route to take later. This means that if you have isolated an architecture layer or encapsulated a certain function, it can be changed in one place without great cost or side effects to the remainder of the application.
To maximize the efficiency of time spent considering performance in the design process, consider the following approach. Rather than look at every object in the entire object model, perhaps spend some time concentrating on the two extremes in your implementation: the extremely large objects and components and the extremely small objects. For obvious reasons, large objects and components rapidly increase the memory footprint and can affect the scalability of an application. In the case of larger components, they often spawn the creation of many smaller objects as well. Consider now the case of the very small object, such as the intermediate strings created by the following line of code:
```java
String result = value1 + value2 + value3 + value4;
```
This is a commonly referenced example: because String objects are immutable, `value1` and `value2` are concatenated to form an intermediate String object, which is then concatenated to `value3`, and so on until the final String result is created. Even if these strings are only a few characters in size, consider that each of these small String objects has a relatively equal impact on your secondary cost consideration, the object tracking and garbage collection process. An object is still an object, no matter what the size, and the JVM needs to track all of the other objects that reference this one before it can be freed and taken off of the garbage collection list. Thus, all of those little objects, although they do not significantly impact the memory footprint, have an equal effect on slowing down the garbage collection process as it runs periodically throughout the life of the application. For this reason, you want to look at places in the application where lots of small objects are created to see whether other options can be considered.
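One common remedy is to build the result in a single mutable buffer so that no intermediate String objects are created. The sketch below assumes a pre-Java-5 JVM, hence `StringBuffer` rather than `StringBuilder`; the class and method names are illustrative only:

```java
public class ConcatExample {

    // Appends each value into one mutable buffer instead of
    // creating an intermediate String for every + operation.
    public static String concat(String value1, String value2,
                                String value3, String value4) {
        StringBuffer result = new StringBuffer();
        result.append(value1);
        result.append(value2);
        result.append(value3);
        result.append(value4);
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("a", "b", "c", "d"));
    }
}
```

The same trade-off applies to any loop that concatenates strings: a single buffer turns many short-lived objects into one.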
In the study of business object components, the concept of lazy instantiation, which delays the creation of an aggregated object until it is requested, was discussed. If strict encapsulation is used, where even private methods use a standard `get<Object>` method, you can delay the instantiation of the object until it is truly necessary. This concept is particularly important for value objects or other objects used for data transport across a network. This practice minimizes RMI serialization overhead as well as reducing network traffic.
**BEST PRACTICE** Use lazy instantiation to delay object creation until necessary. Pay particular attention to objects that are serialized and sent over RMI.
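As an illustration of this practice, here is a minimal sketch of a serializable value object whose aggregated list is created only on first access. The class and method names are hypothetical, not taken from the reference architecture:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical value object: the aggregated line-item list is not
// instantiated until a caller actually asks for it.
public class OrderValueObject implements Serializable {

    private List lineItems; // created lazily on first access

    // All access, even from within this class, goes through the
    // getter so the lazy check cannot be bypassed.
    public List getLineItems() {
        if (lineItems == null) {
            lineItems = new ArrayList();
        }
        return lineItems;
    }

    // Lets callers (and tests) observe whether instantiation
    // has happened yet.
    public boolean isLineItemsLoaded() {
        return lineItems != null;
    }
}
```

If the object is serialized before `getLineItems` is ever called, the null field costs almost nothing on the wire, which is the RMI benefit described above.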
Another common use of this concept can be put into practice when lists of objects are used. In many application transactions, a query is executed and a subset of the resulting objects is dealt with in a transactional manner. This concept is particularly important if the business object components are implemented as Entity Beans. For a collection of size \( n \), as was discussed in the Business Object Architecture chapter, the potential exists for the \((n + 1)\) Entity Bean finder problem, which results in additional database lookups for work that could be accomplished with a single JDBC query. However, you also want to consider the characteristics of Entity Beans and their effect on the container’s performance. Although Entity Beans are fairly heavyweight components, the optimized transaction model is fairly efficient because Entity Bean instances are pooled and shared by the container for different client transactions. However, once an Entity Bean instance is pulled into a client transaction, it cannot be shared by another client until either the transaction ends or the container passivates the state of that instance for future use. This passivation comes at a cost and additional complexity because the container must activate the instance once again to complete the transaction later in a reliable, safe manner. Considering that the Entity Bean components have a relatively large fixed cost and that there may be many different types in a complex application, you want to size the component pools appropriately and find a balance between resource consumption and large amounts of activation and passivation that can slow down the application server. With all of this being said, if you can avoid using an Entity Bean for pure data retrieval, it is worth doing. Perhaps not for that individual transaction, but it will aid the scalability and throughput of the overall application under a heavy user load. This comes back to the analysis of performance measurement that first starts at the individual transaction level but then has to consider the effect on the overall application performance.
This concept is also in line with the idea of using business objects only for transactional updates as opposed to requiring that they be used for data retrieval as well. Thus, if your application deals with a collection of objects, it is perhaps best to first run the query using JDBC, similar to the ObjectList utility class. You can then iterate through the collection and instantiate or look up the Entity Bean equivalents when you want to perform a transaction update on a given instance. In the cases in which you do not update the entire collection, you can gain the greatest benefit from this technique. The database lookups for an $n$ size collection are then somewhere between 1 and $n + 1$, depending on the particular circumstances of the transaction. You can also compare this to an aggressive-loading Entity Bean strategy that theoretically limits you to a single database query but then has the overall cost associated with using a significant portion of the free instance pool. In other words, you sacrifice the overall transaction throughput for the benefit of the individual transaction in a heavy user load setting. Note that if the transaction volume is quite sporadic for a given application, an aggressive-loading strategy for Entity Beans may be the better solution because the assumption of fewer concurrent transactions is made; thus the cross-user impact is limited.
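The control flow above can be sketched as follows. The single JDBC query and the Entity Bean finder are stood in by in-memory stubs, and all names here are illustrative; the point is only the shape of the pattern: one bulk read, then a per-row lookup only for the rows actually updated:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "query first, look up on update" pattern. The
// database and the Entity Bean home are simulated in memory so the
// number of finder calls can be observed directly.
public class CollectionUpdatePattern {

    private Map database = new HashMap();   // primary key -> row data
    private int finderCalls = 0;            // simulated findByPrimaryKey calls

    public CollectionUpdatePattern() {
        database.put(Integer.valueOf(1), "row one");
        database.put(Integer.valueOf(2), "row two");
        database.put(Integer.valueOf(3), "row three");
    }

    // Stand-in for a single read-only JDBC query returning all keys.
    public List queryAllKeys() {
        return new ArrayList(database.keySet());
    }

    // Stand-in for the Entity Bean findByPrimaryKey call.
    public Object findByPrimaryKey(Integer key) {
        finderCalls++;
        return database.get(key);
    }

    // Iterate the lightweight result set and look up the Entity Bean
    // equivalent only for the rows being transactionally updated.
    public int updateSelected(List keysToUpdate) {
        List all = queryAllKeys();
        for (int i = 0; i < all.size(); i++) {
            Integer key = (Integer) all.get(i);
            if (keysToUpdate.contains(key)) {
                findByPrimaryKey(key); // transactional update would go here
            }
        }
        return finderCalls;
    }
}
```

Updating \( k \) rows out of \( n \) costs one bulk query plus \( k \) finder calls, which is the "between 1 and \( n + 1 \)" range described above.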
**Disk and Database I/O**
Often, the first thing to look at when tuning an application is the amount of database and disk I/O because of its relative cost compared to regular computational cycles. Thus, look to minimize the amount of database calls and file read/writes in your application. The first strategy to do this is usually to analyze the access patterns and identify redundant or unnecessary database access. In many cases, a significant benefit can be derived from performing this step at the design review and code review stages of a project. Eventually, your application approaches the minimum level of access required, and then you need to look to other techniques to make further improvements, which is where data caching comes into play.
Data caching commonly refers to storing application data in the memory of the JVM, although in general terms, it could also involve storing the data somewhere closer to the client or in a less costly place than the original source. In a sense, you can refer to data stored in the HttpSession of a Web application as being cached if you are not required to go through an EJB component and to the application database to get it. In practice, the HttpSession could be implemented by the container through in-memory replication or persistent storage to a separate database, although, in both cases, the access time to get to the data is likely less than it would be to go to the definitive source. Now, of course, the definitive source is just that, and you need to be able to refresh the cache with the updated data if it changes and your application requirements dictate the need, which is often the case. In the Business Object Architecture chapter, a solution for this issue was looked at in the J2EE architecture using JMS as a notification mechanism for caches living within each of the distributed, redundant application server instances. Remember that even this approach has a minor lag time between the update of the definitive source and the notification message being processed by each of the caches. This may still not be acceptable for some mission-critical applications; however, it does fit the needs of many application requirements.
The reference architecture uses an XML configuration file for application metadata, and many applications use a set of configuration values coming from a properties file. This type of data is a perfect candidate for caching because it does not change frequently and may not even require a refresh mechanism because changes to this data often require a redeployment of the application.
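A minimal sketch of such a configuration cache, assuming the properties are loaded from classpath resources, might look like this. The class and method names are illustrative only; each named resource is read once and then served from memory, with no refresh mechanism, matching the redeploy-to-change assumption above:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Read-mostly configuration cache: a property set is loaded from
// the classpath at most once per name and then always served from
// the in-memory copy.
public class ConfigCache {

    private static Map cache = new HashMap();

    public static synchronized Properties getProperties(String name)
            throws IOException {
        Properties props = (Properties) cache.get(name);
        if (props == null) {
            props = new Properties();
            InputStream in =
                ConfigCache.class.getResourceAsStream(name);
            if (in != null) {
                props.load(in);
                in.close();
            }
            // Cache the result (even an empty set for a missing
            // resource) so the lookup cost is paid only once.
            cache.put(name, props);
        }
        return props;
    }
}
```

Because the method is synchronized and the cached object is returned by reference, every caller after the first shares the same `Properties` instance.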
The use of Entity Beans to cache data should also be addressed here. Whereas Session Beans are used to deal with the state of a particular client at a time, Entity Beans represent an instance of a persistent object across all clients. So how much can you rely on Entity Beans to help with caching? Unfortunately, the benefit is not as great as one might think. Although an instance of an Entity Bean can be shared across clients, the same issue of updates to the definitive source applies here. If you deploy your EJB components to a single instance of an application server, then you can, in fact, take full advantage of this caching. However, most significant deployments wish to use the clustering and load-balancing features of the application servers, so multiple instances are deployed and the cached Entity Bean must consider the possibility of updates by another copy of that Entity Bean in another instance. Thus, in a clustered environment, the ejbLoad method must always be invoked at the beginning of a transaction to load the current state and ensure data integrity.
**Object Caching**
The concept of caching can also be applied to objects that are relatively expensive to instantiate. In a J2EE environment, this can include such objects as the JNDI Initial Context and the EJB Home interfaces. In your own application, you may also have complex components or objects that are expensive to instantiate. Some examples of this might be classes that make use of BeanShell scripts or other external resources that involve I/O, parsing, or other relatively expensive operations. You may want to cache instances of these objects rather than instantiate new ones every time if one of the following requirements can be met:
- Objects can be made thread-safe for access by multiple concurrent clients.
- Objects have an efficient way to clone themselves.
**JNDI Objects**
Relatively speaking, the JNDI operations can be somewhat expensive for an application. The creation of an InitialContext and the subsequent lookups for EJB Home interfaces should be looked at as a performance consideration. If your application does not use a large number of EJBs, this may not be worth any further thought. For example, if your business logic is encompassed within Session Beans and you typically have only one EJB lookup in a transaction, it may not be worth the trouble to try to optimize this step. However, if you have a large number of Entity Beans used within a given transaction, it can make a noticeable difference if you can avoid the creation of an InitialContext and a subsequent JNDI lookup for each component. Caching the JNDI objects should be used with caution, as there are a number of potential impacts to consider. The InitialContext object can be created once, such as on the Web tier in the controller servlet’s init method, and then used for all client requests rather than creating a new one for each individual request. In a set of tests with heavy user loads, a single shared InitialContext instance did not present any problems; however, you should thoroughly test in your target environment to become comfortable with the approach.
Before looking at the EJBFactoryImpl code for an implementation of this solution, you should also consider caching the EJB Home interface objects. This technique can also provide a performance boost in some cases but should be used only after careful consideration. Many application servers provide a Home interface that is aware of the available, redundant application server instances. However, ensure that this is the case for your environment before using this technique. If you are going to reuse an existing Home interface, you don’t want one that pins you to a given instance, or you will lose all of your load-balancing and failover capabilities. The other aspect to consider when reusing the Home interface is that problems can result if one or more of the application server instances are brought up or down. A Home interface may become “stale” if the EJB server is restarted, and if instances are added or removed from the cluster, the existing Home interface is likely not to be aware of this. In this sense, there also needs to be a refresh capability for the Home interface cache unless it is acceptable to restart the Web tier, or other such client tier, when a change is made to the EJB server configuration. This is likely to be a manual process unless a programmatic management capability can be introduced into the application.
Here are the relevant portions of EJBFactoryImpl that use a cached InitialContext and a cached collection of EJB Home interfaces keyed by the object name. In the examples in this book, this class is always used in the context of an EJB tier underneath a service component deployed as a Session Bean. Thus, note that the InitialContext is created without any properties in a static initialization block. In order to be used by remote clients, this class would need to be modified to pass in the provider URL and context factory, but you can see the basic idea from this example. Each time the findByPrimaryKey method is invoked, the helper method getHomeInterface is called; it first looks in a collection of Home interfaces to see if the interface was already created and cached. If it is not there, then it is created and stored for future use. This implementation uses a lazy-instantiation approach in which the first time through is a bit slower and then subsequent requests benefit from the performance improvements. Alternatively, this initial cost could be incurred at server startup time:
```java
public class EJBFactoryImpl extends BusinessObjectFactory {

    // Cached initial context
    private static InitialContext jndiContext;

    // Cached set of home interfaces keyed by JNDI name
    private static HashMap homeInterfaces;

    static {
        try {
            // Initialize the context.
            jndiContext = new InitialContext();
            // Initialize the home interface cache.
            homeInterfaces = new HashMap();
        } catch (NamingException ne) {
            ne.printStackTrace();
        }
    }

    /**
     * Helper method to get the EJBHome interface
     */
    private static EJBHome getHomeInterface(String objectName,
                                            BusinessObjectMetadata bom)
            throws BlfException {
        EJBHome home = null;
        try {
            // Check to see if you have already cached this
            // Home interface.
            if (homeInterfaces.containsKey(objectName)) {
                return (EJBHome) homeInterfaces.get(objectName);
            }
            // Get a reference to the bean.
            Object ref = jndiContext.lookup(objectName);
            // Get hold of the Home class.
            Class homeClass =
                Class.forName(bom.getEJBHomeClass());
            // Get a reference from this to the
            // Bean's Home interface.
            home = (EJBHome)
                PortableRemoteObject.narrow(ref, homeClass);
            // Cache this Home interface.
            homeInterfaces.put(objectName, home);
        } catch (Exception e) {
            throw new BlfException(e.getMessage());
        }
        return home;
    }

    /*
     * Discover an instance of a business object with the
     * given key object.
     */
    public static Object findByPrimaryKey(String objectName,
                                          Object keyObject)
            throws BlfException {
        // Obtain the business object metadata.
        BusinessObjectMetadata bom =
            MetadataManager.getBusinessObject(objectName);
        // Get the Home interface.
        EJBHome home = getHomeInterface(objectName, bom);
        // Use the Home interface to invoke the finder method...
        // ...
    }
}
```
**BEST PRACTICE** For increased performance in applications that use a large number of Entity Beans, consider caching the JNDI InitialContext and EJB Home interfaces. This optimization should be encapsulated within the EJB business object factory so there is no effect on business object client code. Many application servers provide a Home interface that is aware of the available, redundant application server instances. However, ensure that this is the case for your environment before using this technique so you don’t lose the load-balancing and failover capabilities of the application server.
**Entity Beans**

Many of the performance characteristics of Entity Beans have already been covered. Although they are fairly heavyweight components, the container pools instances of them, and the regular transaction model can be quite efficient. However, you can get into trouble when the container is forced to perform large amounts of activation and passivation, which can occur under heavy, concurrent usage. There are a number of other things to keep in mind. For example, when using remote interfaces, you want to minimize the amount of remote method invocation and RMI overhead. Thus, you use value objects to communicate data to the Entity Bean. You also want to avoid iterating through collections of Entity Beans through finder methods unless you can mitigate the risks of the \((n + 1)\) database lookup problem.
If you are using a Session Bean layer as a front to Entity Beans, similar to the reference architecture and the services layer, you should use local interfaces to access your Entity Beans. This avoids the overhead of RMI and remote method invocations. This forces you to colocate all related Entity Beans in a transaction in a given application server deployment, although this usually does not cause much of a problem unless you have a truly distributed architecture. In many cases, all of the beans are running in a standard centralized deployment for performance reasons and you can do this with ease. At this point, the biggest overhead left for each Entity Bean is the JNDI lookup to access the local interface, and there are options to address this given the earlier discussion of JNDI and object caching.
In many cases, Container-Managed Persistence (CMP) provides the best option in terms of performance for Entity Bean persistence. Bean-Managed Persistence (BMP) suffers from a serious performance flaw in that a single lookup of an Entity Bean can actually cause two database hits. This problem is similar to the \((n + 1)\) problem considered for a collection of one. The container needs to look up the bean using the primary key after a Home interface method is invoked. Once the component is located and a business method is invoked from the remote or local interface, the container calls the `ejbLoad` method, which typically uses application JDBC code to select the remainder of the properties from the database. In the container-managed approach, the container can optimize these steps into one database call. This is a serious point to weigh when deciding whether to use BMP in your Entity Beans. There are also many other cases in which the container can optimize how persistence is implemented, such as checking for modified fields before executing `ejbStore`. Finally, a major benefit of using Entity Beans is the object persistence service, so carefully consider the benefits of using BMP before taking this approach.
Another factor that can affect the performance of Entity Beans is the transaction isolation setting. The safest option is `TRANSACTION_SERIALIZABLE`, but, not surprisingly, it is also the most expensive. Use the lowest level of isolation that provides the safety required by the application requirements. In many cases, `TRANSACTION_READ_COMMITTED` provides a sufficient level of isolation in that only committed data is accessible by other beans. Transactions should also be kept to the smallest scope possible. However, this can sometimes be difficult to implement using container-managed transactions because you can give each method only a single transaction setting for the entire deployment. Often, methods are used across different contexts in an application, and you would like the setting to differ in various situations. For this, you need to use bean-managed transactions and control this aspect yourself. However, a nice benefit of the Session Bean to Entity Bean pattern is that Entity Beans are usually invoked within a transaction initiated by the Session Bean. In this case, a transaction setting of `TX_SUPPORTS` works in most cases because a transaction will have already been initiated if needed.
**Session Beans**
Stateless Session Beans are the most efficient type of Enterprise JavaBean. Because the beans are stateless, the container can use a single instance across multiple client threads; thus, there is a minimal cost to using a stateless Session Bean both for the individual transaction and the overall application scalability. Remember that this is not always the case with Entity Beans due to the potential for activation and passivation. The container implementation also has the option to pool instances of stateless Session Beans for maximum efficiency.
A stateful Session Bean is particular to the client that created it. Thus, there is a fixed cost for the individual transaction that uses a stateful Session Bean. Stateful Session Beans are sometimes used as an interface to a remote client that maintains some state about the application. In a Web application, this type of state can usually be stored in the HttpSession, although stateful Session Beans are particularly helpful for thick-client Swing front ends. Note that it is important that the client call the remove method on the stateful Session Bean when it is done; otherwise the container will passivate it for future use, and this adds to its overall overhead.
**BEST PRACTICE** Be sure to remove instances of stateful Session Beans to avoid unnecessary container overhead and processing.
One thing to note is that some J2EE containers, particularly earlier versions, do not support failover with stateful Session Beans, although the major containers are now doing this. Make sure this is the case in your environment if this is a factor for consideration in your application.
**XML**
If an application does a large amount of XML parsing, it is important to look at the parsing method being used to do it. Two of the basic parsing options are the Document Object Model (DOM) and the Simple API for XML (SAX). DOM parsers require much more overhead because they parse an entire XML document at once and create an in-memory object representation of the XML tree. This is helpful if the program requires either significant manipulation or the creation of XML documents. However, if your application simply needs to parse through a document once and deal with the data right away, the SAX parser is much more efficient. It reads through a document once and invokes hook methods to process each tag that it comes across in the document. A document handler is written specifically for the application. It is a little more complicated to write because the hook methods are called without much of the XML tag context, such as the name of the parent tag. Thus, it requires the developer to maintain some state in order to correctly process the document if it contains any nested tags. However, the difference in speed can be noticeable for large documents. The reasoning for this goes back to the initial discussion on object creation and garbage collection. A DOM parser creates a large number of objects underneath the covers. The actual number of objects created is a multiple of the number of XML nodes because objects are created for each attribute and text node of each element.
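To make the SAX model concrete, here is a minimal handler using the standard JAXP and SAX APIs. It makes a single pass over a document, counting elements via the `startElement` hook; because SAX builds no tree, the handler carries its own state. The class name and the sample document are illustrative only:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Minimal SAX handler: one pass over the document, one hook call
// per start tag, no in-memory object tree.
public class ElementCounter extends DefaultHandler {

    private int count = 0;

    // Invoked by the parser for every start tag it encounters.
    public void startElement(String uri, String localName,
                             String qName, Attributes attrs) {
        count++;
    }

    public int getCount() {
        return count;
    }

    // Parses the given XML string and returns the element count.
    public static int countElements(String xml) throws Exception {
        SAXParser parser =
            SAXParserFactory.newInstance().newSAXParser();
        ElementCounter handler = new ElementCounter();
        parser.parse(new ByteArrayInputStream(xml.getBytes()),
                     handler);
        return handler.getCount();
    }
}
```

Note that the handler never sees the parent tag: if nesting mattered, it would have to maintain its own stack of open elements, which is exactly the extra state burden described above.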
Many applications that use XML as a messaging or communications framework will want to manipulate the data in a regular Java object format. There are binding frameworks such as the Java API for XML Binding (JAXB) that can be used to generate classes that can both extract their data from XML and write out their state as XML. These classes can be quite efficient because they know exactly where in the XML their properties belong and thus can avoid some of the overhead of a generic parsing API. These binding packages create a very powerful framework for exchanging data and dealing with it on both sides of a business service or process.
**BEST PRACTICE** If you use XML extensively throughout your application and performance is a concern, choose the most efficient parsing method available to you that meets your requirements. DOM parsers are usually the slowest due to the large number of objects instantiated underneath the covers and their generic nature. If your application simply needs to parse through a document once and deal with the data right away, the SAX parser is much more efficient. Binding frameworks such as JAXB will also be more efficient because they know exactly what they are looking for in the XML or what XML tags they need to create. These types of frameworks are also helpful because they use XML as a data transport but allow programs to access the data through objects.
**Asynchronous Processing**
Asynchronous processing is a strategy that can be used in certain circumstances to alleviate performance concerns. There are a limited number of situations for which this approach can be used; however, in the cases in which it is applicable, it can make a noticeable difference. Executing processes in parallel can be considered if any of the following conditions exist:
- Semi-real-time updates fit within the application requirements.
- There are a number of independent external applications to invoke.
- Application data and the relevant units-of-work can be partitioned.
Asynchronous processing can also be used to provide the benefit of perceived performance. For example, if a Web page is waiting on a response from a lengthy transaction, you may want to display the next page prior to the completion of the overall process to give the user the ability to continue work, thus increasing the perceived performance of the application. The next page might include a confirmation message, some intermediate or partial results, or else just a message informing users that they will be notified upon completion of the process, perhaps by email.
For a parallel processing approach to be effective, each asynchronous process needs to be large enough to make the overhead of a messaging framework, such as JMS, worthwhile. One interesting thing to note about the J2EE environment is that JMS and Message-Driven EJBs are the only mechanisms provided to perform asynchronous processing. Strictly speaking, the EJB specification prohibits applications from managing their own threads. This makes sense when you think about the responsibilities of an application server. It is managing multiple threads for different types of components, and in order to effectively maximize performance and resource utilization, it requires control of the threads being run on a given machine. Thus, an application component cannot explicitly start a new thread in an object.
However, the Java Message Service provides a mechanism that goes through the container to invoke and start other threads. A message can be sent asynchronously from a client, and a component that receives that message can process it in parallel with the execution of the original thread. This strategy is made quite easy by the EJB 2.0 specification, which provides a third type of Enterprise Bean, the Message-Driven Bean. This is an EJB component that is invoked when a particular type of JMS message is received. Thus, for asynchronous processing, a client can send a JMS message, and a defined Message-Driven Bean can be used as a wrapper to invoke additional functionality in parallel.
**BEST PRACTICE** Consider the use of asynchronous processing to alleviate performance concerns in applications with semi-real-time updates, multiple external applications that can be invoked in parallel, or work that can be partitioned into segments. Use Message-Driven Beans and JMS to implement parallel processing in a J2EE container. Asynchronous processing can also be used to increase the perceived performance of an application.
### The Web Tier
JavaServer Pages and servlets are extremely efficient in that they are multithreaded components with a very small amount of overhead. These components provide very useful APIs and functions without causing much of an impact to the performance of the application. Unlike EJBs, little or no thought is required in order to use either of these components with regard to performance. The exception to this rule is of course the use of HttpSession, something that was alluded to numerous times throughout this book. This state maintenance option can impact the scalability and throughput of an application, so careful attention does need to be paid to its use. Nonetheless, the front end of the J2EE platform provides a very efficient, robust architecture for implementing high-quality Web applications.
### Best Practices for J2EE Performance Engineering
A summary of the performance best practices is given in this section.
### Considering Performance throughout the Development Process
A scalable, efficient architecture is essential for high-performance applications. Initial architecture reviews during the early stages of a project can be used to help benchmark and validate high-level performance requirements. Lower-level optimizations can be done later in the process. In general, spread the time spent on performance engineering throughout the process rather than wait until the week prior to deployment to run a load test. Remember that performance problems uncovered later in the process become increasingly more expensive to resolve.
### Minimizing Object Instantiation Whenever Possible
Use lazy instantiation to delay object creation until necessary. Pay particular attention to objects that are serialized and sent over RMI. If you are invoking a remote Session Bean, try to send only the object data that is required for the component method.
### Caching EJB Home Interfaces
For increased performance in applications that use a large number of Entity Beans, consider caching the JNDI InitialContext and EJB Home interfaces. This optimization should be encapsulated within the EJB business object factory so that there is no effect on business object client code. Many application servers provide Home interfaces that are aware of the available, redundant application server instances. However, ensure that this is the case for your environment before using this technique so you don’t lose the load-balancing and failover capabilities of the application server.
### Removing Stateful Session Beans When Finished
Be sure to remove instances of stateful Session Beans when you are done with them to avoid unnecessary container overhead and processing.
### Choosing an Efficient XML Parser Based on Your Requirements
The extensive use of XML in an application can have a noticeable effect on application performance. Choose the most efficient parsing method available to you that will meet your requirements. DOM parsers are usually the slowest due to the large number of objects instantiated underneath the covers and their generic nature. If your application simply needs to parse through a document once and deal with the data right away, the SAX parser is much more efficient. Binding frameworks such as JAXB will also be more efficient because they know exactly what they are looking for in the XML or what XML tags they need to create. These types of frameworks are also helpful because they use XML as a data transport, but you can program against the objects that receive the data.
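The SAX approach can be illustrated with the JAXP API that ships in the JDK. The handler below is a minimal example, not taken from the chapter: it streams through a document once, reacting to each start tag without ever building a DOM tree, which is why memory stays flat for large documents.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// One-pass SAX parse: no document tree is built; the handler simply
// reacts to each element as it streams past.
class ElementCounter extends DefaultHandler {
    int elements = 0;

    @Override
    public void startElement(String uri, String local, String qName,
                             Attributes atts) {
        elements++; // count every element on the fly
    }

    static int count(String xml) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        ElementCounter handler = new ElementCounter();
        parser.parse(new InputSource(new StringReader(xml)), handler);
        return handler.elements;
    }
}
```

A DOM parser handed the same input would first materialize a node object for every element and attribute before the application saw any data.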
### Asynchronous Processing as an Alternative
Asynchronous processing is an option that can be used to alleviate performance concerns in applications with semi-real-time updates, multiple external applications that can be invoked in parallel, or work that can be partitioned into segments. Use Message-Driven Beans and JMS to implement parallel processing in a J2EE container. Asynchronous processing can also be used to increase the perceived performance of an application.
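Outside a J2EE container the partition-and-fan-out idea can be sketched with `java.util.concurrent`; inside the container, each partition would instead be a JMS message consumed by a Message-Driven Bean. The class below is an illustrative stand-in under that assumption.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Work that can be partitioned into segments is fanned out to a pool
// of workers and the partial results are joined at the end.
class ParallelSum {
    static long sum(int[] data, int partitions) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(partitions);
        try {
            int chunk = (data.length + partitions - 1) / partitions;
            List<Future<Long>> results = new ArrayList<>();
            for (int start = 0; start < data.length; start += chunk) {
                final int from = start;
                final int to = Math.min(start + chunk, data.length);
                results.add(pool.submit(() -> {   // one worker per segment
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : results) total += f.get(); // join results
            return total;
        } finally {
            pool.shutdown();
        }
    }
}
```

With JMS the `submit` calls become message sends and the join becomes reply aggregation, but the partitioning logic is the same.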
### Summary
Performance should be considered throughout the development process. The initial focus is on developing a scalable architecture while lower-level optimizations can be saved until later. A typical approach involves a narrow but deep prototype, or proof-of-concept, which executes the communication through all of the layers and primary components of the architecture. Some basic load testing is done at this point to obtain basic performance metrics that help to validate both the high-level performance requirements and the proposed architecture. Performance should also be considered during the design phase because it often involves trade-offs against flexibility and other requirements. The application architecture and design should help to mitigate performance concerns by providing potential migration paths through the use of isolation and encapsulation. A key example of this concept is the use of a business object factory that provides a placeholder to optimize JNDI lookups without affecting the rest of the application code. Other key factors to consider when looking at J2EE performance include the use of Entity Beans and optimal pool sizes, choice of the right XML parser, and possibilities for asynchronous processing.
This chapter covered best practices for performance engineering in J2EE Web applications. The role of performance in the development process was considered and a number of techniques were discussed for the use of specific technologies such as Entity Beans, Message-Driven Beans, and XML. Whereas this chapter helped make your applications run faster, the next chapter addresses a number of best practices used to speed the development of your applications. These best practices focus on the topic of software reuse.
Towards an Ameliorated Approach for Design and Maturity of Cloud Service Technical Activities and Cloud Project Management by Overcoming the Service Scope Creep
Manu A R1,*, Shivanand M Handigund2, Manoj Kumar M3, Dinesha H A1, V K Agrawal4, K N Balasubramanya Murthy1, Nandakumar A N5
1Department of Information Science and Engineering, PES Institute of Technology, VTU, Bangalore, India
2Department of Computer Science and Engineering, M Tech, Vemana Institute of Technology, Bangalore, India
3Jain University, Bangalore, India
4Department of Information Science and Engineering, PES University, Bangalore, Karnataka, India
5Department of Information Science and Engineering, New Horizon college of Engineering, Bangalore, India
*Corresponding author: manu.a.ravi@gmail.com
Abstract In general, the end product is offered as a service over the cloud, and that service is implemented as a software project: a time-bounded endeavor involving scope, risk, schedule, cost, effort, and both human and computing resources. The service is realized through technical activities (TA’s) that are supposed to rest on the managerial activities (MA’s) put forth by PMI, which enhance the values of the quality parameters. The current success rate of cloud service software development is low, largely because the managerial activities devised by PMI are applied unchanged to cloud projects. Our study reveals that the TA’s have no well-defined methodology of their own and are carried out in an ad hoc manner. In the absence of multilateral experts, the project management activities (PMA’s) published by PMI (USA) are used as a de facto standard to improve service quality, while the core TA’s are merely abstracted from the by-products of the managerial activities. There is no pari-passu between the two kinds of activities: deriving TA’s from MA’s amounts to filling in the tables produced by the MA’s with the project inputs, even though both take the same input and output base. The PMA’s are devised in a 2-D space formed by 5 PLC phases and 11 KA’s, and the MA’s serve to enhance the quality parameters attained by the TA’s. There is therefore an urgent need to devise the core TA’s for cloud service software development in their own right.
Cloud computing lets any user connect to a computing service from any place, at any time, with any device, over the internetwork. The end product of a secure cloud computing project is typically software developed in a time-bounded endeavor involving scope, risk, schedule, and human and computing resources. In a cloud system SDLC, engineers and stakeholders adopt secure technical activities based on the managerial and operational activities designed by the Software Engineering Institute (SEI), NIST, and the Project Management Institute (PMI); the tables, metrics, standards, guidelines, and rules of these managerial activities are used to identify the activities required for the ensuing software, termed here the technical activities. Taking the work division of the cloud computing system as our setting, we explore project scope management [6,7]. We then explicate scope creep versus progressive elaboration, discuss the common causes of scope creep in product/project services, present steps to prevent scope creep, and draw lessons from a case study.
Keywords: cloud service project scope creep, cloud service project management, cloud service management, cloud service web crawlers, and cloud agility
Cite This Article: Manu A R, Shivanand M Handigund, Manoj Kumar M, Dinesha H A, V K Agrawal, K N Balasubramanya Murthy, and Nandakumar A N, “Towards an Ameliorated Approach for Design and Maturity of Cloud Service Technical Activities and Cloud Project Management by Overcoming the Service Scope Creep”.
1. Introduction
The current success rate of cloud computing software development is derisorily low. The root cause of this low rate is the unmodified reuse of managerial activities that PMI developed for traditional computing systems and that do not suit present-day cloud computing systems. Our study reveals that there is no standard methodology for carrying out the technical activities of a cloud computing service project: they do not exist in their own right. Deriving the technical, operational, and managerial activities from the managerial activities alone dilutes the effect of the technical activities. In the absence of experts, the cloud service project management activities developed by PMI (USA) are used as de facto standards [1] to enhance the quality of the core activities. Unfortunately, many companies today derive the core activities from their own managerial activities, ignoring their own vision, mission, and objectives [7,8]; this has become ornamental, whereas the technical core activities are business- and domain-dependent and should be developed as such. Because the technical activities are abstracted from the by-products of the managerial activities, cloud service development projects have neglected a proper core methodology for developing them, and there is no pari-passu between the two types of activities. Developing the TA’s on the basis of the managerial activities means filling in the tables produced by the managerial activities with the project inputs; at best this yields a program, not software. Both technical and managerial activities take the same input and output base.
These PM activities have been developed in the 2-dimensional space formed by the 5 cloud service project life-cycle phases (PLC’s) and eleven knowledge areas (KA’s). The managerial activities enhance the quality parameter values; that enhancement presupposes existing values, which are attained by the technical activities [9,10,11]. Currently there are no automated tools, techniques, or methodology for developing the technical activities required for CCS software development. This section presents our methodology for the core activities, based on the objectives and input features. Managerial and technical activities are dimensionally orthogonal to each other; both must be brought to a common dimensionality, and the only way to do so is to transfer them into the 3-dimensional space in which we blend them together. As per the PMI glossary in the PMBOK guide (2008) and later publications, project scope management comprises the processes required to ensure that the project includes all the work required, and only the work required, to complete the project/service successfully.
Scope creep (dimensional crawling) is the accumulation of features and product/service functionality without addressing the effect on cost, schedule, computing resources, and effort, and without sponsor endorsement. Progressive elaboration, by contrast, uses continuous integration and continuous delivery (dev-ops, sec-ops) and agile methodology for continual improvement: with each iteration the plan becomes more comprehensive and the estimates more precise as the system evolves. Scope creep is managed through replacement and refactoring of the architectural design (AD), with extended time allotted to design services and fabrication management to improve the AD [12].
The main reasons for scope creep of a product/project service are poor project planning with respect to communication management, disaster recovery, risk, and quality management. Unproductive project management, including weak stakeholder management, a weakly documented and poorly managed service scope, undocumented exploratory and ad hoc assumptions [13,14], and ineffective monitoring and control processes, likewise leads to scope creep.
1.1. Preface
Cloud computing project service management is a buzzword in the IT industry, yet the success rate of cloud service projects is pathetically low. Root-cause studies [15,16] indicate that software companies blindly take the managerial activities developed by PMI as the foundation for the core activities, i.e. the technical and operational activities [17,18]. Neither academic nor industrial researchers [19,20] have yet identified the precise technical activities required for secure architectonic design and cloud service project management, because those activities depend on the individual project or service: on its vision, mission, objectives, and input features.
1. Managerial control over the architectural design requires the TA’s to be in place; currently, quality-enhancement managerial activities [21,22] are used to produce the basic quality requirements without any technical activities, which distorts the CCS software development process.
2. The bare PMA’s are not automatable, which adds further struggle to the software process. Moreover, most PMBOK administrative actions are not suited to computing-system software development projects, since the end software product is intangible; this necessitates the presence of SDLC stages.
3. The realization of most tools and techniques suggested for the PMBOK managerial activities is left to expert judgement; such methodologies have no engineering footing [23,24,25].
4. The number of technical and managerial activities is asymptotically bounded by O(nm); to blend the two types of activities, they must be decomposed into micro- and nano-sized actions so that they become dimension-free [26,27].
5. The project service involves a gestalt of tightly coupled technical activities developed across the SDLC stages, since these activities determine the state-of-the-art business process and attain the quality parameters [28,29,30].
6. Scope creep due to management activities must be avoided [31,32,33].
We are reversing this distorted process with an architectonic one: we build a methodology for abstracting the conceptual notion of the technical activities from the diverse input characteristics and QoS objectives, and then blend the technical activities with the appropriate managerial activities once both have been made dimensionally the same [34,35].
The vision of this work is to eliminate the misappropriation of managerial activities to generate technical activities. The mission is to design and develop a methodology for abstracting the technical activities from the objectives and input features and blending them with equipollent managerial activities.
Objectives include:
- To develop technical activities by establishing a correspondence between the semiotics of the input features and the objectives [36,37,38].
- To blend equipollent managerial and technical activities [39].
- To design and develop the network of activities for CCS software projects.
The main motivation for this work is that, although PMI has developed well-defined managerial activities in the 2-D space of PLC phases and KA’s, this does not help to identify the technical activities required for software projects. When the technical activities are derived from the tables generated by the managerial activities, many pitfalls arise, as can be witnessed in the current CCS software development scenario. The resulting recession in the software industry is a bottleneck for the use of software in all stages of the CCS computing life cycle.
The very purpose of the CCS professional managerial activities is to augment the quality features attained by the technical activities in place. Most managerial activities are left to multilateral expert judgement: they are an art. A technical activity, by contrast, should be turned from an engineering art into an engineering activity via science, but this is impossible while the art remains unstable, for there is no unique art of developing a business process, and technical activities cannot be engineered from such volatile art. A third pitfall is that the two kinds of activities are dimensionally orthogonal to each other, so blending them naively ruins the backbone of either the managerial or the technical activities. The fourth pitfall is that developing the technical activities from the managerial activities means filling in the tables produced by the managerial activities with the project input, which at best yields a program, not computing-system software. The fifth pitfall is that PMBOK activities are mere managerial activities whose adoption serves to enhance the quality of technical activities. For projects other than software development, the technical activities are fewer, with micro-service-oriented architecture design and loosely coupled CCS services; in such cases the use of the PMBOK [1] managerial activities in their entirety may be indispensable. For software development projects, however, the technical activities are either nonexistent or currently carried out by arbitrary human skill under adherence to the managerial activities. The technical activities are voluminous, project-specific, and organized by SDLC stages, so to blend the two voluminous sets, each activity must be made dimensionally equipollent. This is possible only if the correspondence between PLC phases and SDLC stages maintains the atomicity of activities, or the decomposition follows common principles [30,40].
Managerial Activities
\[ \bigcup_{i,j=1}^{m} (PLC \text{ Phase}_{i} \cap \text{Knowledge Area}_{j}) \]
Technical Activities
\[ \bigcup_{i,j=1}^{m} (SDLC \text{ Stage}_{i} \cap \text{Knowledge Area}_{j}) \]
Operational Activities
\[ \bigcup_{i,k=1}^{m} (SDLC \text{ Stage}_{i} \cap \text{System Lifecycle}_{k}) \]
Technical/Managerial Activities
\[ \bigcup_{i \in \text{SDLC Stages}} \; \bigcup_{j \in \text{PLC Phases}} \; \bigcup_{k \in \text{KA's}} \big( SDLC \text{ Stage}_{i} \cap PLC \text{ Phase}_{j} \cap \text{Knowledge Area}_{k} \big) \]
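The activity spaces defined by these unions of intersections can be pictured as Cartesian grids of cells, one cell per combination of life-cycle phase and knowledge area. The sketch below enumerates such a 2-D grid; the phase and area names are placeholders, not the actual PMBOK lists.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative enumeration of a 2-D activity space from the formulas
// above: one cell per (PLC phase, knowledge area) pair. With 5 phases
// and 11 knowledge areas, PMBOK's grid would contain 55 such cells.
class ActivityGrid {
    static List<String> cells(String[] plcPhases, String[] knowledgeAreas) {
        List<String> grid = new ArrayList<>();
        for (String phase : plcPhases) {
            for (String area : knowledgeAreas) {
                grid.add(phase + " x " + area); // one activity cell
            }
        }
        return grid;
    }
}
```

Adding the SDLC stages as a third loop would produce the 3-D space discussed later in the paper.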
Moreover, most PMBOK administrative actions are not appropriate for cloud computing software development and service implementation projects, since the end software product is intangible.
Sixthly, the number of technical and managerial activities is asymptotically bounded by O(n); to blend the two types of activities, they must be decomposed so as to become dimension-free. The project involves a gestalt of tightly coupled technical activities developed across the SDLC stages, since these activities determine the state-of-the-art business process.
Configuration management plays a vital role in maintenance. Blending the technical, managerial, and operational activities therefore requires all activities to be dimensionally the same, which in turn requires composing activities at different levels of the hierarchy: the activities and their units must be determined and blended together. The managerial activities of PMI cannot be applied wholesale; they are only partially carried out alongside technical activities that are scattered over the SDLC stages. Lastly, managerial and operational activities equipollent to the technical and operational activities must be determined for CCS service software development projects. The technical activities are thus determined purely from the objectives and input features; the semiotic gap between the two paves the way for the design of the technical activities. Equipollence between activities can be created through their decomposition and reorganization into manageable unit activities. This is written as
**CCS Operational/Technical/Managerial Activities**
\[
CCS \text{ SDLC Stages} \cap CCS \text{ PLC Phases}, \quad (15.6)
\]
The managerial activities cannot be effectively blended with the technical activities while their attributes are dimensionally orthogonal to each other [42,43]; dimensional equipollence must be established first. These pitfalls and lacunae motivated us to develop, first, a methodology for abstracting the technical activities from the input features and project objectives, and then to blend the appropriate equipollent managerial activities with them. In parallel, a careful study of the PMBOK managerial activities identifies only those suitable for software development projects and the hiatuses left by the unsuitable ones [44]. The managerial activities are developed in the 2-D space of PLC phases and knowledge areas (KA’s). To enhance project quality optimally, dimensional equipollence between the two types of activities is established and each pair of pari-passu activities is blended to form the project activities. This means the technical activities must be decomposed to be amenable to the PLC’s and KA’s, and the managerial activities must be decomposed to suit the SDLC stages.
2. Related Work
In the literature, the PMBOK guide published by PMI (USA) [1] has been considered a de facto standard: 42 benchmark activities seeded with their planned inputs and outputs, automation tools, technologies, and techniques, spread evenly over the 2-D space formed by the KA’s and PLC phases, as endorsed in PMBOK. The treatise fits common traditional computing projects but does not suit secure cloud computing service projects. Furthermore, most of its practices rest on multilateral expert judgement, whose success depends on the experts’ in-depth insight and cannot be shaped into a methodology. This demands a systematic amendment of the project activities to suit cloud computing service project management. Moreover, PMBOK identifies only general activities, not project TA’s. In this work an attempt is made to design the software project technical activities and the associated managerial activities and to place them in a 3-D space formed by 10 KA’s, 5 PLC’s, and 7 SDLC stages, and to abstract the software project activities from the cloud service project inputs and the goal-oriented objectives of the project.
The authors of [2] note that the existing PMI guide acknowledges only some of the project activities and stresses merely the professional managerial and administrative aspects of traditional software development projects, not the technical or methodological aspects; these do not suit present-day complex cloud computing missions. The actions offered are also too general to be developed and executed, and the guide does not identify tools and techniques for automating the management processes of the activities identified in [2]. The author of [3] describes experience of software project management in book form, considering project activities on a PLC and SDLC hypothesis suited to information systems. Such an information-system view covers only a small subset of a cloud computing software venture; it does not suit service projects undertaken from market demand, where legal requirements call for firmware-style development. Hence [3] cannot be applied to modern cloud computing projects. Guides that treat cloud service mission management as an integrated scaffold likewise consider project activities on an SDLC hypothesis that may fit the design and development of traditional information systems; that subset is far too small for cloud system software, and it is not applicable to service missions contracted and executed from CCS market demand under legal obligation.
Since that book considers the design and development of virtual ware, firmware, software, and hardware offered as a service, its concepts cannot be applied to cloud computing software projects.
The book titled “Managing Information Technology Projects” recognizes the traditional PMA’s and positions them in a 2-D space formed by the system development life cycle, security life cycle, SDLC, and PLC’s, which falls short of the above challenges; on the other hand, it sheds light on the panorama of the actions.
The work titled “SPM in Practice” discusses the IT engineering practices of conventional software project management in the author’s own company. Such company-specific industrial skill cannot be generalized, since each engineering business has its own organizational practices, processes, resources, and CMM level, and most actions of an IT business are protected as proprietary data. The practices of individual corporations therefore cannot be globalized to cloud computing software ventures, so the work is not helpful for abstracting business-independent project activities (PA’s).
In the manuscript titled “Software Project Management”, the authors [52] identify and acknowledge some of the project actions and activities, but they neither form a topology of the activities nor position them in any project-related space; moreover, no technical details are available for these activities.
The prototype utilizes the approaches of the PMBOK (Project Management Body of Knowledge) published by PMI, which contains 47 de facto standard activities. It presents the MA’s in a 2-D space of 11 KA’s and 5 PLC phases; these are very general in nature, and only a subset suits software development activities. An attempt is made to devise the TA’s and MA’s of the computing software activities and place them in 5 PLC’s and 7 SDLC stages, and to spot the relevant and inappropriate MA’s among the PMBOK project activities for cloud computing software development. The authors of [2] spotted a few project activities but stressed only the MA’s for software development projects, not the TA’s; the actions discussed in [2] are too general to implement, and no automation tools or techniques are located for them. The author of [5] positions the PMA’s in a 2-D space of SDLC and PLC that is not clearly defined or distinguished; pragmatically the PLC is enclosed in the SDLC, yet the two are orthogonal to each other. We instead blend the TA’s with the MA’s and place them in a 3-D space of 10 KA’s, 5 PLC’s, and 7 SDLC stages. References [6] to [38] provide a good source on various aspects of software engineering project management, from the SRS through testing, including engineering management, SRS design, OOAD concepts, and UML diagrams; references [39] to [50] serve as guidelines for blending the various software engineering activities into a good project management model.
2.1. Taxonomy
Procedure to control scope creep:
Secure cloud computing system (CCS) service implementations are designed and managed using the service SRS and SLA as the fundamental supporting documents for service design. The SRS and SLA generally comprise the particulars of the secure implementation of the computing system, from which an appropriate secure service implementation can be designed, developed, and implemented at the root level, i.e., in the architecture and design of the computing system. Securing the system at the design and architectural level makes it more secure at the root level and helps to plan proper security and service product/project management at each layer of a complex cloud computing system. These details can be viewed through various pragmatics such as computing tasks/jobs, job processes, or use cases. Each use case is an exclusive viewpoint, and the sum of all use cases forms the tight security of the computing system [45]. This work presents an effort to abstract useful components from the statements of all actors in unformatted, unstructured SRS text. Although the object-oriented paradigm is a boon to the development of information systems, it is a common observation that there is no single concrete methodology for designing the scope creep management of a CCS project. As a result, developing an information system under this paradigm depends more on human skill than on a systematic methodology. This research aims to bridge the gap between the available SRS and the design of the system through the abstraction of object methods and of useful components of the activity, sequence, class, and work process diagrams [46,47].
One should define and narrate the objectives of the cloud computing project/service in the documentation by collecting and collating the service requirements documentation and the SLA requirement management plan. In addition, the scope management plan should be aligned with the project scope: project/product objectives, service SLA requirements, QoS standards and procedures, and the strategic objectives of the cloud computing organization. Scope creep can also be controlled by performing periodic reviews of the service boundary baseline using logs and service monitoring and controlling, based on project/product performance monitoring with variance analysis and earned value management techniques. Apart from these, it can be controlled using a CI-CD integrated change control process: reviewing, approving or rejecting, documenting, and implementing changes to the scope management plan. The scope is verified by formal acceptance of the completed CI-CD deliverables; if a deliverable is rejected, the reason for non-acceptance is documented. Any change requests made are likewise documented, with proper notes and reasons, using punch lists. In the service/project closure phase, the final tasks performed are the project scope review, the closure documentation, and a final agile stand-up with a scrum retrospective meeting to post-mortem the lessons learned throughout the project service execution.
A cloud computing system involves a multilateral change control board (CCB) on which all the multilateral stakeholders are formally represented. The CCB is accountable for all activities involving estimating, verifying, validating, approving, deferring, halting, or declining changes to the project service, with all multilateral decisions and proposals recorded. Scope substantiation (SS) and QoS control: SS is primarily concerned with the acceptance of deliverables, whereas QoS control is mainly concerned with correctness and is executed before SS is started.
Figure 1 shows the growth of scope creep with respect to tasks, budget, and total duration, plotted over a 5-year period.
SRS: the software requirement specification is a document, written in English, generally prepared by the client organization, containing detailed information including an overview of the computing system. It covers both the functional and non-functional service requirements of the system, the interacting actors, interfaces, prototypes, rules, POC's, service constraints, etc. [48].
Syntactic taxonomy: the syntactic rules communicate the symbolic connotation in UML. The syntactic taxonomy encompasses the model elements and their symbolic representations [49].
Semantics: organizes the symbolic models and their elements into meaningful clusters according to the semantic rules. It organizes diverse signs into a significant entity as an ingredient of pragmatics [50].
Semiotics: each language (visual, pictorial, verbal, or programming) has well-defined syntactics, semantics, and pragmatics.
Pragmatics: the practical aspects, phrased as current practices.
Project service vision: the service foresight of prudent, premeditated service management for the future, without the involvement of any actions or activities.
Service mission: the specific job/task/process or purpose needed to realize the quoted vision [51].
Objective goal: the broad service framework for realizing the activities of the mission, as noted in Figure 2.
Service strength: the factors or characteristics which facilitate moving the input syntactics forward towards the objective syntactics.
SDLC (software/system development life cycle): an architectonic design of actions (acceptable to the mission) arranged in the ordered stages of CCS software planning and maturity. Its connotation is restricted to the progression of the computing information system.
Project service life cycle phases (PSLC): bouquets of behaviour used in the assorted progressive steps of computing system projects. The PSLC stages are: initiating, scheduling and forecast planning, executing, monitoring and controlling, and closing.
Skill and knowledge areas (KA's): based on the tasks, project management is grouped into diverse KA's; for cloud computing software development project services, the actions are bunched into 10 KA's, viz. computing system project service integration management, cloud service project scope and dimension management, cloud service project time schedule management, computing system project cost, outlay, pricing and billing management, computing system project quality of service management, computing system project human resource and computing resource management, project service communication systems and network management, project service risk, threat and security management, project/service resources procurement management, and project service or system configuration management.
PA's (project activities) are carried out throughout the evolution of the project service to accomplish the associated vision, mission, and assorted objectives; they may comprise technological, operational, and managerial activities.
Activity: a business process consists of a succession of assorted actions/activities. The activity is the principal entity that participates in use cases, work/jobs/tasks/procedures, and effort [26].
Activity
= work / jobs / tasks progression $\cap$ use cases
= work process (where the work lies in the SDLC stages).
2.2. Proposed Methodology
2.2.1. Problem Statement of the Computing System Project
To plan, design, and develop an ameliorated methodology for the discovery of cloud service PA's, comprising the blend of technological and professional administrative activities.
Input: software requirement specification (SRS), together with the vision, mission, and objectives of the computing project service.
Output: the total reticulation of TA's, as specified in the cloud service objectives.
Tools Devised:
- Directed control flow table (CFT)
- Data flow table (DFT)
- Synonymies
- SWOT (Strengths, Weaknesses, Opportunities, Threats)
2.2.2. SRS
An SRS is a complete description of the anticipated function and environment of the computing system software under consideration for development. The SRS in general also contains the vision, mission, and objectives of the computing system project [33].
Summary of the proposed methodology: in this project we aim to develop a methodology for the design of technical activities from the input features and objectives. The client provides the SRS to the strategic manager, who abstracts the vision, mission, and objectives. From the objectives, the semiotics are abstracted. The synonymies are then identified from the input features, the closure from the input features to the objective features is formed, a SWOT analysis is performed considering the prototype, and finally the technical activities are blended with the appropriate managerial activities [34].
Functional requirements [35]:
- To abstract semiotics from each objective,
- To identify the synonyms based on the objectives,
- To abstract referenced-defined attributes from the input,
- To design the CFT using the ref-def table,
- To design the DFT in control flow order from the CFT,
- To identify the synonyms based on the input features,
- To perform the closure operation from the input features to the objectives,
- To perform SWOT analysis,
- To blend the technical activities with the appropriate managerial activities.
Non-functional requirements (NFR) [30]:
Here an attempt is made to substantiate the QoS factors in CCS project management and to outline the reticulation of the PA's.
Enterprise environmental requirements:
Every project is affected by external factors such as time constraints.
Operating environment requirements: experience reveals that no methodology exists to carry out the technical and managerial activities of a project. In the absence of experts, the project management activities of PMI USA are chosen as de-facto standards for the software development project service process. These PMA's were developed for general-purpose projects, in a 2-D space formed by the five project life cycle phases (PLC's) and 11 KA's. Normally, blending the management activities with the appropriate technical activities enhances the quality of the technical activities; the mere use of managerial activities alongside technical activities, without dimensional blending, results in derision. This is presently the prevailing scenario in our IT industries. Moreover, these administrative actions are not entirely appropriate for cloud computing system software development service projects, and many of the tools and techniques are left to expert judgement, with no methodology or procedure indicated. No one has developed the technical activities required for a software development project. There is therefore a need to develop these technical activities based on the objectives and input features, and to blend them with the appropriate managerial activities so as to make them dimensionally equipollent to each other.
Performance requirements: software project management suffers from a lack of technical activities and from passive managerial activities. These hindrances result in a pathetically low success rate for computing software projects. The lacuna exists because CCS software engineering concepts are not developed on a strong foundation of mathematical rigor. Software project management should be reshaped with the introduction of technical activities and the appropriate refinement of managerial activities [51].
Standard requirements: the standard requirements for the abstraction of the technical activities are: 1. the objectives must be considered for the abstraction of the technical activities; 2. for each activity, the particular input and output need to be identified.
2.3. Quality Parameters in Software Project Management
The existing system does not satisfy the quintuplet of features (correctness, completeness, efficacy, efficiency, and robustness) of software project management.
Correctness:
Existing system:
- At present, technical activities (TA) are developed based on the managerial activities.
Managerial activities are comparable to a changing art; there is a long way to go from art to engineering, and this volatile art cannot be transformed into an engineering discipline, so the technical activities cannot be derived from it.
Managerial activities for general projects are in tabular form, and attempts are made to fit the technical activities to this tabular form.
Justification:
- We develop a tactic for the discovery of TA's first; for these technical activities we then refine the managerial activities.
Completeness:
- Existing system: a project is complete if it meets all of its objectives. Currently the objectives are merely ornamental; there is no process to authenticate the completeness of the project [52].
In our work, we identify the technical activities based on the objectives and the input features.
Robustness:
Existing system:
- In the existing system there is no reticulation of the blended activities.
Justification:
- We reticulate the activities and blend them with equivalent dimensional equipollence.
Efficacy:
Existing system:
Technical activities should be decided based on the objectives; at present they are developed based on the managerial activities, which is superficial, so the efficacy depends on human skill.
Efficiency:
Existing system:
The wrong policies are adopted, without regard for quality.
We try to enhance the quality.
Technical activities and managerial activities are dimensionally orthogonal to each other; mixing them without blending ruins either the bones of the managerial activities or the bones of the technical activities.
3. Procedure to Mould Cloud Computing System SRS
The cloud computing software requirement specification is a document written in plain English. It is a comprehensive manuscript prepared by the customer corporation, for its computing information system, involving a complete synopsis of the computing system under consideration. It also contains the functional requirements and NFR's of the web-based cloud computing system, its actor/entity interfaces, control constraints, prototypes, etc. On the whole, the SRS is an assortment of the business requirements of diverse clustered stakeholders (the end users of the computing organizations). The SRS encloses a branch of the computing system corporation's mission charter. This leads to the official launch of the computing system service software development project, which controls the niceties of the outlay, calendar, and mission to be developed. It is jointly prepared and blended by the client and the organization. The SRS contains a number of synonyms; moreover, each individual codes the requirements by naming the items with context-specific names. Thus the SRS contains names such that the same name, coded by different users, carries a different meaning; such names or name phrases are termed heteronyms.
Input: SRS
Output: moulded statements, with statement numbers.
i) Read the SRS in sequence; if the statements are compounded with 'and' or 'or', decompose the merged complex statements into simple multiple statements.
ii) Translate passive voice to active voice, so as to convert auxiliary and supplementary verbs into the analogous intransitive and transitive verbal representation.
iii) Assign consecutive numbers to each statement.
3.1. Procedure for Moulding the SRS
The SRS is a document that includes the FR's and NFR's, defines an overview of the computing system in plain English, and also contains actor interfaces, restrictions, POC's, prototypes, etc. It contains the project charter, with the cost, schedule, and all other details of the computing system. On many occasions it is multilaterally prepared and blended by the service consumer and producer organizations.
Input: software requirement specification.
Output: moulded statements, numbered.
1. Scan and read the statements sequentially; decompose the statements compounded with 'and'/'or' into simple, tokenised multiple statements.
2. Convert passive to active voice, in order to transform auxiliary, impersonal, and intransitive verbs into transitive verbs.
3. Number the statements consecutively.
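The decomposition and numbering steps above can be sketched in Python. This is a minimal sketch under simplifying assumptions: statements are split on sentence punctuation and on the conjunctions 'and'/'or', and the passive-to-active conversion (which requires NLP) is omitted; the function name `mould_srs` is ours, not from the procedure.

```python
import re

def mould_srs(srs_text):
    """Split an SRS text into numbered simple statements.

    Compound statements joined by 'and'/'or' are decomposed into
    separate statements; each result is then numbered consecutively.
    """
    statements = []
    for sentence in re.split(r'(?<=[.;])\s+', srs_text.strip()):
        sentence = sentence.rstrip('.;').strip()
        if not sentence:
            continue
        # Decompose merged complex statements on coordinating conjunctions.
        for part in re.split(r'\s+(?:and|or)\s+', sentence):
            if part:
                statements.append(part.strip())
    # Assign consecutive numbers to each statement.
    return list(enumerate(statements, start=1))
```

For example, "The system validates login and logs the attempt." would be moulded into two numbered statements.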
Guidelines to design the referenced-defined table
An attribute is referenced in a statement if its value remains unaltered on execution of the statement; it is defined if its value is altered on execution of the statement.
Input: SRS statements with statement numbers.
Output: referenced attributes, defined attributes.
Reorganize the statements such that the statements containing the defined attributes precede the statements referencing those attributes. This is represented in tabular form as indicated below.
<table>
<thead>
<tr>
<th>Statement number</th>
<th>Referenced attributes</th>
<th>Defined attributes</th>
</tr>
</thead>
</table>
Table 1. Referenced-defined table
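The reorganization rule behind the table (every attribute defined before it is referenced) can be sketched as a simple topological pass. The input representation `(number, referenced, defined)` and the function name are assumptions of ours, not part of the original procedure.

```python
def build_ref_def_table(statements):
    """Reorder parsed statements so that defining statements precede
    referencing ones.

    Each input item is a (number, referenced_attrs, defined_attrs) triple.
    """
    ordered, defined_so_far, pending = [], set(), list(statements)
    while pending:
        progressed = False
        for stmt in list(pending):
            num, refs, defs = stmt
            # A statement is ready once all attributes it references are defined.
            if set(refs) <= defined_so_far:
                ordered.append(stmt)
                defined_so_far |= set(defs)
                pending.remove(stmt)
                progressed = True
        if not progressed:  # cyclic or undefined references: keep original order
            ordered.extend(pending)
            break
    return ordered
```

A statement referencing `x` is thus placed after the statement defining `x`.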
3.2. Control Flow Graph
Procedure to design control flow table (CFT)
English is a language of millions of words with a lot of flexibility, and the SRS is documented in English. Normally the SRS is a collection of the requirements of different end users of the organization, coded in English with cultural flexibility. Thus the document includes different words with the same meaning; the collection of such words is termed synonyms. The SRS also contains names such that the same name, coded by different users, carries a different meaning; such names or name phrases are termed heteronyms. It is necessary to reorganize the entire document so that attributes are referenced only after they are defined; hence the need for a control flow graph.
CFG: a control flow graph is a directed graph G(V,E) where each vertex v ∈ V is a statement, or a cluster of sequential statements of the program, with one source vertex (in-degree zero) and one or more destination vertices (out-degree zero), and each e ∈ E is a directed edge connecting v_i to v_j.
Establish the logical succession order of the SRS statements, which is represented in the form of the CFT.
Input: Ref-Def table
Output: CFT
Procedure: i. As each statement is read, sort the QoS attributes in lexicographical order as shown in the table.
<table>
<thead>
<tr>
<th>Attribute (in lexicographic order)</th>
<th>Referenced in statements</th>
<th>Defined in statements</th>
</tr>
</thead>
</table>
Table 2. Intermediate table
ii. Read the referenced-defined entries from the ref-def table.
iii. Create a three-column table as shown above; enter each statement number in the row of the corresponding QoS attribute, in the referenced or defined column as appropriate.
iv. If a defined entry follows a referenced entry, delete the defined entry and record the referenced number in the intermediate table.
v. If a referenced entry follows a defined entry, delete the referenced entry and record the defined number in the intermediate table.
vi. Sort the entries of the CFG with jump 1 as the primary key and start as the secondary key, as in the table below.
<table>
<thead>
<tr>
<th>Start</th>
<th>Jump</th>
</tr>
</thead>
</table>
Table 3. Start-jump table
vii. Enter the defining statement number in the start column and the referencing statement number in the jump column of the table above.
viii. If a defined entry does not follow a referenced entry, or a referenced entry does not follow a defined entry, it is left unconsidered.
ix. For the final CFG, create a table with four columns: start, end, jump 1, and jump 2, as shown below.
<table>
<thead>
<tr>
<th>Start</th>
<th>End</th>
<th>Jump 1</th>
<th>Jump 2</th>
</tr>
</thead>
</table>
Table 4. Control flow table
x. In the final CFG, the first (start) column contains the statement number of the start feature.
xi. The second (end) column contains the last statement number.
xii. The jump columns contain the jump and alternative jump entries, indicated by the statement numbers of the referenced and defined columns.
xiii. Repeat this procedure until the final referenced or defined statement in the table.
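The CFT construction above can be approximated in code. This sketch assumes the ref-def rows are available as `(statement number, referenced set, defined set)` triples; it emits (start, jump) edges from each defining statement to its later referencing statements and sorts them with jump as the primary key, as in step vi. The distinction between jump 1 and jump 2 is simplified away.

```python
def build_cft(ref_def_rows):
    """Derive (start, jump) control flow edges from ref-def rows.

    An edge runs from the statement that last defined an attribute
    to each later statement that references it.
    """
    last_def = {}
    edges = []
    for num, refs, defs in ref_def_rows:
        for attr in sorted(refs):  # lexicographic order, as in step i
            if attr in last_def:
                edges.append((last_def[attr], num))
        for attr in defs:
            last_def[attr] = num
    # Sort with jump as the primary and start as the secondary key (step vi).
    return sorted(edges, key=lambda e: (e[1], e[0]))
```

For three statements where statement 1 defines `a`, statement 2 references `a` and defines `b`, and statement 3 references `a` and `b`, the table contains the edges (1,2), (1,3), and (2,3).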
Figure 4. Data flow diagram through the CCS PMA and PA
3.3. Data Flow Graph (DFG)
Procedure to design the DFT: we reorganize the ref-def table in control flow order to form the data flow table. A DFG is a directed graph \(G(V,E)\) over the set of vertices \(v \in V\), where \(v_i\) represents the referenced and defined attributes of statement \(i\), and each edge \(e \in E\) connects a statement at the head of the arrow that is data-dependent on the statement at the tail of the arrow. In the intervening statements, the definition of \(i\) is preserved.
Input: ref-def table, CFT.
Output: Reorganized ref-def table in the control flow order (DFT).
Steps for building the DFT in control flow order:
1. Start with the statement number in the first column of the CFT (entries from the table above).
2. For each statement, identify the referenced and defined attributes in the appropriate column (entries from the table above).
3. Continue this procedure until the statement number referenced in the second column of the CFT, in the same row, is reached.
4. Then start with the statement number in the third column until it returns to the next statement; after the second-column statement number, continue until the end of the program is reached.
5. Repeat this procedure for all the entries in the CFT.
6. Reorganize the ref-def table in control flow order to form the data flow table, as indicated in the table below.
<table>
<thead>
<tr>
<th>Statement number</th>
<th>Referenced attributes</th>
<th>Defined attributes</th>
</tr>
</thead>
</table>
Table 5. Data flow table
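Assuming the CFT traversal yields an execution order of statement numbers, reordering the ref-def table into the DFT is a simple lookup. The dictionary representation below is our own choice, not prescribed by the procedure.

```python
def build_dft(ref_def, cft_order):
    """Reorder the ref-def table into control flow order (the DFT).

    ref_def: dict mapping statement number -> (referenced, defined).
    cft_order: statement numbers in the execution order derived from the CFT.
    """
    return [(num, *ref_def[num]) for num in cft_order if num in ref_def]
```

Each DFT row then carries the same three columns as Table 5: statement number, referenced attributes, defined attributes.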
3.3.1. Procedure to Resolve Synonymies
Humans always think in context-specific terms; because of the resulting flexibility of English, different people use the same word with different meanings, called synonymies.
Input: DFT.
Output: Synonymies in tabular form.
1. Read the referenced attributes in tabular form.
2. Search for similar attribute entries in the DFT (table above).
3. Make the appropriate entry in the synonymies referenced intermediate table, as indicated below.
4. Mark the entries containing homogeneous attributes.
5. Continue this procedure until the end.
6. Repeat this procedure for the next unmarked entry until all unmarked entries are marked.
<table>
<thead>
<tr>
<th>Statement number</th>
<th>Referenced Attribute set in Lexicographic order</th>
<th>Statement number containing similar referenced attributes set</th>
</tr>
</thead>
</table>
Table 6. Synonymies Referenced Intermediate table
7. Repeat steps 1 to 6, swapping the defined and referenced attributes of the DFT, and make the entries in the synonymies defined intermediate table below.
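A minimal sketch of the synonymy grouping follows, under the assumption that statements whose referenced-attribute sets coincide (compared in lexicographic order) are candidate synonymies; the matching criterion in the procedure above ("similar" entries) is looser than this exact match.

```python
def group_synonymies(dft_rows):
    """Group statement numbers whose referenced-attribute sets match.

    dft_rows: list of (statement_number, referenced, defined) triples.
    Mirrors the synonymies referenced intermediate table: statements
    with identical referenced sets are recorded together.
    """
    groups = {}
    for num, refs, _ in dft_rows:
        key = tuple(sorted(refs))  # lexicographic order, as in Table 6
        groups.setdefault(key, []).append(num)
    # Keep only keys shared by more than one statement: candidate synonymies.
    return {key: nums for key, nums in groups.items() if len(nums) > 1}
```

Running step 7 amounts to calling the same function with the defined attributes in place of the referenced ones.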
3.4. Abstraction of Technical Activities
3.4.1. Procedure for Abstraction of Technical Activities
Input: SRS objectives list of prototypes, CFT, DFT.
Output: Technical Activities.
1. Identify the nouns and noun phrases from each objective and list them in the table below.
2. If a noun or noun phrase is not a person or place, identify the person or place involved in the entity and store them in the semiotic table shown below.
3. For each noun or noun phrase, identify the synonymies (from the dictionary context) and store them in the synonymies table as indicated below.
4. Consider each synonymy and search for it among the elements of the SRS; if present, identify the referenced element corresponding to the synonymy element.
5. Identify the closure vector from the known initial items to the final known defined items, as indicated in the figure below.
6. Search for the vector elements in the list of prototypes and make the entries in the table below.
<table>
<thead>
<tr>
<th>Prototype name</th>
<th>Initial element</th>
<th>Intermediate element</th>
<th>Final element</th>
</tr>
</thead>
</table>
Table 12. Prototype Table
Figure 5. The closure operation
From a given prototype we search from the input to the output. Initially, start from attribute i and read up to attribute i+m. For a strength, the traversal from attribute i reaches attribute i+m in the positive (forward) direction on reaching the output. For a weakness, the traversal from attribute i+1 falls back to attribute i-1, moving in the negative direction before reaching the output.
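The closure operation of Figure 5 can be read as a fixed-point computation over referenced-to-defined rules taken from the DFT; the sketch below is our interpretation under that assumption, and the attribute names in the usage example are illustrative only.

```python
def closure(input_attrs, rules):
    """Compute the closure of the input attributes under ref->def rules.

    rules: list of (referenced_set, defined_set) pairs taken from the
    DFT. Starting from the known input attributes, repeatedly add the
    defined attributes of any rule whose referenced attributes are
    already available, until a fixed point is reached.
    """
    known = set(input_attrs)
    changed = True
    while changed:
        changed = False
        for refs, defs in rules:
            if set(refs) <= known and not set(defs) <= known:
                known |= set(defs)
                changed = True
    return known
```

If the objective attributes appear in the closure of the input attributes, a closure path from input to objective exists; the SWOT analysis then classifies the features along that path.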
The SWOT analysis identifies the following:
**SWOT analysis (Strengths, Weaknesses, Opportunities, Threats)**
**Strength**: the parameters of the input features which facilitate moving the input syntactics forward towards the objective syntactics.
**Weakness**: the parameters of the input features that obstruct moving those features towards meeting the objectives.
**Opportunities and Threats**: the analogues of the strengths and weaknesses of the system available in the computing ecosystem.
- Identify the strengths of the input synonymies, with respect to the organizational process assets and the SRS, as the features facilitating the activities that move the subset of each closure path forward towards meeting the objective synonymies, as shown in the figure below.
- Identify the weaknesses of the input synonymies, with respect to the organizational process assets and the SRS, as the features obstructing the activities that move the subset of each closure path forward towards the objective synonymies, as shown in the figure below.
- Identify the threats from the enterprise business environment and ecosystem factors that obstruct leveraging the input synonymies in moving the subset of each closure path forward towards the objective synonymies, as shown in the figure below.
3.4.2. Managerial Activities
Software Project Management
The first-level PA's serve as the basis for determining the remaining PA's; they are classified according to PLC phases, SDLC stages, and KA's. Software development life cycle stages: in this work we consider the SDLC stages viz. requirements analysis, high-level design, low-level design, unit coding, integrated coding, unit testing, integration testing, acceptance testing, implementation, delivery, and software maintenance. Project life cycle phases: each project should follow the 5 PLC phases viz. the commencing phase, the scheduling/forecasting phase, the implementation and execution phase, the monitoring, supervision and control phase, and the closing phase.
Project management knowledge areas: based on the task, project management is classified into different knowledge areas; for software development projects the activities are clustered into 10 KA's, viz. CCS project service integration and administration, CCS project service scope and dimension management, CCS project service time schedule management, CCS project service outlay administration, CCS mission QoS value administration, CCS mission service HR and computing resource management, CCS project service communication management, CCS project service risk management, CCS project service procurement management, and CCS project service configuration management. The cluster of camouflaged activities forms the first-level project technological activities.
Technical activity = SDLC stage \( \cap \) PLC phase \( \cap \) KA
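The placement of technical activities in this 3-D space can be illustrated by enumerating the (SDLC stage, PLC phase, knowledge area) triples; the short dimension lists below are illustrative excerpts from the text, not the full sets of 7, 5, and 10 labels.

```python
from itertools import product

# Illustrative (not exhaustive) dimension labels taken from the text.
SDLC_STAGES = ["requirements analysis", "high-level design", "coding"]
PLC_PHASES = ["initiating", "planning", "executing"]
KNOWLEDGE_AREAS = ["scope management", "time management"]

def technical_activity_space():
    """Enumerate the 3-D space in which technical activities are placed:
    one cell per (SDLC stage, PLC phase, knowledge area) triple."""
    return list(product(SDLC_STAGES, PLC_PHASES, KNOWLEDGE_AREAS))
```

With the full dimension sets, the space would contain 7 × 5 × 10 cells, one per candidate placement of a technical activity.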
Differences between CCS software projects and other computing system projects

Table 13. Cloud projects and other projects
<table>
<thead>
<tr>
<th>CCS Software Projects</th>
<th>Other Projects</th>
</tr>
</thead>
<tbody>
<tr>
<td>Syntactic => Words, prepositions, attributes</td>
<td>Syntactic => each part</td>
</tr>
</tbody>
</table>
Table 14. Syntactic and semantic behaviour
<table>
<thead>
<tr>
<th>CCS Software Projects</th>
<th>Other Projects</th>
</tr>
</thead>
<tbody>
<tr>
<td>Syntactic => words, prepositions, attributes</td>
<td>Syntactic => each part</td>
</tr>
<tr>
<td>Semantic => statements</td>
<td>Semantic => combination of elementary parts</td>
</tr>
<tr>
<td>Pragmatics => module</td>
<td>Pragmatics => conjoining of parts serving a purpose</td>
</tr>
<tr>
<td>Behaviour => the serving purpose of the product/service/result</td>
<td>Behaviour => the serving purpose of the product/service/result</td>
</tr>
<tr>
<td>\( \cup_{i=1}^{n} \) pragmatics, where \( n \) varies with the purpose</td>
<td>\( \cup_{i=1}^{n} \) pragmatics, where \( n \) is fixed; there is no scope for change in behaviour</td>
</tr>
<tr>
<td>Components are not fixed</td>
<td>Components are fixed</td>
</tr>
<tr>
<td>Supports configuration management using DevOps</td>
<td>Does not support configuration management using DevOps</td>
</tr>
<tr>
<td>Gives scope for modification, since it has to obey the open-closed principle of good software; thus the behaviour is a linked list of pragmatics</td>
<td>No scope for modification; thus the behaviour is an array of pragmatics</td>
</tr>
<tr>
<td>The linked list varies with human skill</td>
<td>The array is not human-skill dependent</td>
</tr>
<tr>
<td>Elements cannot easily be replaced: \( n \) varies with human skill, and it is difficult to identify a node in the linked list and to reassemble it</td>
<td>If an element of the array does not work, it can be replaced</td>
</tr>
<tr>
<td>It is intangible</td>
<td>It is tangible</td>
</tr>
<tr>
<td>Faults cannot easily be detected and replaced; we take the support of configuration management</td>
<td>Faults are easily detected and replaced</td>
</tr>
</tbody>
</table>
Table 15. Invisibility factors
<table>
<thead>
<tr>
<th>Invisibility</th>
<th>Human Resources</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( n \approx \) developer's skill</td>
<td>Pragmatics = \( f(\text{syntactics}, \text{semantics}) \), where \( f \) varies with human skills</td>
</tr>
<tr>
<td>Pragmatics = \( f(\text{syntactics}, \text{semantics}) \), where \( f \) depends on the product, service, or result</td>
<td></td>
</tr>
<tr>
<td>Pragmatics is not measurable</td>
<td>Pragmatics is measurable</td>
</tr>
</tbody>
</table>
Table 16. Conformity factors
<table>
<thead>
<tr>
<th>Conformity (CCS software projects)</th>
<th>Conformity (other projects)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pragmatics in software can be organized in a number of ways</td>
<td>Pragmatics can be organized in a single way</td>
</tr>
<tr>
<td>Pragmatics is not conformal</td>
<td>Pragmatics is conformal</td>
</tr>
</tbody>
</table>
### 3.5. Hazards of Developing Technical Activities Based on the Managerial Activities
The purpose of the administrative activities is to augment the QoS of the technological activities (TA's), which must already be in place. Although the technical activities depend on the CCS project service input and on the aims of the project, there is at least the possibility of designing a strategy to obtain the technological actions; in the absence of such a methodology, software developers blindly follow the managerial activities (MA's) to conceive the TA's.
Managerial activities need the quintuplet of quality parameter values of the TA's in order to augment the QoS values. The output of the TA's contains different updated versions of the quality values: at different points in time, the managerial activities are those that enhance the quality values from one level to the next higher level in consecutive order. Managerial activities are defined in a 2-D plane comprising the project life cycle phase axis and the knowledge area axis. Abstracting technical activities from the managerial activities is analogous to transforming 2-D activities into 1-D activities, which is impossible if both dimensional values vary in the ordered pair of the 2-D value. Either the first or the second value must be held constant; only then, once dimensional equipollency is achieved, may the technical activities be developed.
### 4. Continuous Uninterrupted CCS Service Software Security Development Model
The continuous cloud computing system (CCS) service software security development framework aims to mitigate computing system security risks in the earliest stages, where it matters most. Cloud service consuming corporations and service producing ventures fabricate software at an ever faster pace, because time to market is crucial for their business survival; at present, speed in delivering software as a service is the business. Traditional security measures do not keep up with the modern velocity of software service delivery, so security requirements must be assembled into the software itself: even prior to coding, security should be bolted onto the computing system as part of its fabrication. In general, designed-in security is breached by people with malevolent intent. This case study presents a multilevel computing system software security maturity model that synthesizes people, processes, plans, procedures, and tools to ensure that applications and the data they process remain secure.
While the construction team concentrates on sustainable delivery, cyber attackers are engaged in discovering flaws, errors, vulnerabilities, and loopholes, and in exploiting them for their benefit in order to keep the service vendor from offering the service. Defending an application service on the cloud-based Internet is a 24/7 trade: anything that serves the customer and is coupled to the Internet is a target of cyber abuse these days. The impact of a security incident can be measured precisely only in terms of loss of assets, loss of reputation and market share, pressure on profits, disruption of operations, or even insolvency through economic failure and service disruption. The scale of cyber attacks has also broadened and amplified along with technological innovation to embrace organizations and business firms of every size across the industry, whereas firms ponder principally on functionality, profit, ROI, and speed. Security is not a core competency of the average programmer; this in turn means that software, upon completion, contains several vulnerabilities that pose real-world risks, to which a survey report attributed many web data breaches in 2016. There was a time when a plain firewall and antivirus software were all that was required to keep appliances secure on the Internet, which is insufficient for present
<table>
<thead>
<tr>
<th>Schedule</th>
<th>The software activities depend on the results of the previous activities</th>
<th>Some parts can be prepared concurrently</th>
</tr>
</thead>
<tbody>
<tr>
<td>Order cannot be decided previously</td>
<td>Order can be decided previously</td>
<td></td>
</tr>
<tr>
<td>We have to use depth first search algorithm</td>
<td>No algorithm is required</td>
<td></td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Table 18. Showing various steps of schedule factors</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tools and techniques used vary</td>
</tr>
<tr>
<td>We don’t know the infrastructure</td>
</tr>
<tr>
<td>Tools and techniques developed vary</td>
</tr>
<tr>
<td>Input varies</td>
</tr>
<tr>
<td>Cost estimation varies</td>
</tr>
<tr>
<td>As schedule increases, risk also increases; Probability ∝ 1/risk</td>
</tr>
<tr>
<td>Organization process set varies</td>
</tr>
<tr>
<td>Enterprise environment factor varies</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Table 19. Showing various steps of various factors</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tools and techniques used vary</td>
</tr>
<tr>
<td>We don’t know the infrastructure</td>
</tr>
<tr>
<td>Tools and techniques developed vary</td>
</tr>
<tr>
<td>Input varies</td>
</tr>
<tr>
<td>Cost estimation varies</td>
</tr>
<tr>
<td>As schedule increases, risk also increases; Probability ∝ 1/risk</td>
</tr>
<tr>
<td>Organization process set varies</td>
</tr>
<tr>
<td>Enterprise environment factor varies</td>
</tr>
</tbody>
</table>
security breaches. Fortification typically commences by securing the cloud computing infrastructure as a service, then the CCS computing platform, and then the applications developed on top of both, with round-the-clock 360° security including geo-fencing, and with continuous refactoring using Sec-Dev-Ops processes. Above all, weak software leaves open doors for cyber attacks on critical data and computing information systems.
Although violations and security incidents have become an inseparable ingredient of today's commerce, they frequently arise from vulnerable system architecture designs and coding flaws, whether simple or complex. This calls for a united approach that pools engineering best practices to address software security at a tactical level: to administer, implement, and execute measures, and to gauge the advancement of the continuous computing system software security (sec-dev-ops) journey. An amalgamation of dynamic, static, and hybrid automated scanning tools, pooled with programmer education, standards, guidelines, and security acumen, advances security across the SDLC. The main principles of the continuous delivery process are: complete automation in the cloud environment; frequent refactoring whenever something wrong is detected; a definition of done that applies built-in quality; and continuous implementation of improvements and innovations, with all stakeholders accountable for the output. The benefits of continuous security refactoring include shorter time to market, better quality of the security implementation, improved business confidence, and faster delivery at lower operational cost. To meet these needs, automated construction, verification and validation, operation, and service provisioning play a vital role in ROI, with room for improvement.
The five-stage development model focuses on the vital elements of an organization that wants to be reactive to marketplace changes, and at the same time on the less frequent processes that slow down the velocity of computing service software delivery. The primary elements comprise training, manual verification, automatic verification via dynamic and static hybrid security testing, build integration testing, security implementation reviews, the software operating environment, and incident response. New facets of computing system security are introduced progressively throughout the different levels; at every level there is a strong emphasis on automation, complemented by manual verification, screening, and continuous monitoring to thwart vulnerabilities. The stages are as follows.
4.1.1 Fundamental (stage 1). For any corporation, this stage is the baseline. Individuals are informed about the essentials of information security, what ought to be done and what must not be done, to protect the system from the threats of the present cyber world and the real-world risks that arise from unsecured computing systems. The stakeholders are educated in cyber defence hygiene. The quality analysts (QA's) involved in software development look for borderline geo-fencing cases and begin assessment with a negative mindset about what may go wrong. There is no automated security testing at this stage; only the fundamental security rules afforded by the computing system framework vendor are applied, as part of the standard quality checks of the active SDLC process.
4.1.2 Investigative phase (stage 2). The investigative level is a good position from which to commence: a software security scheme can be designed. The stage starts by recognizing and grading the data and application types present in the venture. From there, begin with what matters most: recognize the top list of applications or areas that should be protected and start with preliminary application security training. Educate coders in how applications are hacked so they can prevent such attacks in the actual production environment. Manually analyze vulnerabilities in the vital areas. Focus on automating vulnerability investigation for the application parts exposed through the GUI by repeated crawling. Automate the scanning of crucial application source code to discover vulnerabilities at the architectural design and source code level. Integrate Dynamic Application Security Testing (DAST) and Static Application Security Testing (SAST), or a hybrid of both, as part of the build output, so that coders can look at the results and understand the kinds of threats the application is up against.
4.1.3 Secure software incorporation (stage 3). Secure software incorporation sets automation in motion as one of the large vital focus areas. Train the coders in secure programming models through workshops, and help them understand crucial application controls such as authorization, cryptographic encoding and decoding, error management, input validation, and data handling. Perform periodic vulnerability analysis and penetration testing (Web Application Service Vulnerability Assessment and Penetration Testing, WASVAPT) on the application, and carry out security code reviews. Define coding guidelines and make them the basis of peer code review before the code is checked into the source version control repository. Base the coding principles on industry guidance such as the OWASP Top 10, OWASP ASVS, the Cloud Security Alliance, NIST, and the SANS Top 25. Automate DAST using standard tools; enable the automated tools to remember authentication, session, and user details, and schedule automated checks to run numerous times a day. Run SAST at each coder commit to the source code repository. Fine-tune the automated tools, giving them as much context as possible to reduce the number of false
positives. Break the build for crucial security findings. Model the application for threat and hazard agents, establish architectural design-level mitigations, and articulate security requirements from the recognized risks before the code is actually written. Take advantage of the continuous integration environment to discover and alleviate threats early and often.
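The stage-3 gating logic ("run SAST at each commit, break the build for key findings") can be sketched as follows. The report format and the field names (`rule`, `file`, `severity`) are illustrative assumptions, not the output of any particular scanning tool:

```python
"""Minimal sketch of a CI security gate.  The JSON report format and
severity labels are assumptions for illustration; real SAST/DAST tools
each emit their own formats."""
import json

BLOCKING = {"critical", "high"}  # severities that should break the build

def security_gate(report_json: str) -> int:
    """Return a CI exit code: 1 (break the build) if any blocking
    finding appears in the scanner report, else 0 (pass)."""
    findings = json.loads(report_json)
    blocking = [f for f in findings if f.get("severity") in BLOCKING]
    for f in blocking:
        print(f"BLOCK: {f['rule']} in {f['file']}")
    return 1 if blocking else 0

sample = ('[{"rule": "sql-injection", "file": "login.py", "severity": "high"},'
          ' {"rule": "todo-comment", "file": "util.py", "severity": "info"}]')
assert security_gate(sample) == 1   # high-severity finding breaks the build
assert security_gate("[]") == 0     # empty report passes
```

In a pipeline, the returned code would become the process exit status, so a blocking finding fails the build step exactly as the text prescribes.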
4.1.4 Secure software delivery (stage 4). At the secure software delivery stage, it is essential to employ custom and automated tools that help mitigate threats at a faster pace; defensive architectural design and coding are a vital part of quicker release cycles. Perform systematic, task-specific application security education for all stakeholders involved in the software delivery enterprise. Recognize and track the changes made during each release cycle. Perform WASVAPT on the operating environment and maintain security baselines for it. Investigate the potential of an application firewall to stop attacks in real time. Run DAST and SAST automatically on the changes committed since the previous release cycles: as soon as source code is committed to the version-controlled repository, trigger the tests and break the build according to the security policies; when a commit breaks the build, alert stakeholders about the unstable codebase. Integrate issue or defect tracking systems with the test results. Spot software compliance needs early enough to articulate them as preset security requirements. Maintain a catalogue of recommended software frameworks approved for development. Ensure an incident response process is defined and in place to react to attacks once the application resides in the production environment, and deploy frequently, quickly, and steadily.
4.1.5 Secure software release and monitoring (stage 5). To become a software security champion, automation must be pooled with continuous monitoring and analytics. Grant role-based application security training as part of the employee on-boarding procedure. Periodically appraise the operating environment against engineering benchmarks and watch it for deviations from the configuration baseline. Take measures to protect all associated systems and back-end services, including third-party vendors providing computing systems or comparable services. Security-functional needs, for instance the access rights of a specific function of an application, should be programmed. Support a culture of writing security regression tests for the security fixes that are applied. Recognize continuous improvement areas and pioneer secure mechanisms at the software framework level. Integrate secure source code analysis tools into the coder's IDE for quicker feedback. Consolidate results from the various security tools and continuously communicate the security strength of the application. Adopt progressive techniques such as Real-time Application Security Defence (RASD) to discover and react to threats in real time.
Security is an endless struggle that must be fought with intelligence and vigilance. Even though implementing security in software may sound overwhelming at the commencement, start small with a software security maturity program: it offers a picture of the current state and a road for enhancement. Security automation does not remove the need for manual review; it helps focus on the areas where tooling has limits and where human proficiency brings the most assistance. Continuously delivering software at speed does not inevitably mean more weaknesses delivered at a quicker pace, but rather more agile security and resiliency built into the application right from its inception, persistently.
Geo-fencing
Geo-fencing is a feature in software that uses the Global Positioning System (GPS) or radio frequency identification (RFID) to define geographical boundaries. Geo-fencing allows an administrator to set up triggers so that when a device enters (or exits) the boundaries defined by the administrator, an alert is raised. Many geo-fencing applications incorporate Google Earth, allowing administrators to draw boundaries on top of a satellite view of a specific geographical area; other applications define boundaries by longitude and latitude or through user-created and Web-based maps. Geo-fence virtual barriers can be active or passive. Active geo-fences require an end user to opt into location services and a mobile app to be open; passive geo-fences are always on, as they rely on Wi-Fi and cellular data instead of GPS or RFID and operate in the background.
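The boundary check at the heart of geo-fencing can be sketched for the simplest fence shape, a circle given by a centre coordinate and a radius; the coordinates below are purely illustrative:

```python
"""Minimal circular geo-fence check (a sketch, not any vendor's API)."""
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(device, centre, radius_m):
    """True if the device's (lat, lon) lies within the circular fence."""
    return haversine_m(*device, *centre) <= radius_m

fence_centre = (48.8584, 2.2945)                       # illustrative centre
assert inside_geofence((48.8585, 2.2946), fence_centre, 100)      # ~13 m away
assert not inside_geofence((48.9000, 2.3500), fence_centre, 100)  # several km away
```

An alerting system would evaluate `inside_geofence` on each location update and fire its trigger when the result changes between updates (an enter or exit event).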
The technology has several practical uses, including:
<table>
<thead>
<tr>
<th>Uses</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>drone management</td>
<td>An airport can use geo-fencing to create a temporary no-fly zone that prevents drones from flying within a defined perimeter.</td>
</tr>
<tr>
<td>fleet management</td>
<td>Geo-fencing can alert a dispatcher when a truck driver deviates from the planned route.</td>
</tr>
<tr>
<td>Human resource and asset management</td>
<td>An employee smart card can send an alert to security if the employee tries to enter an unauthorized, geo-fenced area.</td>
</tr>
<tr>
<td>compliance management</td>
<td>Network logs can record geo-fence crossings to document the proper use of networked tools and smart devices and their compliance with established policies.</td>
</tr>
<tr>
<td>Advertising and marketing management</td>
<td>A small business can deliver an opt-in customer a coupon code when the customer's smartphone enters a defined physical area.</td>
</tr>
<tr>
<td>Asset management</td>
<td>A system administrator can set up alerts so that when an authorized smart device is stolen or leaves the boundary it is supposed to be in, the administrator can monitor the device's location and lock it to prevent it from being used illegally.</td>
</tr>
<tr>
<td>law enforcement</td>
<td>An ankle bracelet can alert the authorities if an individual under house arrest leaves the premises.</td>
</tr>
<tr>
<td>home automation</td>
<td>An alert can be raised when the home owner's smartphone is stolen or leaves the home's geo-fenced boundary.</td>
</tr>
</tbody>
</table>
Table 20. Showing various geo-fencing use cases
### 5. Conclusion
Core activities, or technical activities, are currently developed from the managerial activities, whose task is in turn to enhance the quality of the core activities; this is a circular dependency. In this work we eliminate it through a new methodology that designs and develops the core activities directly from the input features and the objectives. The synonymy issues have been amicably resolved. The technical activities and managerial activities are dimensionally different, and we have developed a methodology for attaining dimensional equipollency. Thus, through mathematical empiricism, we have optimized the blending of the two types of activities to achieve the quintuplet of quality parameters, viz. correctness, completeness, robustness, efficacy, and efficiency. Most of the tools and techniques of the managerial activities depicted in PMI's PMBOK are left to expert judgement, and our study shows that the reticulation of juxtaposed interleaved technical and managerial activities is not known. We therefore attempted to develop this reticulation, which provides different updated versions of attributes to the managerial activities and, in these cases, eliminates the need for "expert judgement".
We also presented the effect of scope creep in project management using various case studies and illustrations.
### References
[1] A Guide to the Project Management Body of Knowledge (PMBOK Guide), 4th and 5th editions, Project Management Institute, USA (reviewed once every 4 years).
STRAIGHT-LINE PROGRAMS: A PRACTICAL TEST (EXTENDED ABSTRACT)
I. S. Burnistrov,* A. V. Kozlova,* E. B. Kerpilyansky,* and A. A. Khvorost* UDC 519.256
We present two algorithms that construct a context-free grammar for a given text. The first one is an improvement of Rytter's algorithm that constructs grammars using AVL trees. The second one follows a new approach and constructs grammars using Cartesian trees. Also we compare both algorithms and Rytter's algorithm on various data sets and provide a comparative analysis of the compression ratio achieved by these algorithms and by the LZ77 and LZW algorithms. Bibliography: 15 titles.
1. INTRODUCTION
Nowadays, search algorithms on huge data sets attract much attention. Since compressed representations are convenient for storing and handling huge data sets, one of the possible ways to process huge volumes of data is to work directly with compressed representations.
Obviously, algorithms that process compressed representations depend on the compression mechanism. There are various compressed representations: collage systems [4], string representations using antidictionaries [11], straight-line programs (SLPs) [9], run-length encoding [1], etc. Text compression based on context-free grammars such as SLPs has become a popular research direction for the following reasons. The first reason is that grammars provide a well-structured compressed representation suitable for data searching. The second one is that the SLP-based compression is polynomially equivalent to the compression achieved by the Lempel–Ziv algorithm, which is widely used in practice. This means that, given a text S, there is a polynomial relation between the size of an SLP that derives S and the size of the dictionary stored by the Lempel–Ziv algorithm, see [9]. It should also be noted that the classical LZ78 [15] and LZW [13] algorithms can be regarded as special cases of grammar compression. (At the same time, other compression algorithms from the Lempel–Ziv family, such as LZ77 [14] and the run-length encoding, do not fit directly into the grammar compression model.)
There is a wide class of string problems that can be solved in terms of SLPs. This means that the execution time of such an algorithm depends polynomially on the size of the SLP. For example, the class contains the following problems: Pattern matching [6], Longest common substring [7], Counting all palindromes [7], some versions of the problem Longest common subsequence [12]. At the same time, constants hidden in the big-O notation for algorithms on SLPs are often very large. Also, the aforementioned polynomial relation between the size of an SLP for a given text and the size of the LZ77 dictionary for the same text does not yet guarantee that SLPs provide a good compression ratio in practice. Thus a major question is whether or not there exist SLP-based compression models suitable for practical applications. This question splits into two subquestions addressed in the present paper: How difficult is it to compress data to an SLP-representation? How large a compression ratio do SLPs provide as compared to classical algorithms used in practice?
Let us describe in more detail the content of the paper and its structure. Section 2 gathers some preliminaries about strings and SLPs. In Sec. 3, we present two SLP construction algorithms. The first one is an improved version of Rytter's algorithm [9]. The second one is a new algorithm that constructs SLPs using Cartesian trees. In Sec. 4, we compare the efficiency of the SLP construction algorithms and also present the results of comparing the compression ratio for all SLP-based algorithms and some classical compression algorithms. In Sec. 5, we summarize our results.
A part of the results of the present paper related to the improved version of Rytter's algorithm was presented at the 1st International Conference on Data Compression, Communication, and Processing held in Palermo, Italy, in 2011 (http://ccp2011.dia.unisa.it/CCP_2011/Home.html) and was announced in [2].
2. PRELIMINARIES
We consider strings of characters from a fixed finite alphabet Σ. The length of a string S is the number of its characters, and it is denoted by |S|. The concatenation of strings S1 and S2 is denoted by S1 · S2. A position in a string S is a point between consecutive characters. We number the positions from left to right by 1, 2, . . . , |S| – 1.
*Institute for Mathematics and Computer Sciences, Ural State University, Ekaterinburg, Russia, e-mail: burnistrov.ivan@gmail.com, vorozhe2e@gmail.com, Dembel@yandex.ru, jaamal89@gmail.ru.
It is convenient to consider also the position 0 preceding the text and the position \( |S| \) following it. For a string \( S \) and an integer \( i \) with \( 0 \leq i < |S| \), we define \( S[i] \) as the character between the positions \( i \) and \( i + 1 \) of \( S \). For example, \( S[0] \) is the first character of \( S \). The substring of \( S \) starting at a position \( \ell \) and ending at a position \( r \), \( 0 \leq \ell < r \leq |S| \), is denoted by \( S[\ell \ldots r] \) (in other words, \( S[\ell \ldots r] = S[\ell] \cdot S[\ell + 1] \cdot \ldots \cdot S[r - 1] \)).
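These conventions happen to coincide with Python's string indexing and slicing, which gives a quick way to check them:

```python
# Positions sit between characters; S[i] is the character between
# positions i and i+1, and S[l..r] corresponds to Python's S[l:r].
S = "abaab"
assert S[0] == "a"        # first character, between positions 0 and 1
assert S[1:4] == "baa"    # S[1..4] = S[1] . S[2] . S[3]
assert len(S) == 5        # |S|; valid positions are 0, 1, ..., 5
```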
A straight-line program (SLP) \( \mathbb{S} \) is a sequence of assignments of the form
\[
S_1 = \text{expr}_1, \quad S_2 = \text{expr}_2, \quad \ldots, \quad S_n = \text{expr}_n,
\]
where \( S_i \) are rules and \( \text{expr}_i \) are expressions of the following form:
- \( \text{expr}_i \) is a character of \( \Sigma \) (we call such rules terminal), or
- \( \text{expr}_i = S_\ell \cdot S_r \) with \( \ell, r < i \) (we call such rules nonterminal).
Thus an SLP is a context-free grammar in Chomsky normal form. Obviously, every SLP generates exactly one string in \( \Sigma^+ \). This string is referred to as the text generated by the SLP. For a grammar \( \mathbb{S} \) generating a text \( S \), we define the parse tree of \( \mathbb{S} \) as the derivation tree of \( S \) in \( \mathbb{S} \). We identify terminal symbols with their parents in this tree; after this identification, every internal node has exactly two children.
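To make the definition concrete, here is a minimal (assumed, not the paper's) Python encoding of an SLP as a list of rules in definition order, with a function that expands a rule into the text it derives:

```python
# An SLP rule is either a character of the alphabet (terminal rule)
# or a pair of earlier rule indices (nonterminal rule S_i -> S_l . S_r
# with l, r < i).
def expand(slp, i=None):
    """Return the string derived by rule i (default: the last rule)."""
    if i is None:
        i = len(slp) - 1
    rule = slp[i]
    if isinstance(rule, str):      # terminal rule: a single character
        return rule
    l, r = rule                    # nonterminal rule: concatenation
    return expand(slp, l) + expand(slp, r)

# S_0 -> a, S_1 -> b, S_2 -> S_0 . S_1, S_3 -> S_2 . S_0 derives "aba"
assert expand(["a", "b", (0, 1), (2, 0)]) == "aba"
```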
Figure 1 presents the parse tree of the SLP
\[
F_0 \rightarrow b, \quad F_1 \rightarrow a, \quad F_2 \rightarrow F_1 \cdot F_0, \quad F_3 \rightarrow F_2 \cdot F_1, \quad F_4 \rightarrow F_3 \cdot F_2, \quad F_5 \rightarrow F_4 \cdot F_3, \quad F_6 \rightarrow F_5 \cdot F_4,
\]
which derives the 6th Fibonacci word \( abaababaabaab \).
In this example, the SLP derives a text of length 13 and contains 7 rules. In the general case, the \( n \)th Fibonacci word can be derived from the following SLP with \( n + 1 \) rules:
\[
F_0 \rightarrow b, \quad F_1 \rightarrow a, \quad F_2 \rightarrow F_1 \cdot F_0, \quad F_3 \rightarrow F_2 \cdot F_1, \quad \ldots, \quad F_n \rightarrow F_{n-1} \cdot F_{n-2}.
\]
Recall that the length of the \( n \)th Fibonacci word is equal to the \((n+1)\)th Fibonacci number, i.e., the nearest integer to \( \frac{\varphi^{n+1}}{\sqrt{5}} \) where \( \varphi = \frac{1 + \sqrt{5}}{2} \) (the golden ratio). Thus for some texts, their compressed representation using SLPs may be exponentially smaller than the initial text.
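The Fibonacci SLP above can be built and checked programmatically; note that the length of the derived text can be computed without expanding it, which is exactly what makes SLPs useful as a compressed representation (a sketch under the list-of-rules encoding assumed earlier, not the paper's implementation):

```python
def fib_slp(n):
    """Rules F_0 -> b, F_1 -> a, F_i -> F_{i-1} . F_{i-2} for 2 <= i <= n."""
    slp = ["b", "a"]
    for i in range(2, n + 1):
        slp.append((i - 1, i - 2))
    return slp

def derived_length(slp, i):
    """Length of the text derived by rule i, computed on the grammar
    itself, without ever materializing the (possibly huge) text."""
    rule = slp[i]
    if isinstance(rule, str):
        return 1
    l, r = rule
    return derived_length(slp, l) + derived_length(slp, r)

slp = fib_slp(6)
assert len(slp) == 7                  # n + 1 rules
assert derived_length(slp, 6) == 13   # |6th Fibonacci word| = 13
```

With memoization on the rule index, `derived_length` runs in time linear in the number of rules even when the derived text is exponentially long.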
In the paper, we adopt the following conventions: every SLP is denoted by a capital blackboard bold letter, for example, \( \mathbb{S} \). Every rule of this SLP (and every internal node in its parse tree) is denoted by the same letter with subscripts, for example, \( S_1, S_2, \ldots \). The size of an SLP \( \mathbb{S} \) is the number of its rules, and it is denoted by \( |\mathbb{S}| \). The height of a node in a binary tree is defined as follows. The height of a terminal node (leaf) is equal to 0 by definition. The height of a nonterminal node is equal to 1 plus the maximum of the heights of its children. We denote the height of a rule \( S_i \) by \( h(S_i) \).
A concatenation of SLPs \( \mathbb{S} \) and \( \mathbb{S}' \) is an SLP that derives \( S \cdot S' \), and it is denoted by \( \mathbb{S} \cdot \mathbb{S}' \). We would like to emphasize that concatenation of SLPs is not a rigidly defined operation (unlike concatenation of strings), since there are various ways to construct an SLP that derives \( S \cdot S' \) from the SLPs \( \mathbb{S} \) and \( \mathbb{S}' \). So a particular way of concatenating SLPs depends on the context of the problem under consideration.
3. SLP CONSTRUCTION ALGORITHMS
3.1. SLPs, factorizations, and trees. The SLP construction problem can be stated as follows:
Problem: SLP construction.
Input: a text $S$.
Output: an SLP $\mathbb{S}$ that derives $S$.
The problem of constructing a minimum-size grammar generating a given text is known to be NP-hard [3]. Hence we should look for polynomial-time approximation algorithms. One of the key approaches to such algorithms is to construct a factorization of a given text and to build some binary search tree using it. If we fix some factorization, then at each step an SLP construction algorithm can construct an SLP that derives a particular factor. Next the algorithm concatenates the SLP built at the previous steps with the SLP that derives the particular factor. It is obvious that such an algorithm depends on both the size of the text and the size of the factorization. Hence the SLP construction problem can be reformulated in the following way.
Problem: SLP construction using factorization.
Input: a text $S$ and its LZ-factorization $F_1, F_2, \ldots, F_k$.
Output: an SLP $S$ that derives $S$.
Rytter in [9] uses a natural factorization generated by the LZ77 compression algorithm as the main factorization. This choice ensures a polynomial relation between the size of an SLP deriving the text $S$ and the size of the LZ77 dictionary for $S$. Using the properties of the LZ-factorization, we get the following relation: the SLP constructed for a particular factor is contained in the SLP built at the previous steps. This relation substantially increases the efficiency of the construction.
Definition 3.1. The LZ-factorization of a text $S$ is the decomposition $S = F_1 \cdot F_2 \cdot \ldots \cdot F_k$ where $F_1 = S[0]$ and, for $i > 1$, $F_i$ is the longest prefix of the suffix of $S$ starting at position $|F_1 \cdot \ldots \cdot F_{i-1}|$ that occurs as a substring in $F_1 \cdot \ldots \cdot F_{i-1}$, or the single letter $S[|F_1 \cdot \ldots \cdot F_{i-1}|]$ if this prefix is empty. The number $k$ is called the size of the factorization.
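Definition 3.1 can be transcribed directly into a quadratic-time procedure (an illustrative sketch; efficient implementations use suffix trees or similar structures, and the function name is ours):

```python
def lz_factorize(s: str) -> list:
    """Naive LZ-factorization following Definition 3.1: each factor is the
    longest prefix of the unprocessed suffix that occurs as a substring of
    the already processed prefix, or a single letter if no such prefix
    exists (in particular, for a letter not seen before)."""
    factors = []
    i = 0
    while i < len(s):
        processed = s[:i]
        # find the longest prefix of s[i:] occurring in s[:i]
        length = 0
        while i + length < len(s) and s[i:i + length + 1] in processed:
            length += 1
        if length == 0:
            length = 1          # new letter: the factor is a single character
        factors.append(s[i:i + length])
        i += length
    return factors
```

For instance, `lz_factorize("abab")` yields the factors `a`, `b`, `ab`.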
There is only one condition on the structure of the parse tree of an SLP: it is a maximal binary tree. This means that every internal node of an SLP has exactly two children (the term is taken from coding theory: it is clear that a binary prefix code is maximal by inclusion if and only if its binary tree is maximal in the above sense). There exist several types of binary trees. Which type is more suitable for the SLP construction problem? The algorithm proposed in [9] uses balanced trees, namely, AVL trees.
Definition 3.2. An AVL tree is a binary tree such that for every nonterminal node, the heights of its children differ at most by 1.
There is a bound on the height of an AVL tree logarithmic in the number of its nodes, see [5]. It is the main reason why this type of trees is used in Rytter’s algorithm. At the same time, the algorithm is nontrivial and resource-intensive. As an alternative, in Sec. 3.4 we consider an algorithm that constructs SLPs using Cartesian trees.
Definition 3.3. A binary search tree is a binary tree in which every node is assigned a number called a key such that the following properties are satisfied:
- the left subtree of a node $X$ contains only nodes with keys less than the key of $X$;
- the right subtree of a node $X$ contains only nodes with keys greater than the key of $X$;
- both left and right subtrees are also binary search trees.
A heap is a binary tree in which each node is assigned a number called a priority and for every node its priority is greater than the priorities of its children.
A Cartesian tree is a binary tree in which each node is assigned a pair of numbers: a key and a priority. Thus a Cartesian tree is a binary search tree with respect to keys and a heap with respect to priorities.
There is a probabilistic estimate on the height of a Cartesian tree logarithmic in the number of its nodes ([10], see Sec. 3.4 below). At the same time, an algorithm for constructing a Cartesian tree spends substantially less time on balancing nodes. It is interesting to compare how the choice of the underlying data structure affects the properties of the SLP returned by the algorithm.
3.2. Rytter's algorithm and its bottleneck. Rytter [9] proved the following theorem.
**Theorem 3.1.** Given a string $S$ of length $n$ and its LZ-factorization of length $k$, one can construct an SLP for $S$ of size $O(k \log n)$ in time $O(k \log n)$.
The proof of Theorem 3.1 contains an algorithm for constructing an SLP. We recall some key ideas of the algorithm, since they are important for the further discussion.
An *AVL grammar* is an SLP whose parse tree is an AVL tree. The key operation of the algorithm is the concatenation of AVL grammars. The following lemma provides an upper bound on the complexity of this operation.
**Lemma 3.2.** Let $S_1, S_2$ be two AVL grammars. Then we can construct in time $O(|h(S_1) - h(S_2)|)$ an AVL grammar $S = S_1 \cdot S_2$ that derives the text $S_1 \cdot S_2$ by adding only $O(|h(S_1) - h(S_2)|)$ nonterminals.
**Problem:** SLP construction using factorization.
**Input:** a text $S$ and its LZ-factorization $F_1, F_2, \ldots, F_k$.
**Output:** an SLP $S$ that derives $S$.
**Rytter’s algorithm:** The algorithm constructs an SLP by induction on $k$.
- **Base.** Initially, $S$ is equal to the terminal rule that derives $S[0]$.
- **Main loop.** Let $i \geq 1$ be an integer, and assume that an SLP $S$ that derives $F_1 \cdot F_2 \cdot \ldots \cdot F_i$ has already been constructed. Since the LZ-factorization of $S$ is fixed, an occurrence of $F_{i+1}$ in $F_1 \cdot F_2 \cdot \ldots \cdot F_i$ is known. The algorithm takes a subgrammar of $S$ that derives $F_{i+1}$ and obtains rules $S_1, \ldots, S_r$ such that $F_{i+1} = S_1 \cdot S_2 \cdot \ldots \cdot S_r$. Since $S$ is balanced, we have $r = O(\log |S|)$. Using Lemma 3.2, the algorithm concatenates these rules in some specific order (see [9] for details) and sets the next value of $S$ to be equal to the result of concatenating the previous value of $S$ with $S_1 \cdot \ldots \cdot S_r$.
It is well known that maintaining the balance of an AVL tree is quite a difficult task. After adding a new node that breaks the balance of an AVL tree, the modified tree should be rebalanced using a local transformation called a *rotation*. There are two types of rotations. Both are presented in Fig. 2. Every rotation may generate at most three new nodes (such nodes are marked by primes in Fig. 2). Also, every rotation may generate at most three unused rules.

Fig. 2. The two types of rotations of an AVL tree.
It follows from Lemma 3.2 that concatenating two AVL grammars with drastically different heights generates a lot of new nodes. Adding a large number of new nodes to an AVL grammar generates many rotations. In the main loop of Rytter's algorithm, the height of the current AVL grammar $S$ is constantly growing. At the same time, at each iteration $S$ concatenates with AVL grammars of relatively small height. The following example shows that the total number of rotations in Rytter's algorithm may be substantially greater than the optimal one.
Example 1. Let \( S = a^{2^n} b c^{2^n} \) where \( n \) is a fixed integer. Consider the LZ-factorization of \( S \):
\[
S = a \cdot a \cdot a^2 \cdot a^4 \cdot \ldots \cdot a^{2^{n-1}} \cdot b \cdot c \cdot c \cdot c^2 \cdot c^4 \cdot \ldots \cdot c^{2^{n-1}}.
\]
Let us denote the factors by \( F_1, F_2, \ldots, F_{2n+3} \) in the order they occur in the LZ-factorization, and let \( F_1, F_2, \ldots, F_{2n+3} \) also denote the SLPs that correspond to these factors.
Let us estimate the number of rotations that can be generated in the sequence of concatenations \( (\ldots((F_1 \cdot F_2) \cdot F_3)\ldots) \cdot F_{2n+3} \). No rotations are needed to concatenate \( F_1, F_2, \ldots, F_{n+1} \), since at each step we concatenate complete binary trees of equal height. So the parse tree of \( F_1 \cdot F_2 \cdot \ldots \cdot F_{n+1} \) is a complete binary tree of height \( n \), and the next concatenation \( (F_1 \cdot F_2 \cdot \ldots \cdot F_{n+1}) \cdot F_{n+2} \) generates an AVL tree of height \( n + 1 \). Obviously, each successive concatenation breaks the balance of the current AVL tree and generates at least one rotation. Thus the whole concatenation generates at least \( n + 1 \) and at most \( O(n^2) \) rotations (the upper bound follows from the bound on the number of new nodes from Lemma 3.2).
Note that if the algorithm could choose the optimal order of concatenations, namely,
\[
((\ldots((F_1 \cdot F_2) \cdot F_3)\ldots) \cdot F_{n+1}) \cdot ((\ldots((F_{n+2} \cdot F_{n+3}) \cdot F_{n+4})\ldots) \cdot F_{2n+3}),
\]
then it would generate no rotations at all.
One of the possible directions for optimizing Rytter’s algorithm is to determine a “good” order of concatenations. Another one is to minimize the number of queries to an AVL grammar. Minimizing the number of queries to AVL grammars becomes important when the size of the input text becomes huge and we cannot store an AVL tree in the memory. Formally, this means that the cost of a query to an AVL tree is greater than the cost of computations using the random access memory. Our next example shows that several factors can be processed together if they occur in a single SLP.
Example 2. Let \( n > 0 \) be an integer and \( S = b \cdot a^{2^{n-1}} \cdot b \cdot a^{2^{n-2}} \cdot \ldots \cdot b \cdot a \). The length of \( S \) is equal to \( 2^n + n - 1 \).
Consider the LZ-factorization of \( S \):
\[
b \cdot a \cdot a \cdot a^2 \cdot a^4 \cdot \ldots \cdot a^{2^{n-2}} \cdot b\,a^{2^{n-2}} \cdot b\,a^{2^{n-3}} \cdot \ldots \cdot b\,a.
\]
Let \( S_1 \) be an SLP that derives \( b \cdot a^{2^{n-1}} \). It is obvious that all the factors starting from \( b \cdot a^{2^{n-2}} \) occur in the text derived by \( S_1 \). Therefore, one can process them together. So we can construct an SLP \( S_2 \) that derives \( b \cdot a^{2^{n-2}} \), an SLP \( S_3 \) that derives \( b \cdot a^{2^{n-3}} \), etc., up to an SLP \( S_n \) that derives \( b \cdot a \). Finally, we can concatenate these SLPs in the following order: \( S_1 \cdot (S_2 \cdot (\ldots (S_{n-1} \cdot S_n) \ldots)) \).
3.3. Optimization of Rytter’s algorithm. The main ideas of our improved algorithm are to process several factors together and to concatenate each group of factors choosing an optimal order. The intuition behind the algorithm is very simple: if it has already constructed a huge SLP, then most factors occur in the text generated by this SLP and can be processed together.
Modified Rytter’s algorithm. Using the input text \( S \) and its LZ-factorization \( F_1, F_2, \ldots, F_k \), the algorithm constructs an SLP \( S \) that derives \( S \).
Base. Initially, \( S \) is equal to the terminal rule that derives \( S[0] \).
Main loop. Let \( S \) be an SLP that derives the text \( F_1 \cdot F_2 \ldots \cdot F_i \) where \( 0 < i < k \). Let \( \ell \in \{1, \ldots, k-i\} \) be the largest integer such that each factor from the set \( F_{i+1}, \ldots, F_{i+\ell} \) occurs in \( F_1 \cdot F_2 \ldots \cdot F_i \). Since the LZ-factorization is fixed, the value of \( \ell \) can be obtained by a linear search on the factors. SLPs \( F_{i+1}, F_{i+2}, \ldots, F_{i+\ell} \) that derive the texts \( F_{i+1}, F_{i+2}, \ldots, F_{i+\ell} \) can be computed by an application of the subgrammar cutting algorithm (analogously to [9]).
Next, the algorithm concatenates \( F_{i+1}, \ldots, F_{i+\ell} \). It optimizes the order of concatenations using dynamic programming. Let \( \varphi(p, q) \) be the function that is calculated by the following recurrence formula:
\[
\varphi(p, q) = \begin{cases}
0 & \text{if } p = q, \\
\min\limits_{p \le r < q} \Bigl( \varphi(p, r) + \varphi(r + 1, q) + \bigl| \log(|F_{i+p}| + \cdots + |F_{i+r}|) - \log(|F_{i+r+1}| + \cdots + |F_{i+q}|) \bigr| \Bigr) & \text{otherwise.}
\end{cases}
\]
The value \( \varphi(p, q) \) is proportional to the upper bound on the number of rotations of a grammar tree that are performed during the concatenation of \( F_{i+p}, F_{i+p+1}, \ldots, F_{i+q} \). The upper bound follows from Lemma 3.2 and from the estimate on the height of an AVL tree from [5]. Typically, the upper bound is too large, so it is more accurate to regard the function \( \varphi(p, q) \) as a heuristic with which the algorithm obtains “good” groups of factors.
The algorithm fills an \( \ell \times \ell \) table with the values \( \varphi(p, q) \), \( 1 \leq p \leq q \leq \ell \). In the case where \( p < q \), it additionally stores the integer \( r \in \{p, p + 1, \ldots, q - 1\} \) on which the minimum of the following expression is reached:
\[
\varphi(p, r) + \varphi(r + 1, q) + \bigl| \log(|F_{i+p}| + \cdots + |F_{i+r}|) - \log(|F_{i+r+1}| + \cdots + |F_{i+q}|) \bigr|.
\]
The order of filling out the table is as follows: all cells \((p, q)\) such that \( p \geq q \) are set to be equal to 0; next the algorithm fills the cells such that \( q - p = 1 \), then the cells such that \( q - p = 2 \), etc. Thus the algorithm does not recompute recursively the values \( \varphi(p, r) \) and \( \varphi(r + 1, q) \), since they already exist in the table, and every single value \( \varphi(p, q) \) can be calculated in time \( O(\ell) \). Figure 3 presents the pseudo-code of the corresponding procedure. Thus the algorithm fills out the table using time \( O(\ell^3) \) and space \( O(\ell^2) \).
result = +∞;
L = 0; R = |F_{i+p}| + \cdots + |F_{i+q}|;
for (int r = p; r < q; r++) {
    L += |F_{i+r}|;
    R -= |F_{i+r}|;
    tmp = φ(p, r) + φ(r + 1, q) + |log L - log R|;
    if (tmp < result)
        result = tmp;
}

Fig. 3. A pseudo-code that computes the value \( \varphi(p, q) \).
The algorithm then reads the stored values of \( r \), starting from the cell \((1, \ell)\), and determines the order of concatenations for \( F_{i+1}, \ldots, F_{i+\ell} \) in time \( O(\ell) \). Using this order, it constructs an SLP \( F \) that derives \( F_{i+1} \cdot F_{i+2} \cdot \ldots \cdot F_{i+\ell} \). Finally, the algorithm concatenates \( S \) and \( F \) and sets \( S \) to be equal to \( S \cdot F \).
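The whole dynamic program, including the recovery of the concatenation order, can be sketched as follows (an illustrative transcription with 0-based indices; the function name `concat_order` is ours):

```python
from math import log2

def concat_order(sizes):
    """Fill the table of phi(p, q) values for factor lengths sizes[p..q]
    and record the split point r minimizing
        phi(p, r) + phi(r+1, q) + |log L - log R|,
    where L and R are the total lengths of the left and right groups.
    Returns the minimal cost and a fully parenthesized order."""
    l = len(sizes)
    phi = [[0.0] * l for _ in range(l)]
    best = [[0] * l for _ in range(l)]
    for width in range(1, l):            # cells with q - p = width
        for p in range(l - width):
            q = p + width
            result = float("inf")
            L, R = 0, sum(sizes[p:q + 1])
            for r in range(p, q):
                L += sizes[r]            # running left-group length
                R -= sizes[r]            # running right-group length
                tmp = phi[p][r] + phi[r + 1][q] + abs(log2(L) - log2(R))
                if tmp < result:
                    result, best[p][q] = tmp, r
            phi[p][q] = result

    def order(p, q):                     # recover the concatenation order
        if p == q:
            return p
        r = best[p][q]
        return (order(p, r), order(r + 1, q))

    return phi[0][l - 1], order(0, l - 1)
```

On factor lengths 1, 1, 2, 4 the left-to-right order has cost 0: at every step, two complete trees of equal height are concatenated.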
**Theorem 3.3.** Let \( F_1, F_2, \ldots, F_k \) be the LZ-factorization of a text \( S \) of length \( n \). The above algorithm constructs an SLP for \( S \) of size \( O(k \log n) \).
**Proof:** The argument essentially repeats the corresponding part of the proof of Theorem 3.1, but we reproduce it for the sake of completeness.
Let us prove the theorem by induction on the number of factors. The base is clear. Assume that an SLP \( S \) that derives the text \( F_1 \cdot F_2 \cdot \cdots \cdot F_i \), where \( 0 < i < k \), is already built and has size \( O(i \log |F_1 \cdot F_2 \cdot \cdots \cdot F_i|) = O(i \log n) \). Let \( F_{i+1}, \ldots, F_{i+\ell} \) be the next factors that occur in \( F_1 \cdot F_2 \cdot \cdots \cdot F_i \). Let us consider subgrammars \( F_{i+1}, F_{i+2}, \ldots, F_{i+\ell} \) of \( S \) that derive the texts \( F_{i+1}, F_{i+2}, \ldots, F_{i+\ell} \), respectively. The height of \( F_{i+j} \) is not greater than \( 1.4404 \log |F_{i+j}| \), see [5]. Hence, by Lemma 3.2, the number of new rules that the algorithm adds at each step of constructing an SLP \( F \) that derives \( F_{i+1} \cdot F_{i+2} \cdot \cdots \cdot F_{i+\ell} \) is at most \( O(\log |F_{i+1}| + \log |F_{i+2}| + \cdots + \log |F_{i+\ell}|) = O(\log n) \). Each rotation of an AVL grammar generates at most three new rules. The total number of rules in the SLP \( F \) that are absent in the SLP \( S \) is at most \( O(\ell \log n) \). Analogously, the number of new rules that the algorithm adds during the concatenation of \( S \) and \( F \) is \( O(\log n) \). Hence the size of the SLP \( S \cdot F \) that derives the text \( F_1 \cdot F_2 \cdot \cdots \cdot F_{i+\ell} \) is \( O((i + \ell) \log n) \).
The time complexity of the modified Rytter's algorithm cannot be less than the complexity of the original algorithm from [9], since the latter is the special case of the modification described above in which all groups are of size 1. On the one hand, the new algorithm generates fewer rotations; on the other hand, it spends some extra time on calculating the order of concatenations. The cumulative influence of both factors on the execution time is unclear. In Sec. 4, we give a practical comparison of the algorithms under discussion.
**3.4. SLP construction using Cartesian trees.** As we have already noticed, SLP construction algorithms that use AVL trees spend a lot of time on balancing. We think that the following idea may be useful for solving the SLP construction problem: to replace the data structure used for representing SLPs with another one that allows the algorithm to spend less time on balancing. In this section, we present an algorithm that constructs SLPs using Cartesian trees.
There is a probabilistic bound on the height of a Cartesian tree that is logarithmic in the total number of nodes (see [10]). Namely, if the priorities of nodes are chosen at random, independently, and with the same distribution, then the expected height of a Cartesian tree with \( n \) nodes is \( O(\log n) \); moreover, for every fixed constant \( c > 1 \), the probability that the height of a Cartesian tree with \( n \) nodes is greater than \( 2c \ln n \) is at most \( n \left( \frac{1}{n} \right)^{2c \ln(c/e)} \).
To construct an SLP from an LZ-factorization, we need two operations: cutting a subtree with specified positions and concatenating two trees. For a Cartesian tree, it is easy to implement the following operations: split is the operation of splitting a tree into two subtrees with a specified position, and merge is the operation of merging two trees. But the standard implementation of the merge operation requires the following condition: every key of the first tree should be less than any key of the second tree. Hence it is necessary to regenerate the keys of the tree obtained after applying the split operation. This situation appears in the main loop of the SLP construction algorithm. After the algorithm has constructed a tree \( T \) that derives a prefix of the input text, it cuts a subtree \( T' \) of \( T \) that derives the next factor and applies the merge operation to \( T \) and \( T' \). Therefore, the algorithm should completely regenerate the keys of \( T' \) before merging \( T \) and \( T' \). To make this operation efficient, it is profitable to avoid explicitly storing keys. Next we explain why it is possible.\(^1\)
Let \( T \) be an arbitrary Cartesian tree, and assume that the information about its keys has been lost. One can recover the linear order relation on the keys using only the tree structure. The recovering algorithm recursively traverses the tree in the following order: the left subtree, the root, the right subtree. The number of the current node in this order is greater by one than the number of nodes in the subtree that the algorithm has visited before visiting the current node. Therefore, we are able to avoid explicitly storing the keys.
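A self-contained sketch of this recovery by in-order traversal (the `Node` class is illustrative; real nodes would also carry priorities and rule information):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def recover_order(root: Optional[Node]) -> List[Node]:
    """Recover the linear order on the lost keys of a Cartesian tree with
    implicit keys: an in-order traversal (left subtree, root, right
    subtree) visits the nodes in increasing order of their keys, and the
    rank of a node is one plus the number of nodes visited before it."""
    order: List[Node] = []
    def visit(node: Optional[Node]) -> None:
        if node is None:
            return
        visit(node.left)
        order.append(node)
        visit(node.right)
    visit(root)
    return order
```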
**Definition 3.4.** A Cartesian tree with implicit keys is a Cartesian tree that does not store the information about keys.
In what follows, we assume that the key of a node of a Cartesian tree \( T \) with implicit keys is equal to the number of the key in the linear order on all keys of \( T \). We denote the subtree of \( T \) with the root at a node \( T_i \) by \( T_i \), and the total number of nodes in this subtree, by \( \text{count}(T_i) \). If \( T_l \) and \( T_r \) are the left and right children of \( T_i \), respectively, then we use the following short notation for this fact: \( T_i = (T_l, T_r) \). It may happen that the nodes \( T_l \) and/or \( T_r \) are empty. For example, if \( T_i \) is a leaf, then both \( T_l \) and \( T_r \) are empty.
Let us describe an implementation of the split and merge operations for Cartesian trees with implicit keys.
**The split operation.** The input is a Cartesian tree \( T \) with implicit keys and a positive integer \( k \) where \( k \leq |T| + 1 \). The output is a pair of Cartesian trees \( L \) and \( R \) with implicit keys such that \( L \) contains all nodes of \( T \) with keys less than \( k \) and \( R \) contains all the other nodes of \( T \). By definition, on the input consisting of the empty tree and \( k = 1 \), the operation returns two empty trees.
The algorithm starts from the root \( T_0 \) of \( T \) and works recursively. Let \( T_0 = (T_l, T_r) \). The following cases can occur:

1. **If** \( k \leq \text{count}(T_l) + 1 \), **then** \( T_0 \) lies in \( R \) and the algorithm splits the subtree \( T_l \). Assume that the split operation returns two trees \( L' \) and \( R' \) on the input \( (T_l, k) \). Then the algorithm returns \( L = L' \) and \( R = (R', T_r) \).
2. **If** \( k > \text{count}(T_l) + 1 \), **then** \( T_0 \) lies in \( L \) and the algorithm splits the subtree \( T_r \). Assume that the split operation returns two trees \( L' \) and \( R' \) on the input \( (T_r, k - \text{count}(T_l) - 1) \). Then the algorithm returns \( L = (T_l, L') \) and \( R = R' \).
We would like to emphasize that at each node \( T_i \) the algorithm stores the number \( \text{count}(T_i) \). Since at every step the algorithm either terminates or recursively calls the split operation on a subtree of smaller height, the running time of the algorithm is proportional to the height of \( T \), i.e., it is \( O(\log |T|) \) in expectation.
Since the parse tree of an SLP is a maximal binary tree, we should modify the split operation to guarantee that the resulting trees are maximal. To achieve this aim, it suffices to delete all nodes that have exactly one child from both output trees. Formally, if a node \( T_j \) has a single child \( T_k \), then we delete \( T_j \) from the tree. If \( T_j \) is the root, then we choose \( T_k \) as the new root after deleting \( T_j \). The priorities of nodes do not change.
Obviously, if the input tree \( T \) is maximal, then at each step of the algorithm, in each output tree \( L \) or \( R \) there is at most one node with a single child. Thus the time complexity of “maximizing” both trees \( L \) and \( R \) is \( O(\log |T|) \). In fact, a practical implementation of the maximization procedure does not require a separate pass through the output, since it can be integrated into the algorithm. In what follows, by the split operation we mean its modified version that returns maximal trees.
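A simplified sketch of split on a treap-like node with subtree counts (the `Node` class is ours; the maximization step that removes single-child nodes is omitted):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    priority: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None
    count: int = 1                      # size of the subtree rooted here

def cnt(t: Optional[Node]) -> int:
    return t.count if t else 0

def update(t: Node) -> Node:
    t.count = cnt(t.left) + 1 + cnt(t.right)
    return t

def split(t: Optional[Node], k: int):
    """Split a Cartesian tree with implicit keys: the left output receives
    the nodes with implicit keys less than k, the right output receives
    all the other nodes. The implicit key of the root is cnt(t.left) + 1,
    so the root goes to the right part exactly when k <= cnt(t.left) + 1."""
    if t is None:
        return None, None
    if k <= cnt(t.left) + 1:            # root belongs to the right part
        l, r = split(t.left, k)
        t.left = r
        return l, update(t)
    else:                               # root belongs to the left part
        l, r = split(t.right, k - cnt(t.left) - 1)
        t.right = l
        return update(t), r
```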
---
\(^1\)Unfortunately, the elegant idea of a Cartesian tree without explicitly stored keys has not yet been considered in the academic literature. A rather complete account of this idea is presented in the Internet publication [8] in Russian. We know for certain that it was first applied at an ACM programming contest in 2002 by N. V. Durov and A. S. Lepatun (members of the student team of the St. Petersburg State University).
The *merge operation*. The input is two Cartesian trees $T'$ and $T''$ with implicit keys. The output is a Cartesian tree $T$ with implicit keys that contains all nodes from both $T'$ and $T''$. By definition, if $T'$ is empty, then the operation returns $T''$, and vice versa, if $T''$ is empty, then the operation returns $T'$.
The algorithm starts from the roots $T_0'$ and $T_0''$ of the trees $T'$ and $T''$, respectively, and works recursively. Let $T_0' = (T_l', T_r')$ and $T_0'' = (T_l'', T_r'')$. Since the priorities of all nodes were chosen at random and independently, we may assume that they are pairwise distinct. The following two cases can occur:
(M1) If the priority of the node $T_0'$ is greater than that of the node $T_0''$, then the algorithm chooses $T_0'$ as the root of $T$. The left subtree of the root is $T_l'$, and the right subtree is the tree returned by the *merge* operation on the input $(T_r', T'')$.

(M2) If the priority of the node $T_0'$ is less than that of the node $T_0''$, then the algorithm chooses $T_0''$ as the root of $T$. The right subtree of the root is $T_r''$, and the left subtree is the tree returned by the *merge* operation on the input $(T', T_l'')$.
Since at each step of the recursion, the algorithm walks down either the left subtree or the right subtree, its expected execution time is $O(\log |T'| + \log |T''|)$.
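The standard merge, rules (M1) and (M2), on the same kind of illustrative node structure (the modification that keeps the tree maximal is omitted):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    priority: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None
    count: int = 1                      # size of the subtree rooted here

def cnt(t: Optional[Node]) -> int:
    return t.count if t else 0

def update(t: Node) -> Node:
    t.count = cnt(t.left) + 1 + cnt(t.right)
    return t

def merge(t1: Optional[Node], t2: Optional[Node]) -> Optional[Node]:
    """Standard merge of Cartesian trees with implicit keys: every node of
    t1 precedes every node of t2 in the in-order traversal of the result.
    Rule (M1): the root of t1 wins if its priority is larger; rule (M2)
    is the symmetric case."""
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    if t1.priority > t2.priority:       # (M1)
        t1.right = merge(t1.right, t2)
        return update(t1)
    else:                               # (M2)
        t2.left = merge(t1, t2.left)
        return update(t2)
```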
As in the case of the *split* operation, we should modify the *merge* operation in order to use it in the SLP construction. There are two problems. The first one is that we should guarantee that the resulting tree is a maximal binary tree. The second one is that we should guarantee that the array of leaves of $T$ is the concatenation of the array of leaves of $T'$ and the array of leaves of $T''$. Both problems can be solved using the following simple modification of the algorithm.
Let $T_i'$ be the rightmost leaf of $T'$ and $T_j''$ be the leftmost leaf of $T''$. Note that the extreme leaves of Cartesian trees with implicit keys are defined unambiguously. Let $y'$ and $y''$ be the priorities of $T_i'$ and $T_j''$, respectively, and let $y_* = \min(y', y'')$. We set the priorities of both $T_i'$ and $T_j''$ to be equal to $y_*$. Applying rules (M1) and (M2), the algorithm eventually reaches a configuration with the current roots equal to the leaves $T_i'$ and $T_j''$. At this moment, the algorithm adds the new node $U = (T_i', T_j'')$ with priority $y_*$ and completes the construction of the tree $T$ by attaching the three-element subtree rooted at $U$ instead of a single leaf. Clearly, the modified algorithm takes the same time $O(\log |T'| + \log |T''|)$ as the standard algorithm. It is easy to check that if the trees $T'$ and $T''$ are maximal, then the output tree $T$ is maximal too. Moreover, $T$ is a concatenation of $T'$ and $T''$ in the sense of SLPs, see Sec. 2. In what follows, by the *merge* operation we mean its modified version that returns a maximal tree.
We say that an SLP is a *Cartesian SLP* if its parse tree is a Cartesian tree with implicit keys. Now we introduce an algorithm for constructing a Cartesian SLP.
**Algorithm for constructing a Cartesian SLP.**
**Input:** a text $S$ and its LZ-factorization $F_1, F_2, \ldots, F_k$.
**Output:** a Cartesian SLP that derives $S$.
**Base.** Initially, $S$ is equal to the terminal rule that derives $F_1 = S[0]$.

**Main loop.** Assume that a Cartesian SLP $S$ that derives the text $F_1 \cdot F_2 \cdot \ldots \cdot F_i$ has already been constructed for a fixed integer $i \geq 1$. The factor $F_{i+1}$ occurs in the text $F_1 \cdot F_2 \cdot \ldots \cdot F_i$ by the definition of the LZ-factorization. Let $\ell$ and $r$ be positions such that $F_{i+1} = S[\ell \ldots r]$. Let $\ell^*$ and $r^*$ be the implicit keys of the leaves $S[\ell]$ and $S[r]$ in $S$, respectively. Since the algorithm stores $\text{count}(S_i)$ in each node $S_i$, the values of $\ell^*$ and $r^*$ can easily be computed from $\ell$ and $r$.
The algorithm invokes the *split* operation with the input $(S, \ell^*)$. Let $R$ be the rightmost tree in the output. Next the algorithm invokes the *split* operation with the input $(R, r^* - \ell^*)$. The leftmost tree in the output is a Cartesian SLP $F$ that derives $F_{i+1}$. Finally, the algorithm invokes the *merge* operation with $S$ and $F$, and the output is a Cartesian SLP that derives $F_1 \cdot F_2 \cdot \ldots \cdot F_{i+1}$.
**Theorem 3.4.** The expected execution time of the presented algorithm on a text $S$ of length $n$ and its LZ-factorization of size $k$ is $O(k \log n)$. The expected size of the SLP returned by the algorithm is $O(k \log n)$.
**Proof.** At each step, the algorithm applies at most two *split* operations and at most one *merge* operation. It follows that the expected execution time of every step is $O(\log n)$. Since the algorithm consists of exactly $k$ steps, its expected execution time is $O(k \log n)$.
At every step of each operation (*split* or *merge*), the algorithm generates at most one new nonterminal rule. Since the expected time complexity of each operation is $O(\log n)$ and the operations are invoked at most $3k$ times in total, the expected size of the output SLP is $O(k \log n)$.
4. Practical results
4.1. The setup of the experiments. Obviously, the nature of input strings highly affects the compression time and compression ratio. In this paper, we consider three types of strings:
- DNA sequences (downloaded from the DNA Data Bank of Japan, http://www.ddbj.nig.ac.jp);
- Fibonacci strings;
- random strings over a four-letter alphabet.
These types of strings were chosen for the following reasons. Fibonacci strings are known to be one of the best inputs to the SLP construction problem. Thus they allow us to estimate the potential of SLPs as a compression model. Random strings are considered to be incompressible, and, potentially, they are the worst input to the SLP construction problem. DNA sequences form a class of well-compressed strings widely used in practice.
We compare the SLP construction algorithms presented in Sec. 3 with classical compression algorithms from the Lempel–Ziv family. Our test suite contains two implementations of the Lempel–Ziv algorithm [14]: an algorithm with small (32Kb) searching window and an algorithm with infinite searching window. The test suite also contains an implementation of the Lempel–Ziv–Welch algorithm [13]. The source code is available at http://code.google.com/p/overclocking/. All algorithms were run in the same environment on a PC with the following characteristics: Intel Core i7-2600, 3.4GHz, 8Gb operational memory, OS Windows 7 x64.
4.2. The experimental results. As expected, all SLP construction algorithms work extremely fast on Fibonacci strings and construct very compact representations. For example, on the 35th Fibonacci word of size 36.9Mb, the algorithms return the answer within 1ms and build SLPs of size 100.
Figures 4–7 present the main experimental results on random strings and DNA sequences. For convenience, we adopt the following notation for algorithms:
- $\square$ – Lempel–Ziv algorithm with 32Kb search window;
- $\blacksquare$ – Lempel–Ziv algorithm with infinite search window;
- $\Delta$ – Lempel–Ziv–Welch algorithm;
- $\circ$ – Rytter’s algorithm from [9];
- $\bullet$ – modified version of Rytter’s algorithm from Sec. 3.3;
- $\triangle$ – Cartesian SLP construction algorithm from Sec. 3.4.
The performance of a compression algorithm is estimated in terms of the compression ratio and execution time. We calculate the compression ratio as the ratio of the size of the compressed representation to the size of the input text, measured in per cent. For example, the formula for the SLP compression ratio looks like $\frac{|\text{compressed}|}{|\text{original}|} \cdot 100$. We also calculate the number of rotations for SLP construction algorithms that use AVL trees.
Figure 4 shows how the suggested modification of Rytter’s algorithm affects the number of rotations. The modified algorithm uses substantially fewer rotations on texts longer than 10Mb, which shows that the suggested heuristic is efficient. It is very interesting that the number of rotations depends regularly on the size of the input text, while the execution time depends only weakly on the nature of the input text for all algorithms. We have no theoretical explanation of these observations.
As discussed in Sec. 3.3, a gain in the number of rotations does not guarantee a gain in the speed of constructing an SLP, since the modified algorithm spends extra time on calculating the optimal order of concatenations. We compare the speed of all SLP construction algorithms using the following two tests. In the first one, the algorithms stored all SLPs being constructed in the random access memory, while in the second one, SLPs were stored in an external file, so that every rotation of an AVL tree forced I/O operations with the file. Figures 5 and 6 present the results of both tests on DNA sequences and random strings, respectively. It follows from the experimental results that the modified algorithm from Sec. 3.3 works several times faster than Rytter’s algorithm: two times faster on random strings and three times faster on DNA sequences if SLPs are stored in the random access memory, and three times faster on random strings and five times faster on DNA sequences if SLPs are stored in a file system. The algorithm that uses Cartesian trees works faster than Rytter’s algorithm, but slower than the modified algorithm. The reason is that the heights of the constructed Cartesian trees are substantially larger than the heights of the corresponding AVL trees: in our experiments, the average height of an AVL tree is 21.8, while the average height of a Cartesian tree is 47.8. Thus the Cartesian SLP construction algorithm processes more rules than the algorithms using AVL trees, which cancels the gain achieved from the simplicity of maintaining balance in Cartesian trees.
Figure 7 presents the experimental results for the compression ratio achieved by SLP construction algorithms and by classical compression algorithms from the Lempel–Ziv family. We see that the algorithms using AVL trees achieve similar values of the compression ratio, which are on average about twice the compression ratio achieved by the LZW algorithm. It is interesting that the ratio between the compression ratios achieved by the algorithms using AVL trees and the compression ratio achieved by the LZW algorithm does not depend on the type and length of the input text. The compression ratio of the algorithm that uses Cartesian trees is substantially worse than the compression ratios of the other algorithms. In this case, we also observe that the ratio of the compression ratios depends only weakly on the type and length of the input text.
5. Conclusion
Our experimental results show that both Rytter’s algorithm and the modified algorithm achieve the same compression ratio, but the running time of the latter is substantially smaller. Since using a file system is inevitable as the input grows, it is worth noticing that the modified algorithm is more stable with respect to the growth of the input than Rytter’s algorithm.
Fig. 5. The SLP construction time on DNA sequences when SLPs are stored in the random access memory (left) and in an external file (right).
Fig. 6. The SLP construction time on random strings when SLPs are stored in the random access memory (left) and in an external file (right).
In the paper, we present a Cartesian SLP construction algorithm. This algorithm has an execution time similar to that of the other discussed SLP construction algorithms, but yields a substantially worse compression ratio and a substantially larger output tree height. This fact is important for searching algorithms that work directly with compressed representations. Thus our aim to improve the performance of SLP construction using an efficient data structure was not achieved. Now we think that this aim is hard to achieve. It appears that searching for new heuristics based on AVL trees that allow one to construct more compact SLPs is a more productive idea.
Fig. 7. The compression ratio achieved on DNA sequences (left) and on random strings (right).
All tested SLP construction algorithms are worse than the classical compression algorithms from the Lempel–Ziv family in both the achieved compression ratio and the execution time. SLP construction algorithms are of interest (at least from the theoretical point of view), since they provide a well-structured data representation that allows one to solve some classical searching problems without decompressing. However, the question of at what input data volumes SLP searching algorithms become more efficient than classical string searching algorithms is still open. We think that it is one of the main research directions in this area.
ACKNOWLEDGMENTS
The authors would like to thank Professor Mikhail V. Volkov for his critical notes and continuous support. The authors would like to thank the anonymous referee for his remarks and suggested improvements to the original version of the paper.
The authors acknowledge support from the Russian Foundation for Basic Research, grant 10-01-00793.
Translated by the authors.
Program construction by refinements preserving correctness
G. A. Lanzarone* and M. Ornaghi
Gruppo di Elettronica e Cibernetica, Istituto di Fisica dell'Università di Milano, Via Viotti, 5-20133 Milano, Italy
This paper deals with the problem of constructing the final version $P^t$ of a flowchart program through successive refinements $P^2, \ldots, P^{t-1}$ that preserve the correctness proved on its first version $P^1$.
Correctness conditions are associated with $P^1$ in the frame of Manna's formalism. With each refinement $P^i$ a relational (data) structure $S^i$ is associated, and given representation functions $\tau_i$ relate the structures $S^i, S^{i+1}$ of refinements $P^i, P^{i+1}$.
Construction of $P^{i+1}$ from $P^i$ proceeds as follows: (a) every block of $P^i$ is considered as an elementary program over $S^i$ and its correctness conditions are expressed with terms of $S^i$; (b) these correctness conditions are translated by using the representation function $\tau_i$; (c) the translated correctness conditions are transformed into expansion conditions, expressed with terms of $S^{i+1}$ only, for every block of $P^i$; (d) by means of these expansion conditions, expansions of blocks are constructed and connected to obtain $P^{i+1}$.
The constructive character of the above process is emphasized with a detailed example.
In the appendix a discussion relates this paper to other works connecting constructivism and program theory.
(Received July 1973)
Writing a program to meet some specific demands, one is faced with two types of problems: (1) how to build such a program; (2) how to guarantee that the program provides the required features.
With regard to Point 1, the most recent trend (Dijkstra, 1968; Mills, 1971; Wirth, 1971; Woodger, 1971; Dahl, Dijkstra and Hoare, 1972) is to consider the construction of a program as a process of successive approximations, by means of a sequence of programs $P^1, P^2, \ldots, P^t$, where $P^1$ is an outline of the solution of the problem in hand, $P^2, \ldots, P^{t-1}$ are progressive refinements of $P^1$ and $P^t$ is the final version of $P^1$ in the chosen programming language.
As for Point 2, the approach to the problem has been to express the behaviour of a given program by suitable conditions (called correctness conditions), and to give some procedures which permit correctness verification by proving predicate calculus theorems (Floyd, 1967; Manna, 1969a; 1969b); for a more complete reference, see the review (Elspas et al., 1972) and the bibliography (London, 1970).
In this last approach, there is the difficulty of expressing the characteristics required of the program as correctness conditions, because of the discrepancy between the predicate calculus language and the programming language. Besides, the proofs are often cumbersome because the correctness conditions are imposed on the program when it is already written. A way out of these difficulties is to deal with the two types of problems mentioned above by making use of both program construction and program verification methods and by trying to unify them.
So far, only a few examples have been developed along these lines (Hull, Enright and Sedgwick, 1972; Jones, 1972).
The following situation appears when dealing with the problem of proving program correctness in parallel with the construction of the sequence of programs $P^1, P^2, \ldots, P^t$.
The correspondence between the characteristics required of the program and those designed into its first version $P^1$ is verified when proving the total correctness of $P^1$ (with respect to the input and output conditions which express the desired behaviour). $P^1$ must be rich enough to contain explicitly and precisely all the principal functions identified by the programmer as essential to an adequate solution of the problem. On the other hand, it must not contain details useless for this goal, in order to obtain better intelligibility and simplicity of verification, and not to put unnecessary restrictions on the successive refinements. During the construction of the final version $P^t$ through the sequence $P^2, P^3, \ldots, P^{t-1}$, the proofs no longer aim at expressing and verifying the correctness of the successively written programs, but at verifying that each refinement maintains the original characteristics required of the program.
This is in fact the situation examined in the present paper, which treats a method of construction of the refinement $P^{i+1}$ of a program $P^i$, maintaining the characteristics required and already verified in $P^i$.
The construction process takes into account the different features of each semantic level $P^i$, from the one closer to the problem and dealing with 'abstract' structures, i.e. structures which enjoy general properties, to the one expressed in a programming language, concerning itself with computer control management. The question is that of determining how to make such a refinement $P^{i+1}$ from $P^i$ with respect to the three aspects implied in every program: data, operations (functions and predicates), control flow.
The transition from data structures on which $P^i$ is defined to that of $P^{i+1}$ is made by means of a representation function that expresses the properties of such a refinement, i.e. the choices made by the programmer. On this basis, and with regard to the operations available at the successive level, each block of program $P^i$ is expanded into a subprogram of $P^{i+1}$, and the conditions are given under which such expansion is made correctly, that is according to the representation function chosen.
The conditions of correct expansion therefore turn out to be useful in practice as a guide to refinement constructions, and the representation function acts as documentation of the choices made by the programmer in program construction.
It is then shown that, for each level to maintain the correctness proved at the first level with respect to assigned conditions, both conditions of correct expansion and interface conditions which express the retention of the connections between blocks existing at the first level, must be satisfied.
The present paper is a result of a joint research project sponsored by CNR under contract no. 71.02104/75, and by Honeywell Information Systems Italia (HISI) (AST project—Quality Assurance Service of Software Engineering Section—Pregnana Milanese).
*Honeywell Information Systems Italia (HISI).
Volume 18 Number 1
The representation function here defined for refinements is similar to the simulation relation between programs given in Milner (1971). However, the simulation relation (expressed in an algebraic formalism) serves there to break down the correctness proof of a given program A into the proof of correctness (assumed easier) of a program B also given, and the proof that B simulates A; in this paper, instead, the method is centred on showing what procedures must be followed in constructing step by step the final program so that it is automatically correct (with respect to the characteristics required and proved at the first level).
The method is presented by using Manna’s formalisation of program properties in predicate-calculus. It is general in that it doesn’t consider specific models of data structures (these will be treated in future work); a case of characterisation of semantic levels and their hierarchical relationships is given informally in Dijkstra, (1968b).
To illustrate the presented method, an example is shown and discussed at the end of this paper.
1. Introductory definitions
In the Introduction we said that a program semantic level is characterised by the structure on which it is defined; more precisely:
Definition 1:
A structure is a triple: \( S = (D, \mathcal{F}, \mathcal{P}) \), where:
\( D \) is a domain (characterising data on which the program operates);
\( \mathcal{F} \) is a set of functions \( f: D \to D \) (total over \( D \));
\( I \in \mathcal{F} \) (I is the identity function);
\( \mathcal{P} \) is a set of predicates \( p: D \to \{0, 1\} \) (total over \( D \)).
Definition 2:
A program \( P \) on structure \( S = (D, \mathcal{F}, \mathcal{P}) \) is a flowchart with the following four types of statements (or blocks):
- (a): \( y \leftarrow f(x) \) (\( f \in \mathcal{F} \))
- (b): \( y \leftarrow g(y) \) (\( g \in \mathcal{F} \))
- (c): \( p(y) \) (\( p \in \mathcal{P} \))
- (d): \( z \leftarrow h(y) \) (\( h \in \mathcal{F} \))
with the condition that in the flowchart there is only one statement of type (a) (initial statement), one or more statements of type (d) (final statements), and none, one or more statements of type (b) (assignment statements) and of type (c) (test statements).
We will call \( x \) input variable(s), \( y \) program variable(s), \( z \) output variable(s) (\( D \) can be thought of as a space of scalars or as a space of \( n \)-ples: \( D \times D \times \ldots \times D \)).
Execution of program \( P \) is defined according to the connections between blocks, in the usual way: given an input value \( x = \xi \) (\( \xi \in D \)), the initial statement assigns the value \( f(\xi) \) to \( y \), then passes control to the next block; if this is an assignment statement, the current value of \( y \) is modified accordingly and control passes to the next block; if it is a test statement, the next block is selected depending on the value of \( p \) on the current value of \( y \), and so on.
Computation terminates only if a final block is reached; in this case, the final value \( \eta = h(\xi) \) is assigned to \( z \) (\( \xi \) is the current value of \( y \) before execution of final block and \( h \) is the function it performs).
In this fashion, program \( P \) computes the function \( P: D \to D \) defined in the following way: let \( \xi, \eta \in D \); \( P(\xi) \) is defined if and only if the computation relative to the initial value \( x = \xi \) terminates with a final value \( z = \eta \) and \( \eta = P(\xi) \) holds.
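The execution rule above can be made concrete with a small interpreter. The encoding below (block kinds, labels, the `run` function and the mod-3 example) is an illustrative choice of ours, not the paper's notation; it covers the four statement types of Definition 2.

```python
# Hypothetical encoding of a flowchart program: each block is a tuple
# (kind, fn, next), where `next` is a block label, or, for test blocks,
# a pair (true_label, false_label).
def run(program, start, x):
    kind, f, nxt = program[start]
    assert kind == 'init'              # unique initial statement: y <- f(x)
    y, label = f(x), nxt
    while True:
        kind, f, nxt = program[label]
        if kind == 'assign':           # y <- g(y)
            y, label = f(y), nxt
        elif kind == 'test':           # branch on p(y)
            label = nxt[0] if f(y) else nxt[1]
        elif kind == 'final':          # z <- h(y); computation terminates
            return f(y)

# Example: compute x mod 3 using the four statement types.
prog = {
    0: ('init',   lambda x: x,      1),
    1: ('test',   lambda y: y >= 3, (2, 3)),
    2: ('assign', lambda y: y - 3,  1),
    3: ('final',  lambda y: y,      None),
}
print(run(prog, 0, 10))   # -> 1
```

If the control flow never reaches a final block, `run` loops forever, matching the convention that the computed function is then undefined on that input.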
Now let \( \phi: D \to \{T, F\} \) and \( \psi: D \times D \to \{T, F\} \) be two (total) predicates such that: \( \forall x \exists z (\phi(x) \Rightarrow \psi(x, z)) \); they express the required input/output behaviour of the program. We use the following definition of total correctness given in Manna (1969a):
Definition 3:
A program \( P \) is totally correct with respect to the input predicate \( \phi \) and the output predicate \( \psi \) if and only if, for every \( \xi \) such that \( \phi(\xi) = T \), \( P(\xi) \) is defined and, making \( \eta = P(\xi), \psi(\xi, \eta) = T \) holds.
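On a finite slice of \( D \), Definition 3 can be checked by brute force. The following sketch uses illustrative names and a toy specification of our own; \( P \) is given directly as the function computed by some program, and non-termination is approximated by an exception.

```python
# Brute-force check of total correctness on a finite slice of the domain.
def totally_correct(P, phi, psi, domain):
    for xi in domain:
        if phi(xi):
            try:
                eta = P(xi)            # P(xi) must be defined (terminate)
            except Exception:
                return False
            if not psi(xi, eta):       # and psi(xi, eta) must hold
                return False
    return True

# Toy instance: P computes the square; the spec asks output >= input for x >= 1.
print(totally_correct(lambda x: x * x,
                      lambda x: x >= 1,
                      lambda x, z: z >= x,
                      range(-5, 50)))   # -> True
```

Such a check is of course only a test on the chosen slice, not a proof over all of \( D \).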
The theorem of correctness given in Manna (1969a) holds, which states that total correctness of a program can be reduced to the unsatisfiability (on structure \( S \) on which the program is defined) of a predicate-calculus formula uniquely associated with the program.
The usual techniques of both formal and informal verification (given for example in Floyd, 1967; Manna, 1969b; Maurer, 1972) hold too.
We assumed functions \( f \in \mathcal{F} \) and predicates \( p \in \mathcal{P} \) total over \( D \); however, the correctness theorem and the quoted verification techniques can easily be extended to functions and predicates having domains \( D(f) \subseteq D, D(p) \subseteq D \) decidable, by introducing the special statement
[Diagram of a flowchart with a loop]
(see Manna, 1970). This is important from our point of view, because in developing programs by successive refinements, functions and predicates are generally partial at each level; we are however allowed (since we consider only total correctness, not partial correctness) to assume that a sufficiently wide (possibly maximal) domain can be determined, over which functions and predicates are total.
Being that \( D(f) \subseteq D \), we must consider, in the block by block expansion process, some interface conditions, as will be specified in the following.
2. Relation between \( P^i \) and \( P^{i+1} \)
The aim of this section is to give a precise definition, based on the concept of representation function \( \tau_i \), of what is meant by: \( P^{i+1} \) is a correct refinement of \( P^i \) (when passing from one structure to another); we will use the expression: '\( P^{i+1} \) represents \( P^i \) with respect to (wrt) \( \tau_i \)'. Let \( P^i \) be a program over \( D_i \) (i.e. on a structure \( S_i \) having domain \( D_i \)) with input variables \( x^i \), program variables \( y^i \) and output variables \( z^i \); let \( P^i \) be totally correct wrt a given input predicate \( \phi^i(x^i) \) and a given output predicate \( \psi^i(x^i, z^i) \); let \( P^{i+1} \) be a program over \( D_{i+1} \) with input variables \( x^{i+1} \), program variables \( y^{i+1} \) and output variables \( z^{i+1} \); in addition, let the following function be assigned:
\[ \tau_i: D_{i+1} \to D_i \]
We will call \( \tau_i \) representation function, in the sense that each element \( \eta \in D_{i+1} \) represents by means of \( \tau_i \) a unique element \( \tau_i(\eta) \in D_i \).
We have assumed that \( \tau_i: D_{i+1} \to D_i \) is a total function over \( D_{i+1} \); if \( \tau_i \) is a partial function over a domain \( D(\tau_i) \subseteq D_{i+1} \), we will extend \( \tau_i \) over \( D_{i+1} \) by stating that:
\[ \forall x^{i+1}(x^{i+1} \notin D(\tau_i) \Leftrightarrow \omega = \tau_i(x^{i+1})) \]
assuming that the domain of \( \tau_i \) is decidable. We will also assume that the 'undefined' element \( \omega \) is such that \( \omega \notin D_i \) and that, for each predicate \( \phi : D_i \rightarrow \{T, F\} \), \( \psi : D_i \times D_i \rightarrow \{T, F\} \), etc., it is:
\[
\phi(\omega) = F; \ \forall x \forall z (\psi(\omega, z) = \psi(x, \omega) = F) \text{ etc.}
\]
In this way, results about total functions \( \tau_i \) will also hold for partial functions \( \tau_i \) extended in the above fashion.
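This \( \omega \) convention can be sketched directly. `totalize`, `lift` and the tally example below are our illustrative names, assuming, as above, that the domain of the partial function is decidable.

```python
# Totalizing a partial representation function with an 'undefined' element
# omega, on which every lifted predicate evaluates to False.
OMEGA = object()                      # omega, an element outside D1

def totalize(tau, dom):
    # extend tau over all inputs: outside its decidable domain, return omega
    return lambda x2: tau(x2) if x2 in dom else OMEGA

def lift(pred):
    # predicates over D1 become False as soon as any argument is omega
    return lambda *args: False if any(a is OMEGA for a in args) else pred(*args)

# toy instance: tau maps digit strings to integers, defined only on {'0','1','2'}
tau = totalize(lambda s: int(s), {'0', '1', '2'})
ge = lift(lambda a, b: a >= b)
print(ge(tau('2'), tau('1')), ge(tau('9'), tau('1')))   # -> True False
```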
**Definition 4:**
\( P^{i+1} \) represents \( P^i \) wrt the representation function \( \tau_i \), if and only if there exist two predicates \( \phi^{i+1}(x^{i+1}) \) and \( \psi^{i+1}(x^{i+1}, z^{i+1}) \), wrt which \( P^{i+1} \) is totally correct, and such that:
\[
(a) \ \forall x^i \exists x^{i+1} (\phi^i(x^i) \Rightarrow (x^i = \tau_i(x^{i+1}) \land \phi^{i+1}(x^{i+1})))
\]
\[
(b) \ \forall x^{i+1} \forall z^{i+1} ((\phi^{i+1}(x^{i+1}) \land \psi^{i+1}(x^{i+1}, z^{i+1})) \Rightarrow \psi^i(\tau_i(x^{i+1}), \tau_i(z^{i+1})))
\]
The preceding formulae (a) and (b) express the relation which must exist, by means of \( \tau_i \), between the input conditions of \( P^i \) and those of \( P^{i+1} \) (expression (a)) and between the input/output relations which must be satisfied by \( P^i \) and \( P^{i+1} \) respectively (expression (b)). Such relations correspond to those required in Milner (1971) for simulation.
In Definition 4, \( \phi^i \) and \( \psi^i \) are given predicates and the existence of \( \phi^{i+1} \) and of \( \psi^{i+1} \), wrt which \( P^{i+1} \) is totally correct, is required. It can be seen that the problem of deciding whether \( P^{i+1} \) represents \( P^i \) wrt \( \tau_i \), stated in the above fashion, is a second-order problem; however, from our constructive point of view, this problem is irrelevant.
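On finite domains, conditions (a) and (b) of Definition 4 can be checked exhaustively. The sketch below is ours (names and the tally example included); it treats the refined program as an executable function, so checking (b) through its computed outputs is a reasonable stand-in for quantifying over \( \psi^{i+1} \).

```python
# Finite-domain sketch of Definition 4: does P2 "represent" P1 wrt tau?
# tau : D2 -> D1 is the representation function; P2 the refined program.
def represents(phi1, psi1, phi2, P2, tau, D1, D2):
    # (a) forall x1: phi1(x1) => exists x2: x1 = tau(x2) and phi2(x2)
    a = all(any(tau(x2) == x1 and phi2(x2) for x2 in D2)
            for x1 in D1 if phi1(x1))
    # (b) forall x2: phi2(x2) => psi1(tau(x2), tau(P2(x2)))
    b = all(psi1(tau(x2), tau(P2(x2))) for x2 in D2 if phi2(x2))
    return a and b

# Toy instance: D1 = naturals mod 5, represented by D2 = strings of tallies.
D2 = ['', '|', '||', '|||', '||||']
tau = len
P2 = lambda s: s + '|' if len(s) < 4 else ''   # successor mod 5, on tallies
phi1 = lambda n: True
psi1 = lambda n, m: m == (n + 1) % 5           # P1's spec: successor mod 5
phi2 = lambda s: True
print(represents(phi1, psi1, phi2, P2, tau, range(5), D2))   # -> True
```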
The following result can be immediately proved.
**Theorem 1:**
If \( P^2 \) represents \( P^1 \) wrt the representation function \( \tau_1 \) and if \( P^3 \) represents \( P^2 \) wrt the representation function \( \tau_2 \), then \( P^3 \) represents \( P^1 \) wrt the representation function \( \tau_1 \cdot \tau_2 \).
In the following section we will show how to pass from a program \( P^i \) over \( D_i \) to a program \( P^{i+1} \) over \( D_{i+1} \) in such a way that \( P^{i+1} \) represents \( P^i \) wrt a given representation function \( \tau_i \); the transitivity of the representation relation given by the above result guarantees that, in passing from \( P^i \) to \( P^{i+1} \) (for \( 1 \leq i \leq t-1 \)), we are allowed to ignore what was previously done in passing from one to the other of the preceding levels.
### 3. Building \( P^{i+1} \) from \( P_i \)
To abbreviate notation we will examine in this section only the programs \( P^1 \) and \( P^2 \) of the sequence \( P^1, P^2, \ldots, P^t \); however, the following is valid for any \( P^i \) and \( P^{i+1} \). The construction of \( P^2 \) from \( P^1 \) is done in the following way:
1. **Statements of \( P^1 \) are numbered.** For example, let \( P^1 \) be the following program (on the first-level structure \( \langle D_1, \mathcal{F}_1, \mathcal{P}_1 \rangle \)):

In what follows, for simplicity, we will refer to program \( P^1 \) above.
2. **Statements 0, 1, 2, 3 are considered as the following elementary programs \( P_0^1, P_1^1, P_2^1, P_3^1 \) respectively:**

The dummy variables \( y_j^1 \) \( (1 \leq j \leq 3) \) have been introduced to express formally the input and output predicates of the programs \( P_j^1 \) \( (0 \leq j \leq 3) \). Such predicates are clearly the following:
- for \( P_0^1 \): \( x^1 \in D(f_0^1) \) (input predicate); \( y^1 = f_0^1(x^1) \) (output predicate);
- for \( P_1^1 \): \( y^1 \in D(p_1^1) \) (input); \( y_1^1 = y^1 \wedge p_1^1(y^1) \) (output of True path); \( y_1^1 = y^1 \wedge \neg p_1^1(y^1) \) (output of False path);
- for \( P_2^1 \): \( y^1 \in D(f_2^1) \) (input); \( z^1 = f_2^1(y^1) \) (output);
- for \( P_3^1 \): \( y^1 \in D(f_3^1) \) (input); \( y_1^1 = f_3^1(y^1) \) (output).
3. Each elementary program \( P_j^1 \) is expanded into a corresponding program \( P_j^2 \) on \( \langle D_2, \mathcal{F}_2, \mathcal{P}_2 \rangle \). The structure \( \langle D_2, \mathcal{F}_2, \mathcal{P}_2 \rangle \) is defined by the programmer's choices, and the relation between \( D_1 \) and \( D_2 \) is stated by the choice and construction of the representation function \( \tau_1 : D_2 \rightarrow D_1 \).
This relation constitutes a guide to the construction of expansions (in a sense that will be made more precise in the appendix) by means of the following predicates which assure correctness of step 3. (We postpone the treatment of this subject to the next section, so as not to interrupt the present exposition):
For expansion \( P_0^2 \):

For expansion \( P_1^2 \):

Analogously for the predicates \( \phi_2^2, \psi_2^2, \phi_3^2, \psi_3^2 \); the dummy variables \( y_1^2, y_2^2 \), etc. have been introduced in order to formally express the input and output predicates of the expansions, which are therefore programs of the following type:

4. The various expansions of the single statements of $P^1$ are merged into one program $P^2$ as follows:
(a) the exit of $P^2_3$ is connected to the entrance of $P^2_1$, eliminating the dummy assignments $y_1^2 \leftarrow y^2$ and $y^2 \leftarrow y_1^2$; such an elimination is symbolically indicated by:
(b) the exit of $P^2_0$ is connected to the entrance of $P^2_1$ in the same way;
(c) the exit of $P^2_1$ is connected to the entrance of $P^2_2$ by the elimination:
(d) the exit of $P^2_1$ is connected to the entrance of $P^2_3$ by the elimination:
A program $P^2$ with input variables $x^2$, program variables $y^2$ and output variables $z^2$ is thus obtained.
4. Conditions of correct expansion
Note that:
(a) If block expansions are such as to satisfy the correctness conditions $\phi^2_j$, $\psi^2_j$, the obtained program $P^2$ represents $P^1$, and therefore one is sure not to have overlooked any connection between the original blocks (in the opposite case, errors can arise, since the domains of the various blocks are different);
(b) On the other hand, in practice a strict requirement like this can lead in particular cases (especially in programs of great size) to a superfluous increase in the number of variables and/or instructions;
(c) To proceed in a more flexible way, it is preferable to adopt the condition that each block expansion be a representation of the block itself, according to Definition 4. However, this will also imply verification of interface conditions (i.e. that each exit from one expansion is accepted as input by the next one).
We will then show, first, what the interface conditions mentioned in point (c) above are; as a consequence, we will demonstrate the validity of statement (a). Here also we refer to the expansion of program $P^1$ given in the previous section.
Definition 5:
Program $P^2_j$ is a correct expansion, wrt $\tau_1$, of statement $j$ of $P^1$ if and only if $P^2_j$ represents the elementary program $P^1_j$ wrt $\tau_1$, according to Definition 4.
Definition 6:
Given that $\phi^2_j$, $\psi^2_j (0 \leqslant j \leqslant 3)$ are the correctness predicates assigned for the expansions $P^2_j$ (they represent the elementary programs $P^1_j$ wrt $\tau_1$), the interface conditions between expansions are the following:
(a) interface between $P^2_0$ and $P^2_1$:
$$C_1(\phi^2_0, \psi^2_0, \phi^2_1) \equiv \forall x^1 \forall x^2 \forall y_1^2 [(x^1 \in D(f^1_0) \land f^1_0(x^1) \in D(p^1_1))$$
$$\Rightarrow (x^1 = \tau_1(x^2) \land \phi^2_0(x^2) \land \psi^2_0(x^2, y_1^2) \Rightarrow \phi^2_1(y_1^2))] .$$
(b) interface between $P^2_1$ and $P^2_2$:
$$C_2(\phi^2_1, \psi^2_1, \phi^2_2) \equiv \forall y^1 \forall y^2 \forall y_1^2 [(y^1 \in D(p^1_1) \land$$
$$p^1_1(y^1) \land y^1 \in D(f^1_2)) \Rightarrow$$
$$(y^1 = \tau_1(y^2) \land \phi^2_1(y^2) \land \psi^2_1(y^2, y_1^2) \Rightarrow \phi^2_2(y_1^2))] .$$
The interface conditions between $P^2_1$ and $P^2_3$ and between $P^2_3$ and $P^2_1$ are obtained in a similar way.
The following theorem can be proved by induction on the execution sequences of $P^1$ (for the definition of execution sequence see for instance Manna, 1969b).
**Theorem 2:**
If program $P^1$ is totally correct wrt the input predicate $\phi^1(x^1)$ and the output predicate $\psi^1(x^1, z^1)$; if all the expansions $P^2_j$ of the statements $j$ of program $P^1$ are 'correct expansions' in the sense of Definition 5; if all the interface conditions $C_1(\phi^2_0, \psi^2_0, \phi^2_1)$, $C_2(\phi^2_1, \psi^2_1, \phi^2_2)$, etc. are true; and if $P^2$ is constructed by connecting to each other the different expansions $P^2_j$ as stated in Point 4 of the previous section; then $P^2$ is totally correct wrt the input predicate $\phi^1(\tau_1(x^2)) \land \phi_0^2(x^2)$ and the output predicate $\psi^1(\tau_1(x^2), \tau_1(z^2))$.
**Corollary 1:**
$P^2$ represents $P^1$ wrt $\tau_1$.
The proof of corollary 1 is immediate, when considering that:
(a) $\phi^1$ is the input predicate of $P^1$, for which the condition:
$$\forall x^1 (\phi^1(x^1) \Rightarrow x^1 \in D(f^1_0))$$
must hold;
(b) $P^2_0$ is a correct expansion of $P^1_0$, for which the condition:
$$\forall x^1 \exists x^2 (x^1 \in D(f^1_0) \Rightarrow (x^1 = \tau_1(x^2) \land \phi^2_0(x^2)))$$
must hold (because of Definitions 4 and 5).
Considering that the program $P^1$, to which we have referred till now, contains every type of statement, the expansion procedure, Theorem 2 and Corollary 1 given above hold in general. Since the representation relation is transitive (Theorem 1), the following corollary holds.
**Corollary 2:**
If in the sequence $P^1, P^2, \ldots, P^t$ of programs, $P^{i+1}$ is obtained from $P^i$ ($1 \leqslant i \leqslant t - 1$) according to Steps 1, 2, 3, 4 given above, then, \( \tau_i \) being the representation function \( \mathcal{D}_{i+1} \to \mathcal{D}_i \),
$P^t$ represents $P^1$ wrt the representation function:
\[
\tau_1 \cdot \tau_2 \cdot \ldots \cdot \tau_{t-1} : \mathcal{D}_t \to \mathcal{D}_1 .
\]
The correctness predicates of \( P^t \) are of the form:
- input predicate: \( \varphi^1(\tau_1 \cdot \tau_2 \cdot \ldots \cdot \tau_{t-1}(x^t)) \land \Phi(x^t) \)
- output predicate: \( \psi^1(\tau_1 \cdot \tau_2 \cdot \ldots \cdot \tau_{t-1}(x^t), \tau_1 \cdot \tau_2 \cdot \ldots \cdot \tau_{t-1}(z^t)) \)
where \( \Phi(x^t) \) is a predicate which depends on the \( t-1 \) refinements made; for example, for \( t = 3 \), \( \Phi(x^3) = \varphi_0^2(\tau_2(x^3)) \land \varphi_0^3(x^3) \).
Some remarks are opportune at this point with regard to the validity of Theorem 2:
(a) we now show that, if the expansions are totally correct wrt the predicates \( \varphi^2_j, \psi^2_j \), then verification of the interface conditions is not necessary. In fact, let
\[
\begin{align*}
C_3(\varphi^2_1) & \iff \forall y^2 (\tau_1(y^2) \in D(p^1_1) \Rightarrow \varphi^2_1(y^2)) \\
C_4(\varphi^2_1) & \iff \forall x^2 \forall y_1^2 ((\tau_1(x^2) \in D(f^1_0) \land f^1_0(\tau_1(x^2)) \in D(p^1_1)) \\
& \qquad \Rightarrow (f^1_0(\tau_1(x^2)) = \tau_1(y_1^2) \Rightarrow \varphi^2_1(y_1^2)))
\end{align*}
\]
then
\[
C_3(\varphi^2_1) \Rightarrow C_4(\varphi^2_1),
\]
and if \( P^2_0 \) is totally correct wrt \( \varphi^2_0 \) and \( \psi^2_0 \), then
\[
C_4(\varphi^2_1) \Rightarrow C_1(\varphi^2_0, \psi^2_0, \varphi^2_1)
\]
(analogously for the expansions of the other blocks). On the other hand, the conditions \( C_3 \) are automatically satisfied when the expansions are correct wrt the predicates \( \varphi^2_j, \psi^2_j \); therefore, in this case, all the conditions of Theorem 2 are verified. Note that \( C_3 \) and \( C_4 \) are only sufficient conditions for \( C_1 \), and it is often easier to verify them.
(b) The conditions \( C_1 \), too, are sufficient but not necessary for the validity of the theorem. Necessary conditions can be determined on the basis of \( \varphi^2_j, \psi^2_j \) only when these express a functional connection, that is, such that \( \forall x \exists! z (\varphi^2_j(x) \Rightarrow \psi^2_j(x, z)) \); in general the connection is a relation, because of the constructive character of the procedure, i.e. the fact that the correctness conditions \( \varphi^2_j, \psi^2_j \) of the expansions are determined before making the block expansions themselves.
5. Example
In this example we will drop some of the assumptions about formalism that were used in the previous exposition. Specifically, instead of considering single domains \( D_1, D_2 \), we will consider several variables of different types over \( D_1 \) and \( D_2 \), and the representation function will therefore also express the relationship between those variables. The necessary modifications will be specified as they appear.
1. First-level program \( P^1 \)
The problem is the following: 'A symmetric matrix with positive or null elements has to be multiplied by itself until the maximum of its elements is greater than or equal to an assigned number \( \alpha \in R^+ \)' (\( R^+ \) is the set of positive reals).
The minimum requirement is that in the space \( S \) of symmetric \( N \times N \) matrices an internal product is defined, and that with each matrix \( s \in S \) a number \( \|s\| \in R^+ \) (the maximum of its elements) is uniquely associated, to be compared with \( \alpha \in R^+ \).
The first-level structure is therefore:
\[
\mathcal{S} = \langle S \cup R^+, \{\|\cdot\|, \cdot, \geq\} \rangle
\]
with the following properties:
1. \( \langle R^+, \geq \rangle \) is the structure of the positive or null reals with the ordering \( \geq \);
2. \( \forall s \in S,\ n \in \eta:\ s^n \in S \);
3. \( \forall s \in S: \|s\| \in R^+ \);
4. a non-empty subset \( S_1 \subseteq S \) exists, such that:
\[
\forall s \in S_1: \lim_{n \to \infty} \|s^n\| > \alpha .
\]
Point 4 corresponds to the requirement that the problem have a solution.
Let's now consider the following program \( P^1 \) on \( \mathcal{S} \):
\[
\begin{array}{ll}
\text{START} & \\
0: & (a, b, c) \leftarrow (x_1,\ x_1,\ \|x_1\|) \\
1: & c \geq \alpha\ ? \quad (\text{true} \rightarrow 2,\ \text{false} \rightarrow 3) \\
2: & (z_1, z_2) \leftarrow (a, c);\ \text{HALT} \\
3: & (a, c) \leftarrow (a \cdot b,\ \|a \cdot b\|);\ \text{go to } 1
\end{array}
\]
where \( x_1 \); \( a, b, c \); and \( z_1, z_2 \) are the input, program and output variables, respectively.
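As a concrete illustration, the control structure of \( P^1 \) can be transcribed directly into executable form. This is a sketch only: plain lists of lists stand in for the space \( S \), the test uses "greater than or equal" as in the problem statement, and the helper names are ours, not from the text.

```python
def mat_mul(a, b):
    """Product of two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def norm(s):
    """||s||: the maximum of the elements of s."""
    return max(max(row) for row in s)

def p1(x1, alpha):
    # block 0: (a, b, c) <- (x1, x1, ||x1||)
    a, b, c = x1, x1, norm(x1)
    while c < alpha:          # block 1: test c >= alpha
        a = mat_mul(a, b)     # block 3: (a, c) <- (a.b, ||a.b||)
        c = norm(a)
    return a, c               # block 2: (z1, z2) <- (a, c)
```

For instance, with `x1 = [[1, 2], [2, 1]]` and `alpha = 5`, one multiplication suffices: the result is `[[5, 4], [4, 5]]` with maximum 5.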
The properties of the structure \( \mathcal{S} \) given above are sufficient to demonstrate that \( P^1 \) is totally correct with respect to the following predicates:
\[
\varphi^1(x_1) \equiv x_1 \in S_1;
\]
\[
\psi^1(x_1, z_1, z_2) \equiv \exists n\, (z_1 = x_1^n \land z_2 = \|x_1^n\| \land \forall m\, (1 \leq m < n \Rightarrow \|x_1^m\| < \alpha) \land z_2 \geq \alpha)
\]
\[
(\alpha \in R^+;\ n, m \in \eta,\ \eta \text{ the set of natural numbers.})
\]
The space \( S \) of symmetric matrices over \( R^+ \) satisfies the properties of the structure \( \mathcal{S} \), and the predicates \( \varphi^1 \) and \( \psi^1 \) formally express the requirements of the problem definition; in addition, \( P^1 \) is easily understandable and can therefore be considered a good statement of the problem. Nothing is said, at the first level, about how the functions \( \cdot \) and \( \|\cdot\| \) will be realised; this will depend on how matrices are represented in core memory. However, since the properties required of \( \mathcal{S} \) hold, we are now sure that the problem has a solution, and an outline of it is \( P^1 \). The variables involved are of different types, i.e., \( x_1, a, b, z_1 \) are individual variables over \( S \), while \( c, z_2 \) are individual variables over \( R^+ \). The domains of the various blocks are therefore different, and we have to take this into account not only in the verification, but also in the expansion process; to achieve this, we can make use of the following input/output conditions for the elementary programs \( p_i^1 \) \( (0 \leq i \leq 3) \):
\[
\varphi_0(x_1) \equiv (x_1 \in S_1); \qquad \psi_0(x_1, a_1, b_1, c_1) \equiv (a_1 = b_1 = x_1 \land c_1 = \|x_1\|);
\]
\[
\varphi_1(a_1, b_1, c_1) \equiv (c_1 \in R^+);
\]
\[
\psi_1'(a_1, b_1, c_1, a_2, b_2, c_2) \equiv (a_1 = a_2 \land b_1 = b_2 \land c_1 = c_2 \land c_2 \geq \alpha);
\]
\[
\psi_1''(a_1, b_1, c_1, a_3, b_3, c_3) \equiv (a_1 = a_3 \land b_1 = b_3 \land c_1 = c_3 \land c_3 < \alpha);
\]
\[
\varphi_2(a_2, b_2, c_2) \equiv (a_2 \in S \land c_2 \in R^+);
\]
\[
\psi_2(a_2, b_2, c_2, z_1, z_2) \equiv (z_1 = a_2 \land z_2 = c_2);
\]
\[
\varphi_3(a_3, b_3, c_3) \equiv (a_3, b_3 \in S \land c_3 \in R^+);
\]
\[
\psi_3(a_3, b_3, c_3, a_1, b_1, c_1) \equiv (a_1 = a_3 \cdot b_3 \land b_1 = b_3 \land c_1 = \|a_1\|).
\]
Here \( \psi_1' \) and \( \psi_1'' \) are the exit conditions of the test block 1 towards blocks 2 and 3, respectively.
We can now deal with the representation of data in the expansions of \( P^1 \).
2. The representation function
To save memory space, only the elements \( s(i, j) \) with \( i \leq j \) are stored, in a vector \( v \). This is done by stating a correspondence between the matrix indices and the vector indices, assigned by the function:
(a) \( f(l) = (i, j) \), with \( j = \max \{ k \mid k(k-1)/2 < l \} \) and \( i = l - j(j-1)/2 \);
(b) \( f^{-1}(i, j) = l \), with \( l = j(j-1)/2 + i \) for \( i \leq j \), and
\( f^{-1}(i, j) = f^{-1}(j, i) \) for \( i > j \).
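These index functions can be transcribed directly (1-based indices, as in the text; the loop realising the maximum is our choice of implementation):

```python
def f(l):
    """Map the vector index l to matrix indices (i, j) with i <= j."""
    j = 1
    while (j + 1) * j // 2 < l:   # largest j with j(j-1)/2 < l
        j += 1
    return l - j * (j - 1) // 2, j

def f_inv(i, j):
    """Map matrix indices (i, j) to the vector index l."""
    if i > j:                     # symmetry: f_inv(i, j) = f_inv(j, i)
        i, j = j, i
    return j * (j - 1) // 2 + i
```

For example, `f(4)` yields `(1, 3)`, and `f_inv` inverts `f` for every vector index.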
Then there exists a one-to-one correspondence between the space \( V \) of vectors of dimension \( N(N+1)/2 \) and the space \( S \) of symmetric \( N \times N \) matrices:
\( \pi : V \leftrightarrow S \), which is assigned as follows:
\( \forall v \in V\ (s = \pi(v) \iff s(f(l)) = v(l) \text{ for } 1 \leq l \leq N(N+1)/2) \);
\( \forall s \in S\ (v = \pi^{-1}(s) \iff v(f^{-1}(i, j)) = s(i, j) \text{ for } 1 \leq i \leq j \leq N) \).
\( \pi \) has the following properties:
5. \( \forall v(v \in V \iff \pi(v) \in S) \);
6. \( \forall v_1, v_2 \in V(\pi(v_1) = \pi(v_2) \iff v_1 = v_2) \).
The second-level domain is therefore \( \mathcal{D}_2 = V \cup R^+ \). We will consider for \( P^2 \) the following variables:
- \( x_2, i, j, \ldots \) input variables;
- \( e, f, d, i, j, \ldots \) program variables;
- \( z_{2,1}, z_{2,2}, i, j, \ldots \) output variables.
\( i, j, \ldots \) are variables over \( \eta \) and have been introduced in order to construct the operations acting on single elements of the vectors \( v \in V \). Their final number will be known only at the end of the expansion construction, and it is unimportant at this point, since only \( x_2, e, f, d, z_{2,1}, z_{2,2} \) represent the first-level variables \( x_1, a, b, c, z_1, z_2 \), in the way specified by the following functions:
\( \tau_i(x_2, i, j, \ldots) = x_1 \) if and only if \( x_1 = \pi(x_2) \);
\( \tau_p(e, f, d, i, j, \ldots) = (a, b, c) \) if and only if \( a = \pi(e) \), \( b = \pi(f) \) and \( c = d \);
\( \tau_o(z_{2,1}, z_{2,2}, i, j, \ldots) = (z_1, z_2) \) if and only if \( z_1 = \pi(z_{2,1}) \) and \( z_2 = z_{2,2} \).
\( \tau_i \) connects the input variables, \( \tau_p \) the program variables, and \( \tau_o \) the output variables. It is necessary to distinguish between \( \tau_i, \tau_p, \tau_o \), since the input, program and output variables are not the same; however, both for \( P^1 \) and for \( P^2 \) the program variables are the same for all the blocks, and therefore the formulae given in the previous sections still hold.
3. The correctness predicates
Predicates \( \varphi_j^2, \psi_j^2 \) \( (0 \leq j \leq 3) \) are obtained from \( \varphi_j \) and \( \psi_j \) by means of \( \tau_i, \tau_p, \tau_o \) in the following way:
\[ \varphi_0^2(x_2, i, j, \ldots) = \varphi_0(\tau_i(x_2, i, j, \ldots)) \equiv (x_2 \in V) \]
(on the basis of property 5 of \( \pi \));
\[ \psi_0^2(x_2, i, j, \ldots, e_1, f_1, d_1, \ldots) = \psi_0(\tau_i(x_2, i, j, \ldots), \tau_p(e_1, f_1, d_1, \ldots)) \equiv (e_1 = f_1 = x_2 \land d_1 = \|\pi(x_2)\|) \]
(on the basis of properties 5 and 6 of \( \pi \)).
Analogously:
\[ \varphi_1^2(e_1, f_1, d_1, \ldots) \equiv d_1 \in R^+; \]
\[ \psi_1'^2(e_1, f_1, d_1, \ldots, e_2, f_2, d_2, \ldots) \equiv e_1 = e_2 \land f_1 = f_2 \land d_1 = d_2 \land d_2 \geq \alpha; \]
\[ \psi_1''^2(e_1, f_1, d_1, \ldots, e_3, f_3, d_3, \ldots) \equiv e_1 = e_3 \land f_1 = f_3 \land d_1 = d_3 \land d_3 < \alpha; \]
\[ \varphi_2^2(e_2, f_2, d_2, \ldots) \equiv e_2 \in V \land d_2 \in R^+; \]
\[ \psi_2^2(e_2, f_2, d_2, \ldots, z_{2,1}, z_{2,2}) \equiv z_{2,1} = e_2 \land z_{2,2} = d_2; \]
\[ \varphi_3^2(e_3, f_3, d_3, \ldots) \equiv e_3, f_3 \in V \land d_3 \in R^+; \]
\[ \psi_3^2(e_3, f_3, d_3, \ldots, e_1, f_1, d_1, \ldots) \equiv (\pi(e_1) = \pi(e_3) \cdot \pi(f_3) \land f_1 = f_3 \land d_1 = \|\pi(e_1)\|). \]
4. Construction of expansions
Expansions \( P_j^2 \) are made on the structure:
\[ \mathcal{S}_2 = \langle V \cup R^+, \{ +, \cdot, \mathrm{Max}, \geq, \cdot(\cdot) \} \rangle \]
where \( +, \cdot, \mathrm{Max}, \geq \) are defined in the usual fashion on \( R^+ \), and \( v(l) \) is the extraction of the \( l \)-th component of the vector \( v \).
We have to express the predicates \( \varphi_j^2 \) and \( \psi_j^2 \) on the structure \( \mathcal{S}_2 \); this will constitute a guide to the construction of the expansions \( P_j^2 \). The only terms appearing in the above predicates \( \varphi_j^2 \) and \( \psi_j^2 \) that are not already expressed on \( \mathcal{S}_2 \) are:
(I) \( \|\pi(x_2)\| \); (II) \( \pi(e_1) = \pi(e_3) \cdot \pi(f_3) \); (III) \( \|\pi(e_1)\| \).
Besides, having defined \( \|s\| = \mathrm{Max}\, \{ s(i,j) \mid 1 \leq i \leq N \text{ and } 1 \leq j \leq N \} \), the expression
\( \|\pi(v)\| = \mathrm{Max}\, \{ v(l) \mid 1 \leq l \leq N(N+1)/2 \} \)
holds for every vector \( v \) of dimension \( N(N+1)/2 \); this disposes of terms (I) and (III).
As for the relation (II) between \( e_1, e_3, f_3 \), we have to find a vector product \( \otimes \), expressed in terms of the components and such that:
7. \( \forall v_1, v_2 \in V\ (\pi(v_1 \otimes v_2) = \pi(v_1) \cdot \pi(v_2)) \).
In fact, in such a case \( \pi(e_1) = \pi(e_3) \cdot \pi(f_3) \) can be expressed as \( e_1 = e_3 \otimes f_3 \). From the definition of the matrix product,
8. \( s_{12}(i,j) = \sum_{k=1}^{N} s_1(i,k) \cdot s_2(k,j) \) \( (s_{12} = s_1 \cdot s_2) \),
and since the program multiplies only powers of one and the same matrix,
9. \( s_1 \cdot s_2 = s_2 \cdot s_1 \),
so that \( s_{12} \) is again symmetric and it suffices to compute its components with \( i \leq j \). Expressing the matrix elements through the representation, \( s(i,k) = v(f^{-1}(i,k)) \), the product \( \otimes \) is defined component-wise by
10. \( (v_1 \otimes v_2)(j(j-1)/2 + i) = \sum_{k=1}^{N} v_1(f^{-1}(i,k)) \cdot v_2(f^{-1}(k,j)) \) for \( 1 \leq i \leq j \leq N \).
In particular, the relations appearing in \( \psi_3^2 \) become
11. \( d_1 = \mathrm{Max}\, \{ e_1(l) \mid 1 \leq l \leq N(N+1)/2 \} \);
12. \( e_1(j(j-1)/2 + i) = \sum_{k=1}^{N} e_3(f^{-1}(i,k)) \cdot f_3(f^{-1}(k,j)) \) for \( 1 \leq i \leq j \leq N \).
The expansions \( P_j^2 \), correct wrt \( \varphi_j^2 \) and \( \psi_j^2 \), are therefore the following:
The variables \( i, j, \ldots \) do not appear in \( P_1^2 \) and \( P_2^2 \) because no property is required of them and they are not modified (the same holds for \( i, j, f \) in \( P_3^2 \)). These variables could appear in \( P_1^2, P_2^2 \) with arbitrary assignments, but this is unnecessary.
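Under the representation described above, the computation performed by the second-level program \( P^2 \) can be sketched in executable form. This is a reconstruction for illustration only: `f_inv` realises \( f^{-1} \), `otimes` the packed product \( \otimes \), and Python's `max` the norm; the function names are ours.

```python
def f_inv(i, j):
    # packed index of s(i, j); symmetry handles i > j
    if i > j:
        i, j = j, i
    return j * (j - 1) // 2 + i

def otimes(v1, v2, n):
    """Packed product: components (i, j), i <= j, of pi(v1) . pi(v2)."""
    out = [0] * (n * (n + 1) // 2)
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            out[f_inv(i, j) - 1] = sum(
                v1[f_inv(i, k) - 1] * v2[f_inv(k, j) - 1]
                for k in range(1, n + 1))
    return out

def p2(x2, alpha, n):
    # block 0: (e, f, d) <- (x2, x2, Max(x2))
    e, f, d = x2, x2, max(x2)
    while d < alpha:           # block 1: test d >= alpha
        e = otimes(e, f, n)    # block 3: e <- e (x) f, d <- Max(e)
        d = max(e)
    return e, d                # block 2: (z21, z22) <- (e, d)
```

For \( N = 2 \), the packed form of `[[1, 2], [2, 1]]` is `[1, 2, 1]`; with `alpha = 5` the program returns `([5, 4, 5], 5)`, the packed form of the matrix square together with its maximum.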
Finally, merging the expansions \( P_j^2 \) (according to step 4 of Section 3), the second-level program \( P^2 \) is obtained, correct wrt the following predicates (because of what was stated in Section 4):
$$\varphi^2(x_2) \equiv \pi(x_2) \in S_1 ;$$
$$\psi^2(x_2, z_{2,1}, z_{2,2}) \equiv \exists n\, [\pi(z_{2,1}) = \pi(x_2)^n \wedge z_{2,2} = \|\pi(x_2)^n\| \wedge \forall m\, (1 \leq m < n \Rightarrow \|\pi(x_2)^m\| < \alpha) \wedge z_{2,2} \geq \alpha] .$$
The predicates \( \varphi^2 \) and \( \psi^2 \), the representation functions \( \tau_i, \tau_p, \tau_o \) and the first-level program \( P^1 \) constitute a documentation of the program \( P^2 \) and of the choices made for its construction, i.e.:
(a) the input/output behaviour of \( P^2 \) is expressed, by means of \( \tau \), on the basis of that of \( P^1 \); e.g., the input vector \( x_2 \) corresponds to the matrix \( x_1 \) in the way specified by \( \pi(x_2) = x_1 \), and the output vector \( z_{2,1} \) corresponds to the matrix \( z_1 = x_1^n \) in the way specified by \( \pi(z_{2,1}) = \pi(x_2)^n \);
(b) \( \tau_i, \tau_o \) indicate how to supply the input data and how to read the output data so as to obtain the behaviour required of \( P^1 \).
In this example the interface conditions did not appear, since the expansions were correct wrt the predicates \( \varphi_j^2, \psi_j^2 \). If, for instance, the expansion of block 2 were called (as a subroutine) from several points of a more complex program with different matrix dimensions, then instead of constructing a different pair of predicates \( \varphi_2^2, \psi_2^2 \) for each call, it would be necessary to assign to the subroutine a unique pair of correctness predicates containing the matrix dimension as a parameter. In this and other practical cases, therefore, verification of the interface conditions becomes necessary.
Appendix
With the following remarks we want to make more precise what was meant in the previous sections by the 'constructive character' of the described method:
(a) \( P^1 \) is a program on a structure \( \mathcal{S}_1 = \langle D_1, F_1, \mathcal{P}_1 \rangle \); correctness predicates and other assertions about the program \( P^1 \) must contain terms referring to the structure \( \mathcal{S}_1 \). Formally, they must be expressed in a theory \( \mathcal{C}_1 \) of which \( \mathcal{S}_1 \) is a model;
(b) in the same way, \( P^2 \) is a program on a structure \( \mathcal{S}_2 = \langle D_2, F_2, \mathcal{P}_2 \rangle \), thus containing terms referring to \( \mathcal{S}_2 \). The properties of the representation function \( \tau: D_2 \rightarrow D_1 \) and the predicates \( \varphi^2, \psi^2 \) containing \( \tau \) must be given in terms that appear both in \( \mathcal{S}_1 \) and in \( \mathcal{S}_2 \). Formally, \( \varphi^2, \psi^2 \) must be expressed in a theory \( \mathcal{C}_{12} \) of which a suitable structure \( \mathcal{S}_{12} \) (the union of \( \mathcal{S}_1 \) and \( \mathcal{S}_2 \), containing \( \tau \)) is a model;
(c) if a pair of predicates \( \bar\varphi^2, \bar\psi^2 \) containing only terms of \( \mathcal{S}_2 \) and equivalent to \( \varphi^2, \psi^2 \) in \( \mathcal{C}_{12} \) can be found, then this constitutes a guide to the construction of the expansions \( P_j^2 \) (as in the example). Theoretically, this is related to the program synthesis problem, stated as follows (Constable):
Let \( \varphi \) and \( \psi \) be two predicates expressed in a theory \( \mathcal{C} \); is it possible, by proving proposition
\[ \mathcal{C} \vdash \forall x (\varphi(x) \Rightarrow \exists z (\psi(x, z))) \]
(\( \vdash \) means 'provable in \( \mathcal{C} \)'), to build a partial recursive function \( f \) such that:
1. \( D(f) = \{ \xi \mid \mathcal{C} \models \varphi(\xi) \} \) (\( \models \) means 'valid in \( \mathcal{C} \)');
2. if \( \xi \in D(f) \) and \( \eta = f(\xi) \), then \( \mathcal{C} \vdash \psi(\xi, \eta) \)?
Obviously, the answer depends on the theory \( \mathcal{C} \); it is positive, for instance, for Kleene's intuitionistic theory of natural numbers. In our case, once the equivalence between the predicates \( \varphi^2, \psi^2 \) and the predicates \( \bar\varphi^2, \bar\psi^2 \) is verified, it is also verified that:
3. \( \mathcal{C}_{12} \vdash \forall x_2\, (\varphi^2(x_2) \Rightarrow \exists z_2\, \psi^2(x_2, z_2)) \). In some cases the proof of 3. can be reduced to a proof of:
4. \( \mathcal{C}_2 \vdash \forall x_2\, (\bar\varphi^2(x_2) \Rightarrow \exists z_2\, \bar\psi^2(x_2, z_2)) \); if in \( \mathcal{C}_2 \) the synthesis is possible, the related techniques can be applied. In any case, the construction of \( \bar\varphi^2, \bar\psi^2 \) can be a useful guide both for the verification of formula 4. and for the construction of the expansions \( P_j^2 \).
References
The Session Initiation Protocol (SIP) Refer Method
Status of this Memo
This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2003). All Rights Reserved.
Abstract
This document defines the REFER method. This Session Initiation Protocol (SIP) extension requests that the recipient REFER to a resource provided in the request. It provides a mechanism allowing the party sending the REFER to be notified of the outcome of the referenced request. This can be used to enable many applications, including call transfer.
In addition to the REFER method, this document defines the refer event package and the Refer-To request header.
Table of Contents
1. Overview
2. The REFER Method
   2.1 The Refer-To Header Field
   2.2 Header Field Support for the REFER Method
   2.3 Message Body Inclusion
   2.4 Behavior of SIP User Agents
       2.4.1 Forming a REFER request
       2.4.2 Processing a REFER request
       2.4.3 Accessing the Referred-to Resource
       2.4.4 Using SIP Events to Report the Results of the Reference
       2.4.5 The Body of the NOTIFY
       2.4.6 Multiple REFER Requests in a Dialog
       2.4.7 Using the Subscription-State Header Field with Event Refer
Sparks Standards Track
1. Overview

This document defines the REFER method. This SIP [1] extension requests that the recipient REFER to a resource provided in the request.
This can be used to enable many applications, including Call Transfer. For instance, if Alice is in a call with Bob, and decides Bob needs to talk to Carol, Alice can instruct her SIP user agent (UA) to send a SIP REFER request to Bob’s UA providing Carol’s SIP Contact information. Assuming Bob has given it permission, Bob’s UA will attempt to call Carol using that contact. Bob’s UA will then report whether it succeeded in reaching the contact to Alice’s UA.
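The call-transfer scenario above starts with Alice's UA sending Bob a REFER carrying Carol's contact. The sketch below shows the shape of such a request as on-the-wire text; all names, tags and identifiers are illustrative, and transport headers such as Via and Max-Forwards are omitted for brevity.

```python
def make_refer(request_uri, to, from_, call_id, cseq, contact, refer_to):
    """Assemble a skeletal REFER request as on-the-wire text."""
    lines = [
        f"REFER {request_uri} SIP/2.0",
        f"To: {to}",
        f"From: {from_}",
        f"Call-ID: {call_id}",
        f"CSeq: {cseq} REFER",
        f"Contact: {contact}",       # mandatory: REFER creates a dialog
        f"Refer-To: {refer_to}",     # mandatory: exactly one value
        "Content-Length: 0",
        "",
        "",
    ]
    return "\r\n".join(lines)

msg = make_refer("sip:bob@biloxi.example.net",
                 "<sip:bob@biloxi.example.net>",
                 "<sip:alice@atlanta.example.com>;tag=193402342",
                 "898234234@alicepc.atlanta.example.com",
                 93809823,
                 "<sip:alice@alicepc.atlanta.example.com>",
                 "sip:carol@cleveland.example.org")
```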
2. The REFER Method
REFER is a SIP method as defined by RFC 3261 [1]. The REFER method indicates that the recipient (identified by the Request-URI) should contact a third party using the contact information provided in the request.
Unless stated otherwise, the protocol for emitting and responding to a REFER request is identical to that for a BYE request in [1]. The behavior of SIP entities not implementing the REFER (or any other unknown) method is explicitly defined in [1].
A REFER request implicitly establishes a subscription to the refer event. Event subscriptions are defined in [2].
A REFER request MAY be placed outside the scope of a dialog created with an INVITE. REFER creates a dialog, and MAY be Record-Routed, hence MUST contain a single Contact header field value. REFERs occurring inside an existing dialog MUST follow the Route/Record-Route logic of that dialog.
2.1 The Refer-To Header Field
Refer-To is a request header field (request-header) as defined by [1]. It only appears in a REFER request. It provides a URL to reference.
Refer-To = ("Refer-To" / "r") HCOLON ( name-addr / addr-spec ) *(SEMI generic-param)
The following should be interpreted as if it appeared in Table 3 of RFC 3261.
<table>
<thead>
<tr>
<th>Header field</th>
<th>where</th>
<th>proxy</th>
<th>ACK</th>
<th>BYE</th>
<th>CAN</th>
<th>INV</th>
<th>OPT</th>
<th>REG</th>
</tr>
</thead>
<tbody>
<tr>
<td>Refer-To</td>
<td>R</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
The Refer-To header field MAY be encrypted as part of end-to-end encryption.
The Contact header field is an important part of the Route/Record-Route mechanism and is not available to be used to indicate the target of the reference.
Examples
Refer-To: sip:alice@atlanta.example.com
Refer-To: <sip:bob@biloxi.example.net?Accept-Contact=sip:bobsdesk.biloxi.example.net&Call-ID%3D55432%40alicepc.atlanta.example.com>
Refer-To: <sip:dave@denver.example.org?Replaces=12345%40192.168.118.3%3Bto-tag%3D12345%3Bfrom-tag%3D5FFE-3994>
Refer-To: <sip:carol@cleveland.example.org;method=SUBSCRIBE>
Refer-To: http://www.ietf.org
Long header field values are line-wrapped here for clarity only.
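The grammar above admits either a name-addr (URI in angle brackets, possibly preceded by a display name) or a bare addr-spec, followed by optional parameters. A simplified extractor sufficient for the examples above (not a full RFC 3261 parser; e.g., quoted display names containing '<' are not handled):

```python
def parse_refer_to(value):
    """Split a Refer-To value into (uri, params)."""
    value = value.strip()
    if "<" in value:
        # name-addr: everything inside <...> is the URI
        lt, gt = value.index("<"), value.index(">")
        uri, rest = value[lt + 1:gt], value[gt + 1:]
    else:
        # addr-spec: header params start at the first semicolon
        uri, sep, tail = value.partition(";")
        rest = sep + tail
    params = [p.strip() for p in rest.split(";") if p.strip()]
    return uri, params
```

Note how the two forms differ for the SUBSCRIBE example: inside angle brackets, `;method=SUBSCRIBE` is part of the URI, while in the bare form it would parse as a header parameter.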
2.2 Header Field Support for the REFER Method
This table adds a column to tables 2 and 3 in [1], describing header field presence in a REFER method. See [1] for a key for the symbols used. A row for the Refer-To request-header should be inferred, mandatory for REFER. Refer-To is not applicable for any other methods. The proxy column in [1] applies to the REFER method unmodified.
<table>
<thead>
<tr>
<th>Header</th>
<th>Where</th>
<th>REFER</th>
</tr>
</thead>
<tbody>
<tr>
<td>Accept</td>
<td>R</td>
<td>o</td>
</tr>
<tr>
<td>Accept</td>
<td>2xx</td>
<td>-</td>
</tr>
<tr>
<td>Accept</td>
<td>415</td>
<td>c</td>
</tr>
<tr>
<td>Accept-Encoding</td>
<td>R</td>
<td>o</td>
</tr>
<tr>
<td>Accept-Encoding</td>
<td>2xx</td>
<td>-</td>
</tr>
<tr>
<td>Accept-Encoding</td>
<td>415</td>
<td>c</td>
</tr>
<tr>
<td>Accept-Language</td>
<td>R</td>
<td>o</td>
</tr>
<tr>
<td>Accept-Language</td>
<td>2xx</td>
<td>-</td>
</tr>
<tr>
<td>Accept-Language</td>
<td>415</td>
<td>c</td>
</tr>
<tr>
<td>Alert-Info</td>
<td></td>
<td>-</td>
</tr>
<tr>
<td>Allow</td>
<td>Rr</td>
<td>o</td>
</tr>
<tr>
<td>Allow</td>
<td>405</td>
<td>m</td>
</tr>
<tr>
<td>Authentication-Info</td>
<td>2xx</td>
<td>o</td>
</tr>
<tr>
<td>Authorization</td>
<td>R</td>
<td>o</td>
</tr>
<tr>
<td>Call-ID</td>
<td>c</td>
<td>m</td>
</tr>
<tr>
<td>Call-Info</td>
<td></td>
<td>-</td>
</tr>
<tr>
<td>Contact</td>
<td>R</td>
<td>m</td>
</tr>
<tr>
<td>Contact</td>
<td>1xx</td>
<td>-</td>
</tr>
<tr>
<td>Contact</td>
<td>2xx</td>
<td>m</td>
</tr>
<tr>
<td>Contact</td>
<td>3-6xx</td>
<td>o</td>
</tr>
<tr>
<td>Content-Disposition</td>
<td></td>
<td>o</td>
</tr>
<tr>
<td>Content-Encoding</td>
<td></td>
<td>o</td>
</tr>
</tbody>
</table>
Table 1: Header Field Support
2.3 Message Body Inclusion
A REFER method MAY contain a body. This specification assigns no meaning to such a body. A receiving agent may choose to process the body according to its Content-Type.
2.4 Behavior of SIP User Agents
2.4.1 Forming a REFER request
REFER is a SIP request and is constructed as defined in [1]. A REFER request MUST contain exactly one Refer-To header field value.
2.4.2 Processing a REFER request
A UA accepting a well-formed REFER request SHOULD request approval from the user to proceed (this request could be satisfied with an interactive query or through accessing configured policy). If approval is granted, the UA MUST contact the resource identified by the URI in the Refer-To header field value as discussed in Section 2.4.3.
If the approval sought above for a well-formed REFER request is immediately denied, the UA MAY decline the request.
An agent responding to a REFER method MUST return a 400 (Bad Request) if the request contained zero or more than one Refer-To header field values.
An agent (including proxies generating local responses) MAY return a 100 (Trying) or any appropriate 4xx-6xx class response as prescribed by [1].
Care should be taken when implementing the logic that determines whether or not to accept the REFER request. A UA not capable of accessing non-SIP URIs SHOULD NOT accept REFER requests to them.
If no final response has been generated according to the rules above, the UA MUST return a 202 Accepted response before the REFER transaction expires.
If a REFER request is accepted (that is, a 2xx class response is returned), the recipient MUST create a subscription and send notifications of the status of the refer as described in Section 2.4.4.
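The decision rules above can be condensed into a small dispatcher. This is a sketch: the function names, and the choice of 403 for URI types the UA cannot access, are our own illustration rather than values mandated by the text.

```python
def refer_response_code(refer_to_values, approval, can_access_uri):
    """Pick a response code for an incoming REFER request.

    approval: True (granted), False (immediately denied), or
    None (still pending; accept now, report later via NOTIFY).
    """
    if len(refer_to_values) != 1:
        return 400                # exactly one Refer-To value is required
    if not can_access_uri(refer_to_values[0]):
        return 403                # our choice for an inaccessible URI type
    if approval is False:
        return 603                # the UA MAY decline the request
    return 202                    # Accepted; implicit subscription created
```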
2.4.3 Accessing the Referred-to Resource
The resource identified by the Refer-To URI is contacted using the normal mechanisms for that URI type. For example, if the URI is a SIP URI indicating INVITE (using a method=INVITE URI parameter for example), the UA would issue a new INVITE using all of the normal rules for sending an INVITE defined in [1].
2.4.4 Using SIP Events to Report the Results of the Reference
The NOTIFY mechanism defined in [2] MUST be used to inform the agent sending the REFER of the status of the reference. The dialog identifiers (To, From, and Call-ID) of each NOTIFY must match those of the REFER as they would if the REFER had been a SUBSCRIBE request.
Each NOTIFY MUST contain an Event header field with a value of refer and possibly an id parameter (see Section 2.4.6).
Each NOTIFY MUST contain a body of type "message/sipfrag" [3].
The creation of a subscription as defined by [2] always results in an immediate NOTIFY. Analogous to the case for SUBSCRIBE described in that document, the agent that issued the REFER MUST be prepared to receive a NOTIFY before the REFER transaction completes.
The implicit subscription created by a REFER is the same as a subscription created with a SUBSCRIBE request. The agent issuing the REFER can terminate this subscription prematurely by unsubscribing using the mechanisms described in [2]. Terminating a subscription, either by explicitly unsubscribing or rejecting NOTIFY, is not an indication that the referenced request should be withdrawn or abandoned. In particular, an agent acting on a REFER request SHOULD NOT issue a CANCEL to any referenced SIP requests because the agent sending the REFER terminated its subscription to the refer event before the referenced request completes.
The agent issuing the REFER may extend its subscription using the subscription refresh mechanisms described in [2].
REFER is the only mechanism that can create a subscription to event refer. If a SUBSCRIBE request for event refer is received for a subscription that does not already exist, it MUST be rejected with a 403.
Notice that unlike SUBSCRIBE, the REFER transaction does not contain a duration for the subscription in either the request or the response. The lifetime of the state being subscribed to is determined by the progress of the referenced request. The duration of the subscription is chosen by the agent accepting the REFER and is communicated to the agent sending the REFER in the subscription’s initial NOTIFY (using the Subscription-State expires header parameter). Note that agents accepting REFER and not wishing to hold subscription state can terminate the subscription with this initial NOTIFY.
2.4.5 The Body of the NOTIFY
Each NOTIFY MUST contain a body of type "message/sipfrag" [3]. The body of a NOTIFY MUST begin with a SIP Response Status-Line as defined in [1]. The response class in this status line indicates the status of the referred action. The body MAY contain other SIP header fields to provide information about the outcome of the referenced action. This body provides a complete statement of the status of the referred action. The refer event package does not support state deltas.
If a NOTIFY is generated when the subscription state is pending, its body should consist only of a status line containing a response code of 100.
A minimal, but complete, implementation can respond with a single NOTIFY containing either the body:
SIP/2.0 100 Trying
if the subscription is pending, the body:
SIP/2.0 200 OK
if the reference was successful, the body:
SIP/2.0 503 Service Unavailable
if the reference failed, or the body:
SIP/2.0 603 Declined
if the REFER request was accepted before approval to follow the reference could be obtained and that approval was subsequently denied (see Section 2.4.7).
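The four minimal bodies enumerated above fit in a lookup table (each message/sipfrag body here is just a SIP status line; the outcome names are ours):

```python
SIPFRAG_BODIES = {
    "pending": "SIP/2.0 100 Trying",
    "success": "SIP/2.0 200 OK",
    "failure": "SIP/2.0 503 Service Unavailable",
    "denied":  "SIP/2.0 603 Declined",
}

def notify_body(outcome):
    """Body for a minimal NOTIFY reporting the state of the reference."""
    return SIPFRAG_BODIES[outcome] + "\r\n"
```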
An implementation MAY include more of a SIP message in that body to convey more information. Warning header field values received in responses to the referred action are good candidates. In fact, if the reference was to a SIP URI, the entire response to the referenced action could be returned (perhaps to assist with debugging). However, doing so could have grave security repercussions (see Section 5). Implementers must carefully consider what they choose to include.
Note that if the reference was to a non-SIP URI, status in any NOTIFYs to the referrer must still be in the form of SIP Response Status-Lines. The minimal implementation discussed above is
sufficient to provide a basic indication of success or failure. For example, if a client receives a REFER to a HTTP URL, and is successful in accessing the resource, its NOTIFY to the referrer can contain the message/sipfrag body of "SIP/2.0 200 OK". If the notifier wishes to return additional non-SIP protocol specific information about the status of the request, it may place it in the body of the sipfrag message.
2.4.6 Multiple REFER Requests in a Dialog
A REFER creates an implicit subscription sharing the dialog identifiers in the REFER request. If more than one REFER is issued in the same dialog (a second attempt at transferring a call for example), the dialog identifiers do not provide enough information to associate the resulting NOTIFYs with the proper REFER.
Thus, for the second and subsequent REFER requests a UA receives in a given dialog, it MUST include an id parameter[2] in the Event header field of each NOTIFY containing the sequence number (the number from the CSeq header field value) of the REFER this NOTIFY is associated with. This id parameter MAY be included in NOTIFYs to the first REFER a UA receives in a given dialog. A SUBSCRIBE sent to refresh or terminate this subscription MUST contain this id parameter.
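The id bookkeeping amounts to remembering the CSeq number of each REFER received in the dialog; a sketch (names illustrative):

```python
def event_header(refer_cseq, first_refer_in_dialog):
    """Event header for NOTIFYs tied to a given REFER in a dialog.

    The id parameter MAY be omitted for the first REFER in the dialog,
    and MUST be present for the second and subsequent ones."""
    if first_refer_in_dialog:
        return "Event: refer"
    return f"Event: refer;id={refer_cseq}"
```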
2.4.7 Using the Subscription-State Header Field with Event Refer
Each NOTIFY MUST contain a Subscription-State header field as defined in [2]. The final NOTIFY sent in response to a REFER MUST indicate the subscription has been "terminated" with a reason of "noresource". (The resource being subscribed to is the state of the referenced request.)
If a NOTIFY indicates a reason that indicates a re-subscribe is appropriate according to [2], the agent sending the REFER is NOT obligated to re-subscribe.
In the case where a REFER was accepted with a 202, but approval to follow the reference was subsequently denied, the reason and retry-after elements of the Subscription-State header field can be used to indicate if and when the REFER can be re-attempted (as described for SUBSCRIBE in [2]).
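A small Python sketch of reading the Subscription-State header field (a hypothetical helper; the parameter names follow [2]):

```python
def parse_subscription_state(value: str):
    """Split a Subscription-State header field value into the state
    token and its parameters, e.g.
    "terminated;reason=noresource;retry-after=120"."""
    parts = [p.strip() for p in value.split(";")]
    state, params = parts[0], {}
    for p in parts[1:]:
        name, _, val = p.partition("=")
        params[name] = val
    return state, params

# The final NOTIFY for a REFER carries exactly this combination:
state, params = parse_subscription_state("terminated;reason=noresource")
```

A subscriber seeing state "terminated" with reason "noresource" knows the reference has run to completion; a retry-after parameter, when present, indicates when a new REFER may be attempted.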
2.5 Behavior of SIP Registrars/Redirect Servers
A registrar that is unaware of the definition of the REFER method will return a 501 response as defined in [1]. A registrar aware of the definition of REFER SHOULD return a 405 response.
This specification places no requirements on redirect server behavior beyond those specified in [1]. Thus, it is possible for REFER requests to be redirected.
2.6 Behavior of SIP Proxies
SIP proxies do not require modification to support the REFER method. Specifically, as required by [1], a proxy should process a REFER request the same way it processes an OPTIONS request.
3. Package Details: Event refer
This document defines an event package as defined in [2].
3.1 Event Package Name
The name of this event package is "refer".
3.2 Event Package Parameters
This package uses the "id" parameter defined in [2]. Its use in this package is described in Section 2.4.6.
3.3 SUBSCRIBE Bodies
SUBSCRIBE bodies have no special meaning for this event package.
3.4 Subscription Duration
The duration of an implicit subscription created by a REFER request is initially determined by the agent accepting the REFER and communicated to the subscribing agent in the Subscription-State header field’s expires parameter in the first NOTIFY sent in the subscription. Reasonable choices for this initial duration depend on the type of request indicated in the Refer-To URI. The duration SHOULD be chosen to be longer than the time the referenced request will be given to complete. For example, if the Refer-To URI is a SIP INVITE URI, the subscription interval should be longer than the Expires value in the INVITE. Additional time MAY be included to account for time needed to authorize the subscription. The subscribing agent MAY extend the subscription by refreshing it, or terminate it by unsubscribing. As described in Section 2.4.7, the agent accepting the REFER will terminate the subscription when it reports the final result of the reference, indicating that termination in the Subscription-State header field.
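The guidance above might be coded as follows (a sketch; the default values and the helper name are illustrative, not mandated by this specification):

```python
def initial_refer_expires(refer_to_uri: str,
                          referenced_expires: int = 0,
                          auth_slack: int = 30) -> int:
    """Choose an initial implicit-subscription duration, in seconds,
    longer than the time the referenced request is given to complete,
    plus slack for authorizing the subscription."""
    if refer_to_uri.startswith(("sip:", "sips:")):
        # e.g. an INVITE: outlive its Expires value.
        return max(referenced_expires, 60) + auth_slack
    # Non-SIP reference: a flat illustrative default.
    return 60 + auth_slack
```

The accepting agent would place the chosen value in the Subscription-State header field of the first NOTIFY.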
3.5 NOTIFY Bodies
The bodies of NOTIFY requests for event refer are discussed in Section 2.4.5.
3.6 Notifier processing of SUBSCRIBE requests
Notifier processing of SUBSCRIBE requests is discussed in Section 2.4.4.
3.7 Notifier Generation of NOTIFY Requests
Notifier generation of NOTIFY requests is discussed in Section 2.4.4.
3.8 Subscriber Processing of NOTIFY Requests
Subscriber processing of NOTIFY requests is discussed in Section 2.4.4.
3.9 Handling of Forked Requests
A REFER sent within the scope of an existing dialog will not fork. A REFER sent outside the context of a dialog MAY fork, and if it is accepted by multiple agents, MAY create multiple subscriptions. These subscriptions are created and managed as per "Handling of Forked Requests" in [2] as if the REFER had been a SUBSCRIBE. The agent sending the REFER manages the state associated with each subscription separately. It does NOT merge the state from the separate subscriptions. The state is the status of the referenced request at each of the accepting agents.
3.10 Rate of Notifications
An event refer NOTIFY might be generated each time new knowledge of the status of a referenced request becomes available. For instance, if the REFER was to a SIP INVITE, NOTIFYs might be generated with each provisional response and the final response to the INVITE. Alternatively, the subscription might only result in two NOTIFY requests, the immediate NOTIFY and the NOTIFY carrying the final result of the reference. NOTIFYs to event refer SHOULD NOT be sent more frequently than once per second.
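One way to honor the once-per-second guidance is to coalesce intermediate statuses so that only the newest pending one is sent when the interval elapses. A Python sketch (the class name and structure are illustrative, not part of this specification):

```python
import time

class NotifyThrottle:
    """Hold NOTIFY status lines so they are emitted at most once per
    min_interval seconds; intermediate statuses are coalesced and only
    the newest pending one survives."""

    def __init__(self, min_interval: float = 1.0, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock
        self.last_sent = None   # time of the last emitted NOTIFY
        self.pending = None     # newest status line held back

    def offer(self, status_line: str):
        """Return the status line to send now, or None to hold it."""
        now = self.clock()
        if self.last_sent is None or now - self.last_sent >= self.min_interval:
            self.last_sent = now
            self.pending = None
            return status_line
        self.pending = status_line
        return None
```

A real notifier would also arm a timer to flush self.pending once the interval expires, so the final result of the reference is never silently dropped.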
3.11 State Agents
Separate state agents are not defined for event refer.
4. Examples
4.1 Prototypical REFER callflow
Here are examples of what the six messages between Agent A and Agent B might look like if the reference to (whatever) that Agent B makes is successful. The details of this flow indicate this particular REFER occurs outside a session (there is no To tag in the REFER request). If the REFER occurs inside a session, there would be a non-empty To tag in the request.
Message One (F1)
REFER sip:b@atlanta.example.com SIP/2.0
Via: SIP/2.0/UDP agenta.atlanta.example.com;branch=z9hG4bK2293940223
To: <sip:b@atlanta.example.com>
From: <sip:a@atlanta.example.com>;tag=193402342
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 93809823 REFER
Max-Forwards: 70
Refer-To: (whatever URI)
Contact: sip:a@atlanta.example.com
Content-Length: 0
Message Two (F2)
SIP/2.0 202 Accepted
Via: SIP/2.0/UDP agenta.atlanta.example.com;branch=z9hG4bK2293940223
To: <sip:b@atlanta.example.com>;tag=4992881234
From: <sip:a@atlanta.example.com>;tag=193402342
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 93809823 REFER
Contact: sip:b@atlanta.example.com
Content-Length: 0
Message Three (F3)
NOTIFY sip:a@atlanta.example.com SIP/2.0
Via: SIP/2.0/UDP agentb.atlanta.example.com;branch=z9hG4bK9922e9f992-25
To: <sip:a@atlanta.example.com>;tag=193402342
From: <sip:b@atlanta.example.com>;tag=4992881234
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 1993402 NOTIFY
Max-Forwards: 70
Event: refer
Subscription-State: active;expires=(depends on Refer-To URI)
Contact: sip:b@atlanta.example.com
Content-Type: message/sipfrag;version=2.0
Content-Length: 20
SIP/2.0 100 Trying
Message Four (F4)
SIP/2.0 200 OK
Via: SIP/2.0/UDP agentb.atlanta.example.com;branch=z9hG4bK9922e9f992-25
To: <sip:a@atlanta.example.com>;tag=193402342
From: <sip:b@atlanta.example.com>;tag=4992881234
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 1993402 NOTIFY
Contact: sip:a@atlanta.example.com
Content-Length: 0
Message Five (F5)
NOTIFY sip:a@atlanta.example.com SIP/2.0
Via: SIP/2.0/UDP agentb.atlanta.example.com;branch=z9hG4bK9323394234
To: <sip:a@atlanta.example.com>;tag=193402342
From: <sip:b@atlanta.example.com>;tag=4992881234
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 1993403 NOTIFY
Max-Forwards: 70
Event: refer
Subscription-State: terminated;reason=noresource
Contact: sip:b@atlanta.example.com
Content-Type: message/sipfrag;version=2.0
Content-Length: 16
SIP/2.0 200 OK
Message Six (F6)
SIP/2.0 200 OK
Via: SIP/2.0/UDP agentb.atlanta.example.com;branch=z9hG4bK9323394234
To: <sip:a@atlanta.example.com>;tag=193402342
From: <sip:b@atlanta.example.com>;tag=4992881234
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 1993403 NOTIFY
Contact: sip:a@atlanta.example.com
Content-Length: 0
4.2 Multiple REFERs in a dialog
Message One above brings an implicit subscription dialog into existence. Suppose Agent A issued a second REFER inside that dialog:
```
   Agent A                 Agent B
      |                       |
      |  F7 REFER             |
      |---------------------->|
      |  F8 202 Accepted      |
      |<----------------------|
      |  F9 NOTIFY            |
      |<----------------------|
      |  F10 200 OK           |
      |---------------------->|
      |                       |
      | (something different) |
      |                       |
      |  F11 NOTIFY           |
      |<----------------------|
      |  F12 200 OK           |
      |---------------------->|
      |                       |
```
Message Seven (F7)
REFER sip:b@atlanta.example.com SIP/2.0
Via: SIP/2.0/UDP agenta.atlanta.example.com;branch=z9hG4bK9390399231
To: <sip:b@atlanta.example.com>;tag=4992881234
From: <sip:a@atlanta.example.com>;tag=193402342
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 93809824 REFER
Max-Forwards: 70
Refer-To: (some different URI)
Contact: sip:a@atlanta.example.com
Content-Length: 0
Message Eight (F8)
SIP/2.0 202 Accepted
Via: SIP/2.0/UDP agenta.atlanta.example.com;branch=z9hG4bK9390399231
To: <sip:b@atlanta.example.com>;tag=4992881234
From: <sip:a@atlanta.example.com>;tag=193402342
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 93809824 REFER
Contact: sip:b@atlanta.example.com
Content-Length: 0
Message Nine (F9)
NOTIFY sip:a@atlanta.example.com SIP/2.0
Via: SIP/2.0/UDP agentb.atlanta.example.com;branch=z9hG4bK9320394238995
To: <sip:a@atlanta.example.com>;tag=193402342
From: <sip:b@atlanta.example.com>;tag=4992881234
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 1993404 NOTIFY
Max-Forwards: 70
Event: refer;id=93809824
Subscription-State: active;expires=(depends on Refer-To URI)
Contact: sip:b@atlanta.example.com
Content-Type: message/sipfrag;version=2.0
Content-Length: 20
SIP/2.0 100 Trying
Message Ten (F10)
SIP/2.0 200 OK
Via: SIP/2.0/UDP agentb.atlanta.example.com;branch=z9hG4bK9320394238995
To: <sip:a@atlanta.example.com>;tag=193402342
From: <sip:b@atlanta.example.com>;tag=4992881234
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 1993404 NOTIFY
Contact: sip:a@atlanta.example.com
Content-Length: 0
Message Eleven (F11)
NOTIFY sip:a@atlanta.example.com SIP/2.0
Via: SIP/2.0/UDP agentb.atlanta.example.com;branch=z9hG4bK2994a93eb-fe
To: <sip:a@atlanta.example.com>;tag=193402342
From: <sip:b@atlanta.example.com>;tag=4992881234
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 1993405 NOTIFY
Max-Forwards: 70
Event: refer;id=93809824
Subscription-State: terminated;reason=noresource
Contact: sip:b@atlanta.example.com
Content-Type: message/sipfrag;version=2.0
Content-Length: 16
SIP/2.0 200 OK
Message Twelve (F12)
SIP/2.0 200 OK
Via: SIP/2.0/UDP agentb.atlanta.example.com;branch=z9hG4bK2994a93eb-fe
To: <sip:a@atlanta.example.com>;tag=193402342
From: <sip:b@atlanta.example.com>;tag=4992881234
Call-ID: 898234234@agenta.atlanta.example.com
CSeq: 1993405 NOTIFY
Contact: sip:a@atlanta.example.com
Content-Length: 0
5. Security Considerations
The security considerations described in Section 26 of [1] apply to the REFER transaction. In particular, the implementation requirements and considerations in Section 26.3 address securing a generic SIP transaction. Special consideration is warranted for the authorization policies applied to REFER requests and for the use of message/sipfrag to convey the results of the referenced request.
5.1 Constructing a Refer-To URI
This mechanism relies on providing contact information for the referred-to resource to the party being referred. Care should be taken to provide a suitably restricted URI if the referred-to resource should be protected.
5.2 Authorization Considerations for REFER
As described in Section 2.4.2, an implementation can receive a REFER request with a Refer-To URI containing an arbitrary scheme. For instance, a user could be referred to an online service such as a MUD using a telnet URI. Customer service could refer a customer to an order-tracking web page using an HTTP URI. Section 2.4.2 allows a user agent to reject a REFER request when it cannot process the referenced scheme. It also requires the user agent to obtain authorization from its user before attempting to use the URI. Generally, this could be achieved by prompting the user with the full URI and a question such as "Do you wish to access this resource (Y/N)?". Of course, URIs can be arbitrarily long and are occasionally constructed with malicious intent, so care should be taken to avoid surprises even in the display of the URI itself (such as partial display or crashing). Further, care should be taken to expose as much information about the reference as possible to the user to mitigate the risk of being misled into a dangerous decision. For instance, the Refer-To header may contain a display name along with the URI, but nothing ensures that any property implied by that display name is shared by the URI. For instance, the display name may contain "secure" or "president" while the URI indicates sip:agent59@telemarketing.example.com. Thus, prompting the user with the display name alone is insufficient.
In some cases, the user can provide authorization for some REFER requests ahead of time by providing policy to the user agent. This is appropriate, for instance, for call transfer as discussed in [4]. Here, a properly authenticated REFER request within an existing SIP dialog to a sip:, sips:, or tel: URI may be accepted through policy without interactively obtaining the user’s authorization. Similarly, it may be appropriate to accept a properly authenticated REFER to an HTTP URI if the referrer is on an explicit list of approved referrers. In the absence of such pre-provided authorization, the user must interactively provide authorization to reference the indicated resource.
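The pre-provided-authorization policy described above could take a shape like this (a sketch; the predicate and its parameters are assumptions for illustration, not an API defined here):

```python
def refer_preauthorized(refer_to_uri: str, in_dialog: bool,
                        authenticated: bool, referrer: str,
                        approved_referrers: set) -> bool:
    """Return True when policy allows acting on the REFER without an
    interactive prompt; anything else must be confirmed by the user."""
    scheme = refer_to_uri.split(":", 1)[0].lower()
    if authenticated and in_dialog and scheme in ("sip", "sips", "tel"):
        return True   # e.g. call transfer within an existing dialog [4]
    if authenticated and scheme == "http" and referrer in approved_referrers:
        return True   # explicit list of approved referrers
    return False      # fall back to interactive authorization
```

Note the conjunction with authentication in every branch: policy keyed on the URI scheme alone would recreate the blind-acceptance risk discussed next.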
To see the danger of a policy that blindly accepts and acts on an HTTP URI, for example, consider a web server configured to accept requests only from clients behind a small organization's firewall. As it sits in this soft-creamy-middle environment where the small organization trusts all its members and has little internal security, the web server is frequently behind on maintenance, leaving it vulnerable to attack through maliciously constructed URIs (resulting perhaps in running arbitrary code provided in the URI). If a SIP UA inside this firewall blindly accepted a reference to an arbitrary HTTP URI, an attacker outside the firewall could compromise the web server. On the other hand, if the UA's user has to take positive action (such as responding to a prompt) before acting on this URI, the risk is reduced to the same level as the user clicking on the URI in a web browser or email message.
The conclusion in the above paragraph generalizes to URIs with an arbitrary scheme. An agent that takes automated action to access a URI with a given scheme risks being used to indirectly attack another host that is vulnerable to some security flaw related to that scheme. This risk and the potential for harm to that other host is heightened when the host and agent reside behind a common policy-enforcement point such as a firewall. Furthermore, this agent increases its exposure to denial of service attacks through resource exhaustion, especially if each automated action involves opening a new connection.
User agents should take care when handing an arbitrary URI to a third-party service such as that provided by some modern operating systems, particularly if the user agent is not aware of the scheme and the possible ramifications of using the protocols it indicates. The opportunity for violating the principle of least surprise is very high.
5.3 Considerations for the use of message/sipfrag
Using message/sipfrag bodies to return the progress and results of a REFER request is extremely powerful. Careless use of that capability can compromise confidentiality and privacy. Here are a couple of simple, somewhat contrived, examples to demonstrate the potential for harm.
5.3.1 Circumventing Privacy
Suppose Alice has a user agent that accepts REFER requests to SIP INVITE URIs, and NOTIFYs the referrer of the progress of the INVITE by copying each response to the INVITE into the body of a NOTIFY.
Suppose further that Carol has a reason to avoid Mallory and has configured her system at her proxy to only accept calls from a certain set of people she trusts (including Alice), so that Mallory doesn’t learn when she’s around, or what user agent she’s actually using.
Mallory can send a REFER to Alice, with a Refer-To URI indicating Carol. If Alice can reach Carol, the 200 OK Carol sends gets returned to Mallory in a NOTIFY, letting him know not only that Carol is around, but also the IP address of the agent she’s using.
5.3.2 Circumventing Confidentiality
Suppose Alice, with the same user agent as above, is working at a company that is working on the greatest SIP device ever invented - the SIP FOO. The company has been working for months building the device and the marketing materials, carefully keeping the idea, even the name of the idea secret (since a FOO is one of those things that anybody could do if they’d just had the idea first). FOO is up and running, and anyone at the company can use it, but it’s not available outside the company firewall.
Mallory has heard rumor that Alice's company is onto something big, and has even managed to get his hands on a URI that he suspects might have something to do with it. He sends a REFER to Alice with the mysterious URI and, as Alice connects to the FOO, Mallory gets NOTIFYs with bodies containing
Server: FOO/v0.9.7
5.3.3 Limiting the Breach
For each of these cases, and in general, returning a carefully selected subset of the information available about the progress of the reference through the NOTIFYs mitigates risk. The minimal implementation described in Section 2.4.5 exposes the least information about what the agent operating on the REFER request has done, and is least likely to be a useful tool for malicious users.
5.3.4 Cut, Paste and Replay Considerations
The mechanism defined in this specification is not directly susceptible to abuse through copying the message/sipfrag bodies from NOTIFY requests and inserting them, in whole or in part, in future NOTIFY requests associated with the same or a different REFER. Under this specification the agent replying to the REFER request is in complete control of the content of the bodies of the NOTIFY it sends. There is no mechanism defined here requiring this agent to faithfully forward any information from the referenced party. Thus, saving a body for later replay gives the agent no more ability to affect the mechanism defined in this document at its peer than it has without that body. Similarly, capture of a message/sipfrag body by eavesdroppers will give them no more ability to affect this mechanism than they would have without it.
Future extensions may place additional constraints on the agent responding to REFER to allow using the message/sipfrag body part in a NOTIFY to make statements like "I contacted the party you referred me to, and here’s cryptographic proof". These statements might be used
to affect the behavior of the receiving UA. This kind of extension will need to define additional mechanism to protect itself from copy based attacks.
6. Historic Material
This method was initially motivated by the call-transfer application. Starting as TRANSFER, and later generalizing to REFER, this method improved on the BYE/Also concept of the expired draft-ietf-sip-cc-01 by disassociating transfers from the processing of BYE. These changes facilitate recovery of failed transfers and clarify state management in the participating entities.
Early versions of this work required the agent responding to REFER to wait until the referred action completed before sending a final response to the REFER. That final response reflected the success or failure of the referred action. This was infeasible due to the transaction timeout rules defined for non-INVITE requests in [1]. A REFER must always receive an immediate (within the lifetime of a non-INVITE transaction) final response.
7. IANA Considerations
This document defines a new SIP method name (REFER), a new SIP header field name with a compact form (Refer-To and r respectively), and an event package (refer).
The following has been added to the method sub-registry under http://www.iana.org/assignments/sip-parameters.
```
REFER [RFC3515]
```
The following information has also been added to the header sub-registry under http://www.iana.org/assignments/sip-parameters.
```
Header Name: Refer-To
Compact Form: r
Reference: RFC 3515
```
This specification registers an event package, based on the registration procedures defined in [2]. The following is the information required for such a registration:
```
Package Name: refer
Package or Package-Template: This is a package.
```
8. Acknowledgments
This document is a collaborative product of the SIP working group.
9. References
9.1 Normative References
9.2 Informative References
10. Intellectual Property Statement
The IETF takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on the IETF’s procedures with respect to rights in standards-track and standards-related documentation can be found in BCP-11. Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementors or users of this specification can be obtained from the IETF Secretariat.
The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights which may cover technology that may be required to practice this standard. Please address the information to the IETF Executive Director.
11. Author’s Address
Robert J. Sparks
dynamicsoft
5100 Tennyson Parkway
Suite 1200
Plano, TX 75024
EMail: rsparks@dynamicsoft.com
12. Full Copyright Statement
Copyright (C) The Internet Society (2003). All Rights Reserved.
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.
The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Acknowledgement
Funding for the RFC Editor function is currently provided by the Internet Society.
Thread Migration and its Applications in Distributed Shared Memory Systems*
Ayal Itzkovitz Assaf Schuster Lea Wolfovich
Computer Science Department
Technion - IIT
{ayali, assaf, wlea}@cs.technion.ac.il
Abstract
In this paper we describe the way thread migration can be carried out in distributed shared memory (DSM) systems. We discuss the advantages of multi-threading in DSM systems and the importance of preemptive dynamic thread migration. The proposed solution is implemented in MILLIPEDE: an environment for parallel programming over a network of (personal) computers. MILLIPEDE implements a transparent thread migration mechanism: a thread in a MILLIPEDE application can be suspended at almost any point during its lifetime and be resumed on another host. This mechanism can be used to better utilize system resources and improve performance by balancing the load and solving ping-pong situations of memory objects, and to provide user ownership of his workstation. We describe how some of these are implemented in the MILLIPEDE system. MILLIPEDE, including its thread migration module, is fully implemented in user mode (currently on Windows-NT) using the standard operating system APIs.
1 Introduction
Many attempts are made to integrate the resources and services of distributed computational environments into virtual parallel machines, or metacomputing environments. While being very cheap and available to everyone, such metacomputing environments will exhibit very high computational power, large virtually shared memory, and high bandwidth of I/O and communication. Applications using these environments will have to be dynamically adaptive to the varying network configurations, utilizing idle resources and instantly evicting those resources reclaimed by their native users.
In order to integrate the resources of a distributed environment some form of cooperation among the nodes (or, computers) is necessary [Casavant and Kuhl, 1988; Chase et al., 1989; Krueger and Livny, 1988; Kumar et al., 1987; Willebeek-LeMair and Reeves, 1993]. Dynamic load sharing is the form of load distribution that has a potential of being efficient in a distributed system [Eager and Lazowska, 1986; Kremien, 1993]. Load-sharing algorithms attempt to assure that there are no idle hosts when there are tasks waiting for execution on other hosts. This is achieved by dynamic initial placement and by migration after startup. Multithreading further helps to achieve better load distribution between the nodes in the system by splitting the application into smaller chunks of work.
However, distributing an application over the network has its drawbacks. Components of an application need to communicate and synchronize, imposing overhead that, due to the relatively inefficient communication, is typically very high [Kumar et al., 1993]. In fact, the optimal speedup in a distributed environment is commonly obtained by using fewer processors than the total number of those that are idle and available. The exact distribution of the machines that take part in the computation must be determined dynamically, according to both the system varying capabilities and the application varying needs. The solution to all these can be found by using multithreading and thread migration: Multithreading can hide the latency by overlapping communication and computation. Thread migration can significantly reduce the amount of communication in DSM systems, by migrating threads in order to improve locality of shared data accesses.
Although most of the power of metacomputing environments will typically come from personal machines, degradation of interactive response must be avoided. If the owner of a machine or a resource is not guaranteed to receive it at the moment he attempts to use it, he will not allow "invasion" of remote execution in the future [Douglis and Ousterhout, 1991]. To this end, once again thread migration is the answer, which may be used to provide user ownership in an efficient way.
Some of the machines in a metacomputing environment may be symmetric multiprocessors (SMPs), which are tightly-coupled "shared-all" multiprocessor machines. In an SMP system all the components such as processors, physical memory, buses, disks and controllers are shared. A single copy of an operating system controls all components, manages the shared memory, and balances the load among the processors by dynamically reassigning processors to threads. Here, using multiple threads makes it possible to utilize the processors in a transparent and efficient way.
SMP systems are becoming widely available and it is expected that this process will promote the development of parallel applications that use multithreading and shared memory.
From the application point of view, non-scalable parallel computing on SMP machines with shared memory, and scalable parallel computing on metacomputing environments with virtually shared memory, are at the same level of abstraction. Thus, given an efficient run-time support for metacomputing environments, the transition from parallel computing on SMPs to parallel computing on distributed environments is just a small, natural step.
As argued above, an efficient support for metacomputing environments must include migration of threads between machines. Unfortunately, implementing thread migration is not an easy task. In this paper we discuss the problems and complications of such implementations, with a special emphasis on the relation to an accompanying DSM mechanism. We describe some flawed solutions that appear in the literature and present a robust solution that is implemented in the MILLIPEDE virtual parallel machine.
We then proceed to describe the way thread migration is utilized in the MILLIPEDE system. MILLIPEDE is a thread-based system for the development and execution of parallel applications in distributed environments. It presents a strong application interface, including a flexible DSM mechanism along with a dynamic thread scheduling algorithm. The thread scheduling algorithm strives to reach the optimal speedup by dynamically solving the tradeoff between minimal load and minimal communication. It also tries to minimize communication by migrating both threads and pages between machines, until maximal data locality is achieved. To this end, MILLIPEDE implements a transparent thread migration mechanism that is used by the thread scheduler.
MILLIPEDE is currently implemented on the Windows-NT operating system, using its support for multithreading and SMP thread scheduling. A detailed description of the MILLIPEDE system can be found in [Itzkovitz et al., 1996a].
**Related Systems**
We now discuss the main differences between MILLIPEDE and several other systems that support thread migration.
- UPVM is a package that supports multithreading and transparent migration for PVM applications [Casas et al., 1994]. UPVM defines an abstraction having some of the characteristics of a thread and some of those of a process, called a user-level process (ULP). ULPs differ from threads in that they define a private data and heap space. ULPs communicate with each other via message passing. The ULP state that is transferred when a ULP migrates includes the context, the stack, the data, and the heap.
As in MILLIPEDE, mapping of a ULP to a set of virtual addresses is unique across all the processes of the application. The difference is that MILLIPEDE threads keep their non-local data in shared memory that need not be transferred explicitly at migration time. The memory usage of a thread triggers the migration of pages to its new location meaning that only the data that is actually used by the migrated thread is transferred on demand, thus decreasing the cost of migration in MILLIPEDE.
- Ariadne is a user-space threads system that runs on shared- and distributed-memory multiprocessors [Mascarenhas and Rego, 1996]. In contrast to Ariadne, MILLIPEDE uses operating-system supported threads (also called kernel threads in UNIX-like environments). The advantage of user-space threads is their relative portability, since they may be implemented on an operating system that does not support threads. In addition, context switching between user threads is faster than context switching between kernel threads. However, this may change in the future, since next generations of processors may support thread context switching in hardware, thus making switching of kernel threads less expensive than that of user threads.
There are two main disadvantages of user threads. First, a user thread that blocks on a page fault or a system call blocks its entire process. If kernel threads are used, a thread that blocks does not prevent other threads of the same process from running. Second, on an SMP kernel threads are scheduled by the operating system automatically on the available processors. With user threads, an application has to be modified explicitly in order to use multiple processors. In Ariadne, additional processes are created for this purpose, imposing high overhead.
Similar to Millipede, thread migration in Ariadne is supported at user level in homogeneous environments. However, the mechanism of migration in Ariadne differs from that of Millipede. We further discuss Ariadne's thread migration and the problems associated with it in Section 3.3.1.
- Amber [Chase et al., 1989] is an object-oriented DSM system that permits a single application to use a homogeneous network of computers. Each node may be a shared-memory multiprocessor. Amber supports data and thread migration; the location of objects is managed explicitly by an application. The mechanism of thread migration is essentially the same as in Millipede. The difference is that with Millipede a programmer does not have to deal with the data and threads location issues, since Millipede provides a location-independent interface and automatically improves locality of data accesses at run-time.
The rest of this paper is organized as follows. In Section 2 we discuss our motivation for using preemptive multithreaded DSM systems. Section 3 discusses some global aspects of thread migration, explains the various approaches introduced so far for its implementation, and proposes a new approach for implementing thread migration in user space which is applicable on most existing operating systems. Section 4 gives an overview of the Millipede system and discusses its implementation of thread migration. Section 5 describes the way Millipede utilizes thread migration in order to share the load and improve the locality of memory references. Section 6 presents some measurements taken with the Millipede system on a non-homogeneous environment, giving examples of the performance improvements that thread migration enables. Finally, Section 7 gives some concluding remarks.
## 2 Motivation and Discussion
In this section we discuss the advantages of the DSM model combined with multithreading. We also explain the benefits of dynamic load distribution schemes and thread migration in multithreaded DSM systems.
**Why DSM systems**
Distributed Shared Memory (DSM) is an implementation of a shared memory paradigm on a physically distributed system [Keleher et al., 1994; Li and Hudak, 1989]. Parallel programming in this model is easy, since the DSM is a natural generalization of sequential programming. Furthermore, with a DSM it is relatively easy to parallelize sequential programs. In this model, components of an application communicate via a virtually shared memory. Local and remote data accesses are carried out in a way transparent to the programmer, serviced by the underlying DSM mechanism. This makes DSM applications both easier to develop and more portable (across DSM architectures) than programs that use explicit message passing. In particular, metacomputing environments which exhibit virtually shared memory (and may consist of the cooperation of large suites of various machines and resources) are at the same level of abstraction as multiprocessor machines with physically shared memory. In fact, both programming paradigms are at the same level of abstraction as that of a multithreaded uniprocessor machine.
With the rapid growth of the popularity and availability of SMPs, it is expected that more users will attempt to utilize the power of their machines by parallelizing their applications. This will lead to a growing set of available parallel applications. These applications will assume the convenient programming paradigm provided by their native multiprocessor machines; namely, multithreaded parallel computing with shared memory that does not assume a dedicated machine. Given this expected large volume of applications, it is a natural step to provide this interface (including, in particular, the DSM) also on top of physically distributed metacomputing environments. Such metacomputing environments have the additional advantage over SMPs of being scalable to higher levels of parallelism.
**Why dynamic load sharing**
Load distribution is necessary in a distributed system for better utilizing its computational power. Various load balancing and load sharing algorithms appear in the literature. In general, the purpose of a load balancing operation is to split the work evenly among the processors, whereas load sharing algorithms ensure that no processor stays idle or lightly loaded while there are heavily loaded processors in the system.
Static load distribution strategies are effective when applied to problems that can be partitioned into tasks with uniform computation and communication requirements. An additional requirement of static algorithms is that the environment is homogeneous, i.e., all machines in the system should have identical hardware parameters (such as processor speed) and similar load resulting from other activities. There exist, however, a large number of problems with non-uniform and unpredictable computation and communication requirements. Also, machines in a non-dedicated network of computers (such as a metacomputing environment) will commonly differ in their speed and load state; some of them may even be unavailable at certain times. Therefore, dynamic load distribution is essential both for efficiently solving non-uniform problems and for solving uniform problems in a non-uniform environment. Thus, in a metacomputing environment, applying either dynamic load balancing or dynamic load sharing is unavoidable.
The overhead imposed by dynamic load balancing in a large distributed system may outweigh its potential benefits, for the following reasons. First, equalizing the load among all nodes in the system requires large amounts of precise, global information about the state of all the machines; for fairly large systems this violates the scalability requirement. Furthermore, when the overall system load is high, load balancing strategies will transfer work from highly overloaded hosts to other hosts that are overloaded as well. This may improve performance in some cases, e.g., when iterations of a loop are scheduled on a uniform system. However, in an environment such as a network of workstations, this strategy will only impose additional overhead, and may even cause unstable behavior.
In contrast to load balancing algorithms, dynamic load sharing strategies have the potential of achieving resource utilization that is almost as good, at a much lower cost. Due to their relaxed requirements, load sharing algorithms may avoid the need for global information, using restricted local information only. The algorithm may do very well even if machines know the status of only some of the other machines in the system. Moreover, the information may be less precise than that needed for load balancing, and may thus be exchanged less frequently. Another advantage is that load sharing algorithms can be designed so that no overhead is imposed when all nodes in the system have enough work to do. This makes load sharing strategies potentially more efficient, especially in dynamically changing environments.
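The no-overhead-when-busy property can be sketched as a simple transfer policy (the watermark constants and the `should_offload` helper are illustrative assumptions, not Millipede's actual algorithm): work is offloaded only when the local node is overloaded *and* some known remote node is underloaded, so nothing happens while every node has enough work.

```c
/* Sketch of a threshold-based load-sharing decision (hypothetical
 * constants, not the Millipede algorithm). */
#include <stdbool.h>

#define HIGH_WATERMARK 8   /* runnable threads above which a node offloads  */
#define LOW_WATERMARK  2   /* runnable threads below which a node takes work */

/* True only when the local node is overloaded and a known remote node
 * is underloaded; while every node is busy, no transfer is attempted. */
bool should_offload(int local_load, int remote_load)
{
    return local_load > HIGH_WATERMARK && remote_load < LOW_WATERMARK;
}
```

Because the decision needs only the load of the nodes a host happens to know about, the information can be local, approximate, and exchanged infrequently.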
**Why multithreaded DSM systems**
Multiple threads within a process share its virtual space. Threads are the basic entities to which the operating system allocates CPU time. On a multiprocessor system, runnable threads are distributed among the available processors. Therefore multithreading allows an application to take advantage of an SMP architecture by using all the processors on a node in a way that is transparent to the programmer, and is natural to a shared memory application. As long as the level of parallelism in the application exceeds that of the actual machine, the application need not be changed in order to utilize multiple processors. Modern operating systems balance the load among the processors of the machine when enough threads are available, and this load distribution need not be programmed in advance. In addition to better utilization of multiprocessor machines, using multiple threads also allows better load distribution over the network, when the level of parallelism provided by the application is sufficiently high.
In an environment that does not support threads, an application must be divided into multiple processes in order to be parallelized. However, the cost of communication, synchronization, and context switching between processes is much higher than between threads that share the resources of the same process. The reasons are that threads can exchange data efficiently using the shared virtual address space, that their context is small relative to that of a process, and that their working sets may overlap, so that in many cases context switching between threads does not cause swapping, while a process switch would.
Some additional overhead may be imposed by multithreading due to the need to switch contexts. This switching commonly occurs on a remote access. The associated overhead is thus justified, as it implies that the time one thread is waiting for the remote access to complete (the latency of the system) is overlapped by computation carried out by a different thread. In this way we avoid stalling the processor during remote accesses, which may be frequent in a large metacomputing environment. When kernel threads are used, this overlap of communication and computation is easy, natural and efficient, thanks to the automatic scheduling of the operating system.
Another advantage of using multiple threads is the reduced cost of migration. Migrating a thread is less expensive than migrating a process, since process migration requires transferring the entire virtual space of the process [Zayas, 1987], while migration of threads in a DSM system requires only the transfer of the memory occupied by the threads' stacks.
**Why thread migration**
Dynamic initial placement of threads solves part of the problems arising from a non-uniform problem or environment. However, additional performance improvements can be achieved by thread migration, for the following reasons:
1. Load may change quickly, causing poor utilization of processors. Therefore redistribution of the load is necessary.
2. Poor initial placement of threads may cause large communication overhead. In a DSM system this happens when threads that are executing on different hosts are using the same data. In such a case migrating these threads to one host turns the remote data accesses into local ones, thus reducing communication overhead.
For a network of personal computers there is one more reason why migration is important. A user expects to receive the full resources of the machine he is using; therefore, remotely executing threads should not degrade interactive response. To achieve this, threads should be executed remotely only on idle machines, and if a user returns before they finish, they should be stopped. A thread migration mechanism makes it possible to continue the execution of such threads on other hosts.
## 3 Designing Thread Migration in a DSM System
This section discusses problems that need to be solved when designing a user-level thread migration mechanism on a non-distributed operating system. We make several assumptions about the underlying system. We consider operating systems that support multithreading at the kernel level. Migration is only supported across machines with processors of the same architecture, running the same operating system. We assume that migration is transparent to the application; in particular, migration may occur at any moment during a thread's lifetime, and not only at predefined points (where the thread checks whether it should migrate). We also assume that a conventional compiler is used, so that no extra information about a thread's state is available.
### 3.1 Requirements From the Operating System
The following operating system services are vital in order to support a user-level implementation of combined DSM and thread migration:
- Virtual address space that is arranged identically for each instance of an application. Namely, the code and the static data reside at the same virtual addresses in each copy of a program.
- Protection of pages in virtual memory and exception handling on a protection fault.
- Interface for the creation and management of threads, including a mechanism for obtaining and updating a thread's state.
- Some mechanism for resetting the location of threads’ stacks. It should be possible to reserve a range of virtual addresses for the stack of a thread.
The reasons behind the above requirements are described below.
### 3.2 Restrictions on Thread Migration
Here we describe the thread state and the problems that arise when a thread migrates, i.e., when it is stopped on one host and resumed on another in the same state. We identify the restrictions on the state of the migrated thread that are necessary to make migration possible.
Thread state consists of global data and thread-specific information: stack contents, register values, and the operating system's internal control information. In the DSM model, global data is assumed to be allocated in shared memory, so it need not be transferred explicitly when a thread migrates (this is done by the DSM when needed, i.e., when the migrated thread attempts to access the data). On the other hand, the stack contents and the register values must be transferred at migration time.
The stack and the registers may contain pointers to code, global data or data in the stack. A potential problem is that these pointers may not have the same meaning on different hosts. Thus, it is necessary either to ensure that the pointers will retain their meaning, or to provide some translation mechanism. We are assuming that program code and static data are automatically placed by the operating system at the same virtual addresses in each copy of the program; DSM addresses also have the same meaning in each instance, so the only problem that should be treated is the pointers to data in the stack. This problem is discussed in detail in section 3.3.
Another important issue is the usage of system calls. A user cannot access the internal control information of the operating system, so it cannot be updated or transferred when a thread migrates. Therefore, a thread that owns system resources cannot migrate. For example, a thread that entered a critical section (using the corresponding system call) and has not yet left it owns the critical section object; migrating it in this state would prevent other threads from entering the critical section. Releasing the critical section on the destination host would not make much sense, because in a non-distributed operating system object handles are meaningful only on the host they were created on. It might be possible to redirect such calls to the relevant machines, but this requires redefining all the system calls and, in addition, increases the cost of remote execution.
Many system calls (especially those used for synchronization) cannot be used directly at user level in a distributed system that supports thread migration, because the location of a job may change at any time. For example, jobs cannot communicate via pipes, since they have no information about each other's location. Even if they do have such information, the location of a job may change after a message was sent to it and before it arrives. Thus, some other synchronization mechanism is necessary in such systems. Using the DSM for this purpose can be extremely inefficient; for example, implementing a critical section using shared variables inevitably involves busy-waiting and, in addition, imposes the high communication overhead associated with keeping these variables consistent.
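The busy-waiting problem can be made concrete with a minimal critical section built from a shared flag (a generic C11 spinlock sketch, not Millipede code): every failed iteration of the acquire loop burns CPU, and in a DSM each iteration would also force the page holding the flag to bounce between hosts.

```c
/* A minimal spinlock over a shared flag (C11 atomics), illustrating
 * why shared-variable critical sections busy-wait. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag in_use = ATOMIC_FLAG_INIT;

/* Non-blocking acquire: true if the lock was free and is now held. */
bool try_enter_critical(void)
{
    return !atomic_flag_test_and_set(&in_use);
}

void enter_critical(void)
{
    while (atomic_flag_test_and_set(&in_use))
        ;  /* busy-wait: wastes CPU and, in a DSM, network bandwidth */
}

void leave_critical(void)
{
    atomic_flag_clear(&in_use);
}
```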
### 3.3 Implementing Thread Migration
We now describe and discuss several approaches to the problem of transferring the stack contents of a migrating thread.
#### 3.3.1 A simple approach that fails
The approach described here is used, for instance, in the Ariadne system [Mascarenhas and Rego, 1996]. We will describe the method itself and the problems that it may cause, and try to explain why and when it works. The method is as follows. When a thread migrates, the contents of its stack is copied on the destination machine to addresses that may differ from those on the source machine. Let us call stack self-references those pointers that reside in the stack and reference data that also resides in the stack. These self-references, as well as the stack pointer and the frame pointer, have to be translated when the stack is moved to different addresses. The offset used for this purpose is the difference between the stack bottom address on the origin machine and the stack bottom address on the destination machine.
The stack contains two types of self-references: saved frame pointers and addresses of stack data. The latter may reside in the stack in several ways: as parameters to functions, as values of local variables, as values of saved registers, or as intermediate values used by the compiler. The method suggested in [Mascarenhas and Rego, 1996] is to identify such references and update them (details are not provided). Saved frame pointers are easily identifiable, so they can be updated correctly. The problem is that addresses of local data in the stack cannot be identified in the general case. They may be anywhere in the stack; the data in the stack may even be misaligned (if compiler alignment must be disabled for some reason). The only way such addresses might be updated without additional information is to prohibit the use of data types, such as char, that may cause misalignment in the stack, to examine the value of each aligned entry in the stack and update it if it may be a stack self-reference, and to hope that no updated value was a non-pointer that accidentally looked like a pointer to stack data.
Consider the following example. Suppose that the nodes in the system use perfectly synchronized clocks, and that an application orders events of some type using timestamps. A thread performs an operation \textit{get\_time} that returns the number of milliseconds that have passed since some predefined moment, and stores the obtained value in a local variable \( t \) that resides in its stack. At this point the thread is preempted, and later it migrates to another host. If the value of \( t \) happens to fall in the range of the thread's stack addresses, it will be updated as if it were a stack reference. If the thread now stores \( t \) as the timestamp of an event, the event ordering may become incorrect.
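The pitfall in the example above can be made concrete. Under the naive scheme, any aligned word whose value falls inside the stack's address range is "translated", whether or not it is actually a pointer. The sketch below (the helper names are ours, not Ariadne's) shows how an ordinary integer, such as a timestamp, can be indistinguishable from a self-reference.

```c
/* Demonstrates why heuristic stack translation is unsafe: a plain
 * integer can coincide with a valid stack address and would be
 * "fixed up" even though it is not a pointer. Illustrative helpers. */
#include <stdbool.h>
#include <stdint.h>

/* The naive rule: any aligned word whose value lies inside the stack
 * range is treated as a self-reference and shifted by the offset. */
bool looks_like_stack_pointer(uintptr_t word,
                              uintptr_t stack_lo, uintptr_t stack_hi)
{
    return word >= stack_lo && word < stack_hi;
}

bool demo_false_positive(void)
{
    int local = 0;
    uintptr_t lo = (uintptr_t)&local - 1024;
    uintptr_t hi = (uintptr_t)&local + 1024;
    /* A non-pointer value that happens to equal a stack address: */
    uintptr_t timestamp = (uintptr_t)&local + 16;
    /* The heuristic wrongly classifies it as a self-reference. */
    return looks_like_stack_pointer(timestamp, lo, hi);
}
```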
Another problem with this approach is that general purpose registers may also contain pointers to stack data. It is claimed in [Mascarenhas and Rego, 1996] that this occurs only when compiler optimizations are used, but this claim is clearly incorrect. Thus, the values of registers must be updated too, raising the same problem as that of identifying references to stack data.
We believe that, with this method, translating the state correctly in the general case is impossible without compiler support. A natural question to ask is how this method works in the systems that use it. The answer is that the probability of correct operation is high, provided that only aligned data is used, that migration is initiated by the migrating thread itself (thus eliminating the problem of temporary addresses in registers), and that the stack contains few non-pointer values that fall in the stack address range. However, these limitations do not guarantee the correctness of the state translation in the general case.
#### 3.3.2 A popular approach
Since translating the pointers is impossible without extensive compiler support, it is necessary to ensure that the pointers will retain their meaning after migration. To achieve this, the segment of virtual memory occupied by the stack on one host is reserved for it on all other hosts, so that the stack contents can be copied to the same addresses when a thread migrates.
The popular method (incorporated, e.g., in [Chase et al., 1989; Casas et al., 1994; Dubrovski, 1996]) for reserving memory for stacks is as follows. A region of virtual memory starting at a predefined location is reserved for the threads' stacks on every host; each thread is assigned a unique identification number that is used to find the thread's slot in the stack region. A newly created thread is forced to use the proper slot as its stack. Moreover, this slot can be allocated from the DSM, so the stack need not be explicitly transferred; it will be transparently brought over by the DSM mechanism when needed. This method is very easy to implement when user-level threads are used, since the programmer then has control over the locations of thread stacks. With kernel-level threads the situation is more difficult, since in this case the stacks are usually allocated by the operating system. This problem may be solved in the following way: the register context of a newly created thread is changed so that the thread uses the proper slot instead of its original stack; this is performed before the thread starts executing and before any values are written into the stack.
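The slot scheme amounts to a fixed mapping from thread identifiers to stack addresses that is identical on every host, which is exactly why stack self-references keep their meaning after migration. A sketch, with illustrative base and slot-size constants:

```c
/* Fixed-slot stack layout: a region at a predefined virtual address is
 * divided into equal slots, and thread i always stacks in slot i on
 * every host. The constants below are illustrative assumptions. */
#include <stdint.h>

#define STACK_REGION_BASE 0x40000000u  /* same on every host (assumed) */
#define STACK_SLOT_SIZE   0x00100000u  /* 1 MB reserved per thread stack */

/* Maps a thread's unique id to the base address of its stack slot;
 * identical on all hosts, so stack pointers survive migration. */
uintptr_t stack_slot(unsigned thread_id)
{
    return STACK_REGION_BASE + (uintptr_t)thread_id * STACK_SLOT_SIZE;
}
```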
This method can be used in many systems; however, it has a serious disadvantage. Namely, it is based on the assumption that the operating system's behavior does not depend on the initial location of thread stacks. This is not the case for some existing systems; for example, the Windows-NT operating system checks the validity of a thread's stack pointer in certain cases, and if it decides that the stack pointer is illegal it simply terminates the program. It will certainly decide so if a thread uses a stack at a location other than the original one (registered by the operating system). Moreover, even if the assumption above holds for some operating system, it may be violated in future versions. Thus, this approach lacks portability.
#### 3.3.3 Our approach
We solve the problem described above by using stacks allocated by the operating system while ensuring that these stacks will occupy the same addresses on all hosts.
A user application defines blocks of code that can be executed in parallel. These blocks are called jobs. The jobs are executed by separate threads. Instead of creating a thread each time a new job is spawned in a user program, a predefined number of threads called workers are used to receive jobs and execute them. The workers are created on each host at initialization time and run until the application completes. Since the virtual space of all copies is initially arranged identically and all instances perform their initialization in the same way, the copies of the same worker running on different hosts get their stacks at the same addresses. In this way the addresses are reserved for the stacks. A job that was already started by worker $i$ can be executed on any copy of this worker, i.e., on worker $i$ at any other host. To make sure that migration is always possible, at most one copy of each worker is executing a job at any given time. All idle workers are suspended.
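The worker scheme can be sketched with a pthreads-style pool (a simplified illustration with a one-slot job queue and hypothetical names, not the actual Millipede implementation): workers are created once at startup, idle workers block on a condition variable, and each job is handed to an existing worker instead of spawning a fresh thread, so worker stacks keep their initial, identical addresses.

```c
/* Worker-pool sketch: threads are created once and reused for jobs. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef void (*job_fn)(void *);

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static job_fn pending_job;   /* one-slot "queue", for simplicity */
static void  *pending_arg;
static bool   shutting_down;
static int    jobs_done;

/* A worker loops forever: idle workers are suspended on the condition
 * variable, and each arriving job runs on an existing thread. */
static void *worker(void *unused)
{
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (pending_job == NULL && !shutting_down)
            pthread_cond_wait(&cond, &lock);
        if (pending_job == NULL) {        /* shutting down, nothing left */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        job_fn job = pending_job;
        void  *arg = pending_arg;
        pending_job = NULL;
        pthread_mutex_unlock(&lock);
        job(arg);                         /* execute the user job */
    }
}

static void demo_job(void *arg) { (void)arg; jobs_done++; }

/* Creates one worker, hands it one job, then shuts the pool down;
 * a pending job is always finished before the worker exits. */
int run_demo(void)
{
    pthread_t w;
    pthread_create(&w, NULL, worker, NULL);

    pthread_mutex_lock(&lock);
    pending_job = demo_job;               /* submit the job */
    pending_arg = NULL;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);

    pthread_mutex_lock(&lock);
    shutting_down = true;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);

    pthread_join(w, NULL);
    return jobs_done;
}
```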
As with the previous approach, the number of threads that can be created simultaneously on all nodes in the system is limited, since the threads share a single address space across all hosts. If this limit is too low, a single application will not benefit from a massively parallel architecture. However, this problem may disappear when 64-bit architectures come into use, since the limit on the number of threads will then be high enough.
## 4 Architecture of the MILLIPEDE System
In this section we give a brief overview of the MILLIPEDE DSM system and its relation to the MILLIPEDE thread migration mechanism. MILLIPEDE is a user-level implementation of a multithreaded DSM system with transparent page and job migration. The current implementation of MILLIPEDE runs on the Windows-NT operating system and employs our proposed design for thread migration.
### 4.1 Assumptions
MILLIPEDE was designed for the following type of environment and applications:
- **Homogeneous environment.** A network consisting of machines with processors of the same architecture is assumed, so that the representation of the program’s state is the same on all machines. Processors may differ in their speed. The network may include SMP machines.
- **Coarse granularity.** The overhead associated with creating a thread and with initiating remote execution is relatively high. Therefore a thread should have a sufficient amount of computation to do in order to justify the cost of its creation or migration. Thus, we assume that the expected lifetime of a thread is relatively long.
- **Unpredictable computation and communication requirements.** Requests for the execution of one or more threads arrive at arbitrary times. No assumption is made about the memory access patterns of the threads, and no a-priori knowledge is assumed about the relative amounts of communication and computation used by the applications.
### 4.2 System Overview
Each machine in the system runs a **Millipede** Daemon: a process that is in charge of collecting and disseminating load information, of managing **Millipede** applications, and of dynamic load sharing (Figure 1).
The **Millipede** package includes two libraries: the DSM and the MGS. The DSM library provides an interface for allocating distributed shared memory and keeping it consistent; it supports various memory consistency protocols (see [Itzkovitz and Schuster, ]). The MGS (Migration Server) library provides an interface for creating multiple parallel activities and managing them; it controls their locations and performs the migration.
**Millipede** applications are written in a parallel language, independently of the underlying operating system, of the number of available processors, and of the data and thread locations. Currently ParC [Ben-Asher et al., 1996] (a natural parallel extension of C) is supported (porting is underway for ParFortran90, ParC++, and Java). A ParC program is precompiled; the resulting C code is compiled using a conventional C compiler and is linked with the DSM and MGS libraries. The libraries are independent of the ParC language constructs; they provide an interface that allows the implementation of a similar precompiler for any other language. It is also possible to write an application in a conventional language and use the libraries directly; this (less convenient) route may be taken if the application requires some exotic synchronization method that is not supported by the existing precompilers. The interface is further explained in [Itzkovitz et al., 1996b].
A **Millipede** application consists of instances (copies) of a user program running on different nodes in the system (see Figure 2). If a node is an SMP, all available processors are used in a transparent way by a single instance of the application. Instances of an application share a single virtual space. They communicate in a location-independent way via the DSM mechanism and synchronize using the MGS primitives.
An instance of an application consists of the following parts (Figure 2):
- A pool of workers: system threads that receive user jobs and execute them.
- Memory manager: threads needed to keep the DSM consistent.
- Migration Server (MGS): threads that take part in the decisions on whether to migrate, and handle the migration of jobs to and from the host.
Figure 1: MILLIPEDE structure
Figure 2: Instance of a MILLIPEDE application
### 4.3 Relation Between the DSM and the MGS Libraries
The thread migration mechanism in MILLIPEDE is based on the assumption that all non-local data used by a thread resides in the DSM. Accesses to the DSM are location-independent; therefore, when a thread migrates, only its stack and context need to be transferred.
The MGS collects information provided by the DSM in order to make migration decisions, and in some cases it also affects the decisions of the DSM mechanism, as described below. The DSM mechanism passes information about remote page accesses to the Migration Server. The MGS uses this information to determine whether threads should be redistributed to decrease communication. In some cases the MGS may affect the behavior of the DSM by advising it to lock a page on the local host for a short time. In this way it is possible to stabilize the system when remote data accesses are causing a high communication overhead but thread migration is not possible.
The MGS also uses the DSM mechanism to store part of the necessary information. The MGS of each instance should keep track of the location of each running thread. Since a thread may migrate several times, keeping this information consistent on each host may be expensive. We solve this problem simply by using the DSM to store the threads’ locations.
4.4 Thread Migration in MILLIPEDE
4.4.1 Migration policy
Thread migration in MILLIPEDE is transparent to an application. A thread may be suspended at almost any moment and resumed on another host. Thread migration occurs in the following cases:
- An overloaded node sends work to an underloaded one to decrease load imbalance.
- Threads that are causing high communication overhead are brought together.
- Remote threads are evicted by the machine when a native user starts working on it.
MILLIPEDE uses the history of remote page accesses for making migration decisions, where the objective is to minimize the amount of communication. The MGS “learns” about the communication pattern of the threads by recording remote page accesses. What makes things interesting is that – for performance reasons – information about local accesses is not recorded. Thus the knowledge about the communication pattern is incomplete, and incorrect decisions may be taken. For example, if all the threads that frequently access the same page are running on the same host, the MGS is not (initially) aware of it, so it may choose one of these threads for migration to another host. This is a poor decision, since it causes the page to be transferred repeatedly between the hosts (a page ping-pong). However, information about this page then becomes available, making it possible to correct the decision and to retain it for improving future decisions. A detailed description of the algorithms used to decide when migration takes place and to select the threads that migrate can be found in [Schuster and Shalev (Wolfovich), 1997].
4.4.2 Migration implementation
Thread migration is implemented at user level in Windows NT, using the standard Win32 API. The same implementation may also be used in the Windows 95 environment. As explained in Section 3.3.3, a pool of workers is used, where workers are threads that receive user jobs and execute them (Figure 2). The workers are created in each instance of an application when the MGS library performs its initialization; they run until the application completes. The copies of the same worker running on different hosts get their stacks at the same addresses (Figure 3); therefore a job that was started by worker i can be resumed on any copy of this worker, i.e., on worker i on any other host. To make sure that migration is always possible, at most one copy of each worker is executing a job at any given time. All idle workers are suspended.
The problem of using system calls is solved by providing a location-independent interface and by migrating only the jobs that do not own operating system resources and are executing user-level code, so that their state can be simulated on another host. The details follow.
Jobs are not allowed to use system calls explicitly unless they notify the MGS. Suppose a job wants to display some data in a graphic window. Then it cannot migrate from the moment it starts creating the window until it finishes closing it. The MGS must be informed about this; otherwise it may choose this job as a candidate for migration. Therefore, before it performs location-dependent activities, a job must notify the Migration Server. The MGS library provides functions to disable/enable migration. These functions may be used at the language-implementation level to prevent migration while location-sensitive code is executing. Note that a typical computation-intensive application (the most natural candidate for porting to MILLIPEDE) will rarely need to use these functions explicitly.
As was shown in Section 3.2, jobs cannot synchronize using the operating system interface. Therefore the MGS provides a general mechanism for inter-mobile-job communication, MJEC (MILLIPEDE Job Event Control), which is described in detail in [Itzkovitz et al., 1996b]. MJEC solves the problem of obtaining job locations by storing them in a shared array (residing in the DSM). MJEC can be used to implement all the common synchronization protocols (semaphores, barriers, condition variables, monitors, etc.) in a location-independent way. Together with some basic interface functions for creating and managing jobs, the interface supplied by MJEC is flexible and powerful. It is designed to support the convenient implementation of various parallel languages, where the implementation is independent of the operating system and of location issues (see [Itzkovitz et al., 1996b]).
The global design of MILLIPEDE makes it possible for the MGS to transfer a job in an extremely simple way. The Migration Server of the sender instance suspends a job and, if migration is enabled for this job, sends its worker id, context, and the contents of its stack to the Migration Server of the receiver instance; otherwise it resumes the job locally. The design ensures that the proper worker on the receiver instance is idle, and that its stack resides at the same addresses as on the sender instance. Thus, the Migration Server of the receiver instance simply copies the stack and the context of the job to the proper worker and resumes it.
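The handshake above can be sketched schematically. The message fields (worker id, context, stack contents) follow the text; the class names and the in-memory "transport" below are illustrative stand-ins, not MILLIPEDE's actual Win32 implementation.

```python
# Schematic sketch of the migration handshake: suspend, check the
# migration-enabled flag, then ship (worker id, context, stack).

class Job:
    def __init__(self, worker_id, context, stack, migration_enabled=True):
        self.worker_id = worker_id
        self.context = context
        self.stack = stack
        self.migration_enabled = migration_enabled
        self.running = True

class MigrationServer:
    def __init__(self):
        self.workers = {}              # worker_id -> (context, stack)

    def receive(self, worker_id, context, stack):
        # the matching worker's stack sits at the same addresses on the
        # receiver, so "resuming" is just installing the copied state
        self.workers[worker_id] = (context, stack)

def migrate(job, sender, receiver):
    job.running = False                # suspend the job
    if not job.migration_enabled:
        job.running = True             # location-sensitive: resume locally
        return False
    receiver.receive(job.worker_id, job.context, job.stack)
    return True
```

A job whose migration is disabled (e.g., while it owns a graphic window) is simply resumed locally, mirroring the description in Section 4.4.2.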
Figure 3: An example of the virtual memory of a MILLIPEDE application running on 3 hosts. Job 1 is executed on instance 0; Job 2 is executed on instance 2; worker N is free.
4.5 MILLIPEDE Daemons
MILLIPEDE daemons are in charge of the dynamic load sharing. They collect and disseminate load information, identify idle workstations and distribute the MILLIPEDE applications over these machines.
A MILLIPEDE daemon consists of the following modules.
**Idle Detector.** This module checks if the host is idle, i.e., it is not used by its owner interactively, and the load caused by non-MILLIPEDE background processes is low. Each time the host becomes idle or becomes non-idle the Idle Detector notifies the Eviction Server.
**Eviction Server.** This module initiates and stops the eviction of foreign applications from the host. It uses information maintained by the Application Info Manager and notifies each local instance of a foreign application when it should start or stop the eviction of its local jobs.
**Application Info Manager.** The Application Info Manager collects administrative information about each MILLIPEDE application running on the host, such as its unique identifier, instance identifier, master host id, system process handle, and so on.
**Local Load Info Manager.** It collects load information about each application running on the host, such as number of local jobs and number of mobile local jobs. It also determines the global load state of the local host.
**Masters.** A separate manager called Master is created for each new application that was started locally. The master is in charge of distributing its application over the network.
**Master Info Manager.** This module collects master information about each application that was started locally, e.g., the status of the application on each host. It also collects global load information about all hosts in the system. The information maintained by this module is used only by masters, therefore the module is activated only if there exist applications that were started locally, so that there are masters running locally.
**Communication.** This module is used for communication with other Daemons and local Migration Servers of MILLIPEDE applications.
4.6 Parallelism vs. Communication in MILLIPEDE
There are several aspects of the parallelism-communication tradeoff in the MILLIPEDE system. MILLIPEDE supports running multiple applications simultaneously. Our objective is to compromise between maximum parallelism and minimum communication. Since different applications are not communicating, they should run on different hosts whenever possible. On the other hand, the communication between the components of the application should be minimized without causing load imbalance.
The control over parallelism and communication is exercised cooperatively by the Daemons and the Migration Servers in the following way. The Daemons distribute the applications over the network and determine the initial number of jobs of each application on each host. They strive to find an optimal assignment of hosts to applications, that is, to achieve sufficient load sharing using a minimal number of application instances.
The mission of the Migration Servers is to optimize the communication within the application with respect to the decisions of the Daemons. That is, the Migration Servers try to minimize the amount of communication caused by the DSM mechanism without breaking the load balance achieved by the Daemons.
The algorithms used by the Migration Servers in order to minimize the communication are described in [Schuster and Shalev (Wolfovich), 1997]. In the following section we describe in detail the algorithms used by the Daemons to distribute an application over the network.
5 Distributing an application
The daemons are in charge of determining the set of hosts on which an application executes. The objective is to run different applications on different hosts whenever possible, since different applications do not communicate, while the components of the same application do. Therefore, if an underloaded host already runs a certain application, the algorithm tries to send it additional jobs of the same application. Only if this is not possible, or if the host is not executing any application, may a new application instance be started on it.
Each application is executed on a subset of the available hosts. Initially only the main copy of the application is created; if, as a result, the local host becomes overloaded and the overall load of the system is sufficiently low, additional copies of the application are eventually created on underloaded hosts. The reverse process is initiated when the load decreases so that more than a single underloaded host is running the application. In this case two such hosts are chosen; the application copy on one of them is forced to migrate its jobs to the other copy and is then disabled, making the machine available to another application.
5.1 Information Policy
In order to achieve speedups, only idle or slightly loaded hosts should be used for remote execution. Therefore, a certain amount of global information about the hosts' load states is needed to decrease the number of incorrect decisions. However, maintaining exact information about all hosts in the system is extremely expensive. Therefore each host reports only significant changes in its load state to its peers. In addition, since the decisions on distributing an application are made by its master, only master hosts need the global load information. Thus, each host that becomes a master or stops being a master reports this change to all hosts in the system; each host maintains a list of all master hosts and reports all significant changes in its load state to them. Since coarse-grain applications are assumed, the state of the hosts is expected to change infrequently, so the overhead associated with this policy is relatively low.
5.1.1 Load indicator
The load state of a host is determined by two factors. The first is the CPU utilization by non-MILLIPEDE processes and the presence of interactive work on the machine. In order to provide the user-ownership feature, MILLIPEDE avoids using a host for remote execution if its native load (caused by non-MILLIPEDE applications) is high or if the host is used for interactive work. The second factor is the load caused by MILLIPEDE applications. Since we assume CPU-intensive applications, the load state of a host is determined by the total number of MILLIPEDE jobs running on the host. Other possible sources of information are:
- The number of jobs waiting for synchronization (e.g., when using ParC statements such as `sync`).
- The CPU time consumed by the jobs and by paging or migration of threads.
- Memory utilization.
- Network utilization.
Using these factors to determine the load would possibly provide a better estimate. However, it would impose a higher overhead, so it would not necessarily improve the speedups. Since we concentrate on other issues in this research, the problem of load indexes remains open in MILLIPEDE.
5.1.2 Host states
The state of a host is determined according to the two load factors described above. If the background load is too high or the machine's owner is working interactively on it, the machine becomes *evicting*. In this case, regardless of the second state component, namely the load of the host, the machine is not used for remote execution (this may change in future implementations). However, if a MILLIPEDE application was started locally, it may still run on both the remote and the local machines. For example, if the local machine is overloaded (i.e., there are too many local MILLIPEDE jobs), the masters of the local applications will try to migrate part of these jobs to other machines.
The host load state is determined according to the load caused by MILLIPEDE applications. Two thresholds – low and high – are used to evaluate it. We call a host *underloaded* if its load is below the low threshold, *overloaded* if its load is above the high threshold, and *normal* otherwise. The thresholds are constant for each host and depend only on its hardware parameters, such as the number of processors and their speed.
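The two-threshold classification can be sketched as follows; the state names follow the text, while the function name and the concrete threshold values are illustrative, not MILLIPEDE's actual interface.

```python
# Host load-state classification from the two per-host thresholds.
# Below the low threshold -> underloaded; above the high -> overloaded.

def host_load_state(num_jobs, low, high):
    if num_jobs < low:
        return "underloaded"
    if num_jobs > high:
        return "overloaded"
    return "normal"
```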
Since the algorithm strives to execute different applications on different hosts, the master hosts should also have some additional information. They should be able to determine which applications are running on which host, and whether they are enabled or disabled (as explained below). Thus, the state of a host is characterized also by the number and the type of applications that are using it.
5.1.3 Information dissemination
A new host receives the list of all master hosts from a host that is chosen dynamically when the system comes up. It then sends its first state message to all master hosts. Additional state updates are sent each time the host state (as defined above) changes. A state update message contains the load of the host and the list of applications using it. For each such application the message contains its type (enabled or disabled) and the corresponding number of jobs. Note that a change in the number of jobs is not reported immediately.
Rather, it is reported with the regular state updates to all masters; in addition, when a particular application crosses a load threshold, an update is sent to the master host of this application.
The number and the type of applications running on a host are not expected to change frequently; only these changes and significant load changes are reported to the master hosts. In addition, we assume that the expected job lifetime is high and that the number of workstations is not too large. Therefore the overhead imposed by the policy described above is relatively low. This overhead might be reduced further by a filtering algorithm that prevents a daemon from sending unnecessary load updates when the host load state oscillates near one of the threshold values. This optimization is not yet implemented in MILLIPEDE.
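One possible shape for such a filter (our own sketch, not part of MILLIPEDE): an update is sent only when the load crosses a threshold by at least a small margin, which damps oscillation right at the boundary.

```python
# Hypothetical oscillation filter of the kind suggested above; the
# margin-based rule and the function signature are our assumptions.

def should_report(prev_load, new_load, threshold, margin=1):
    crossed = (prev_load <= threshold) != (new_load <= threshold)
    return crossed and abs(new_load - threshold) >= margin
```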
5.2 Master Protocol
We now describe the scheduling algorithm that the masters use for distributing their applications. This algorithm decides which, and how many, jobs will be assigned to each host. The masters' common data structures on a host are updated each time the daemon receives a state update message. Then each master running on this host makes migration decisions regarding its application (if there are underloaded hosts in the system).
Let us describe the response of a master of application \(a\) which resides at host \(H_0\) to a state update message from a host \(H_1\). We denote the number of jobs belonging to the application \(a\) that run on the host \(H_j\) by \(n_j\). We denote the low and the high load thresholds of the host \(H_j\) by \(l_j\) and \(h_j\), respectively.
5.2.1 Treating an overloaded host
When the master receives a message from an overloaded host \(H_1\), it checks whether that host is running the application \(a\). If not, it takes no further action. Otherwise the master tries to initiate a transfer of all or some of \(a\)'s jobs out of \(H_1\), depending on the number of these jobs, \(n_1\).
The master makes the decision in the following way. It first determines the number of jobs to transfer, denoted \(n\), which depends on \(n_1\) and possibly on other load parameters. If \(n_1 < l_1\), the master tries to evict \(a\) from \(H_1\), i.e., it attempts to transfer all of \(a\)'s jobs from \(H_1\); if the transfer succeeds, the master disables \(a\)'s application copy on \(H_1\). In this case \(n = n_1\). Otherwise it tries to transfer excess jobs from \(H_1\); the exact number of jobs to be sent then depends both on \(n_2\) and on the load thresholds of the target host \(H_2\) (chosen as described below): \(n = \min\{n_1/2,\; h_2 - n_2\}\).
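A hedged sketch of this decision. The receiver-capacity bound (the room left before \(H_2\) reaches its high threshold) is our reading of the surrounding description, not a documented MILLIPEDE formula, and the function names are illustrative.

```python
# Sketch of the transfer-size decision for an overloaded host H1.
# n1 = a's jobs on H1, l1 = H1's low threshold,
# n2 = a's jobs on H2, h2 = H2's high threshold.

def jobs_to_transfer(n1, l1, n2, h2):
    if n1 < l1:                  # few jobs of `a` left: evict all of them
        return n1
    # otherwise send excess jobs, bounded by what H2 can still absorb
    # (capacity bound is an assumption, see lead-in)
    return min(n1 // 2, max(h2 - n2, 0))
```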
The master then looks for an underloaded host that can receive the jobs. It may create an additional copy of \(a\) on some underloaded host if this is necessary. However, its objective is to avoid creating redundant instances and to avoid executing several different applications on the same host. The master therefore looks for a preferred underloaded host using the following precedence order.
1. The hosts that have only an enabled copy of \(a\), sorted by the number of the jobs of \(a\) in increasing order.
2. The hosts having an enabled copy of \(a\) and disabled copies of some other applications, sorted in the same way.
3. The hosts having an enabled copy of \(a\) and some other applications, sorted in the same way.
4. The hosts having a disabled copy of \(a\) and nothing else.
5. The hosts having a disabled copy of \(a\) and disabled copies of some other applications.
6. The hosts that do not have any copies.
7. The hosts having only disabled copies of other applications.
8. The hosts having enabled copies of other applications.
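The eight-level precedence order above can be expressed as a rank function (smaller is preferred), with ties within a rank broken by the increasing job count of \(a\), as the list specifies. The host encoding below is an illustrative assumption.

```python
# A candidate host is summarized by the status of its copy of `a`
# ("enabled", "disabled", or None) and by the copies of other
# applications it holds ("none", "disabled", "enabled").

RANK = {
    ("enabled",  "none"):     1,
    ("enabled",  "disabled"): 2,
    ("enabled",  "enabled"):  3,
    ("disabled", "none"):     4,
    ("disabled", "disabled"): 5,
    (None,       "none"):     6,
    (None,       "disabled"): 7,
    (None,       "enabled"):  8,
}

def receiver_key(host):
    a_copy, others, n_jobs = host
    # combinations the text does not rank fall to the back (rank 9)
    return (RANK.get((a_copy, others), 9), n_jobs)

# picking the preferred receiver is then just a min() over candidates
```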
The master then asks the chosen host \(H_2\) whether it can receive \(n\) jobs. \(H_2\) may refuse if another master has already offered it enough jobs, or if it has become underloaded only recently and has no copy of \(a\) but holds copies of other applications. In the latter case it expects job offers from the masters of the existing applications. If it remains underloaded long enough without receiving such offers, it assumes that it will not receive jobs from the existing applications, and consequently it may agree to create a copy of an additional application.
If the underloaded host refuses, the master asks the next underloaded host. For a certain period of time the master remembers that the host refused to receive work, assuming that the reasons for the refusal do not change frequently. This allows the master to avoid sending useless work offers to such hosts the next time it looks for an underloaded host.
If the selected underloaded host agrees to receive work, the master checks whether \(H_1\) agrees to send the jobs out. The reason for this extra check is to avoid useless migrations in the case where several applications use \(H_1\) and all the corresponding masters decide to send their jobs out of \(H_1\). Without a negotiation with \(H_1\), these masters' decisions could cause \(H_1\) to become underloaded, so that immediately after the jobs are transferred out of \(H_1\) the master would start collecting jobs back to \(H_1\).
Consequently, the master asks \(H_1\) if it can send \(n\) jobs. The daemon on \(H_1\) checks whether it still has a sufficient number of jobs. If sending \(n\) jobs would make it underloaded, it refuses, and the master cancels its offer to \(H_2\). Otherwise it agrees; in this case the master sends \(H_2\) a copy of \(a\) (or enables an existing one if needed) and sends \(H_1\) a final request to send work. Upon receipt of this request the daemon on \(H_1\) asks the Migration Server of \(a\)'s local copy to send \(n\) jobs to \(H_2\). The Migration Server selects the jobs with respect to their remote access history, striving to achieve maximal locality of DSM accesses. The general method for the selection process, as well as its implementation in MILLIPEDE, is described in [Schuster and Shalev (Wolfovich), 1997]. The MGS then transfers the selected jobs to \(H_2\) and notifies the local daemon, which notifies the master. If the application is evicted from \(H_1\), the local daemon also disables its copy.
5.2.2 Treating an underloaded host
When the master on $H_0$ receives a message from an underloaded host $H_1$, it looks for an overloaded host $H_2$ that has an enabled copy of $a$. If there exists such a host, the master tries to transfer the excess jobs from $H_2$ to $H_1$ in the same way as in the previous section.
Otherwise, all the hosts executing \(a\) are in either an underloaded or a normal load state. The master checks whether \(H_1\) is running the application \(a\). If so, it tries to find another underloaded host \(H_2\) that is executing \(a\), attempting to merge the two copies onto a single host (thus eliminating unnecessary communication between them). It decides which of the two hosts will receive the work from the other by using the same precedence order described in Section 5.2.1. The other host then evicts \(a\) to the chosen host. The rejection mechanism is used here too, meaning that both the sender and the receiver may reject the master's request due to recent changes in their load state.
6 Performance evaluation
In this section we present the results of our experiments with the MILLIPEDE system. We show that thread migration can be used to improve load balancing and to reduce the amount of communication. We executed the tests on six x86 Pentium workstations running the Windows NT operating system, connected by 100 Mbps Ethernet. The workstations have different amounts of physical memory and different processor speeds. The average latency of thread migration in this environment is 70 ms, while the latency of a message of “zero” length (for example, an MJEC message or a page lookup message) is 2 ms.
6.1 TSP problem
The Traveling Salesman Problem (TSP) is an example of an NP-hard optimization problem in graph theory. Given a connected graph with weighted edges, the shortest Hamiltonian path should be found, i.e., a path traveling through all the nodes of the graph such that the sum of the edge weights along the path is minimal. We give here a brief description of the parallel algorithm used to find an exact solution to this problem. Basically, the algorithm scans a search tree that has a node for each partial path of any Hamiltonian path starting in a given node of the input graph, see Figure 4. More precisely, for a node representing the path $i_0 \rightarrow i_1 \rightarrow \cdots \rightarrow i_k$, its children are the nodes representing all paths of the form $i_0 \rightarrow i_1 \rightarrow \cdots \rightarrow i_k \rightarrow s$, where $s$ is different from $i_1, \cdots, i_k$. Thus, each leaf of the tree represents a Hamiltonian path in the input graph, and the objective is to find the leaf that represents the shortest path.
In the parallel algorithm work is divided among threads in the following way. For each node $0 \rightarrow i$ its subtree is searched by $k$ threads; the sons of the node are evenly divided between the threads, so that each thread receives a set of initial paths of the form $0 \rightarrow i \rightarrow j$, where its mission is to search for the minimal path in sub-trees of these paths. Each thread performs a DFS-style search in each of its subtrees; the search is exhaustive in the worst case. In order to optimize the search, all threads use a shared variable to store the weight of the shortest path; a thread cuts off the search in a certain subtree if the weight of the
partial path at the root of that subtree is greater than the weight of the shortest path that was found so far.
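The pruned search just described can be sketched sequentially as follows. In MILLIPEDE the best-weight variable resides in the DSM and is shared by all threads; here a one-element list stands in for that shared variable, and the function names are illustrative.

```python
# DFS over partial paths with branch-and-bound pruning: a subtree is
# cut off as soon as the partial path's weight reaches the best
# complete path found so far.

def tsp_search(weights, path, visited, cost, best):
    n = len(weights)
    if cost >= best[0]:            # prune: partial path already too heavy
        return
    if len(path) == n:             # leaf: a complete Hamiltonian path
        best[0] = cost
        return
    last = path[-1]
    for s in range(n):
        if s not in visited:
            visited.add(s)
            path.append(s)
            tsp_search(weights, path, visited, cost + weights[last][s], best)
            path.pop()
            visited.remove(s)

def shortest_hamiltonian_path(weights, start=0):
    best = [float("inf")]          # stands in for the DSM-shared best weight
    tsp_search(weights, [start], {start}, 0, best)
    return best[0]
```

In the parallel version each thread runs this search over its assigned subtrees, reading and updating the shared best weight through the DSM.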
Each thread uses a certain amount of dynamically allocated memory. This memory must be allocated in the DSM to make thread migration possible. Depending on the size of the DSM allocations, some of the threads might get their memory on the same pages, which is a typical example of false sharing. We compared three variants. In the first, denoted NO-FS, false sharing is avoided by allocating more memory than necessary (the allocations are padded to fit exactly into a page). In the other two variants, \( k \) threads that search the paths starting with \( 0 \rightarrow i \) store their private data on the same page. The variant called FS uses no optimizations, whereas the other one, called OPTIMIZED-FS, uses the optimizations for data locality obtained by enabling the histories mechanism.
6.1.1 Improving Load Balancing
**Uniform input.** We now show that migrating threads can improve performance even in cases that are trivially parallelizable. We compare the execution time of the NO-FS variant of the TSP under two scheduling strategies: static and dynamic. The TSP application receives a uniform input, i.e., all paths have almost the same length. Therefore all threads have about the same amount of work, and there is almost no communication between them. The static policy is a round-robin strategy: when \( n \) threads are to be created in a system of \( m \) machines, \( n/m \) sequential threads are created on each host. The dynamic policy is our load sharing strategy. Since the environment is not uniform, thread migration improves performance by about 30%, as can be seen in Figure 5.
One might suggest improving the round-robin strategy by dividing the threads between the machines according to their respective performance. This might help, assuming that the expected execution time on each host can be accurately predicted. However, such a prediction is only approximate, even if all threads have exactly the same amount of work.
The reason is that the prediction depends, in addition to the number of processors and their speed, on the amount of available physical memory, and on the behavior of other processes that are using the machines. Certainly, no improvements to the prediction help if the amount of work in the threads is not known in advance. As an example, we examine below an extreme case that cannot be treated by the static policy.
**Unpredictable computation amount.** Here we compare the static and dynamic policies applied to the TSP with an extremely non-uniform input: the paths searched by the first six threads (out of a total of 36 created threads) all have the same length, while the other paths begin with heavy edges, so the threads that search them terminate almost immediately due to pruning. Thus, when the static policy is applied in a system with at most six machines, all jobs that do not terminate immediately are scheduled to run on the same machine, while all other machines become idle shortly after initiation. In contrast, dynamic load sharing keeps all the machines utilized. Figure 6 shows that with the static policy there is no speedup at all, while the dynamic policy provides a speedup close to linear.
6.1.2 Optimizing locality
Improper placement of communicating threads can impose a huge communication overhead and significantly increase execution time. Table 7 summarizes the results of running the TSP algorithm on six machines with different numbers of threads contending for the same page. The table shows a dramatic reduction in network traffic when the optimizations for locality are applied: the number of DSM-related messages, which reflects the miss ratio (the ratio of remote accesses to the total number of accesses), drops by a factor of 30 to 40! Note that the number of extra messages added by the locality optimization mechanism itself is negligible compared to the improvement in the number of DSM-related messages.
7 Discussion
In this work we have shown that thread migration is one of the major capabilities of DSM systems. Systems that do not support migration of threads (or processes) may suffer poor performance due to load imbalance and due to improper placement of threads in the distributed environment.
Implementing preemptive thread migration is not a simple task. Several approaches to implementing thread migration have been introduced in the literature, but they are generally inappropriate for most operating systems, and in some cases they are even incorrect. Because portability is very important for DSM systems, it is vital to employ a solution that is guaranteed to be portable to various operating systems.
In this paper we proposed a correct system design for thread migration. We described the common approaches and discussed their advantages and flaws. Our proposed solution is the least demanding of the underlying operating system and is thus the most appropriate for implementation on a large variety of operating systems, making the DSM system portable to many platforms.
To validate our design, we implemented it in the MILLIPEDE DSM system under Windows NT. The MILLIPEDE system allows better utilization of a network of single-processor and multiprocessor machines. It provides a simple interface for multithreaded concurrent programming on such a network, so that the network is viewed by the application as a single multiprocessor machine with shared memory.
Transparent thread migration is used in MILLIPEDE to provide dynamic load sharing while decreasing communication overhead by improving the locality of data accesses in an application-transparent way. In addition, migration is used to capture idle machines, and to preserve users' ownership of their personal machines by evicting remote threads and data from a machine when its owner starts using it.
References
Figure 5: NO-FS TSP with uniform threads in a non-uniform environment, for static and dynamic scheduling.
Figure 6: NO-FS TSP with static and dynamic scheduling for extremely non-uniform input.
Table 7: Statistics on applying the locality optimization to the TSP application running on 6 hosts with false sharing, for different $k$ (the number of jobs contending for a page). Applying the locality optimizations dramatically decreases the number of DSM messages (page lookup and transfer). The added overhead imposed by the ping-pong treatment mechanism and the increased number of thread migrations is negligible.
| $k$ | Optimized? | Number of Messages | Ping-pong Treatment Messages | Thread Migrations | Execution Time (sec) |
|---|---|---|---|---|---|
| 2 | Yes | 5100 | 290 | 68 | 645 |
| 2 | No | 176120 | 0 | 23 | 1020 |
| 3 | Yes | 4080 | 279 | 87 | 620 |
| 3 | No | 160460 | 0 | 32 | 1514 |
| 4 | Yes | 5060 | 343 | 99 | 690 |
| 4 | No | 155540 | 0 | 44 | 1515 |
| 5 | Yes | 6160 | 443 | 139 | 700 |
| 5 | No | 162505 | 0 | 55 | 1442 |
Towards a Substitution Tree Based Index for Higher-order Resolution Theorem Provers
Tomer Libal, Alexander Steen
To cite this version:
HAL Id: hal-01424749
https://hal.archives-ouvertes.fr/hal-01424749
Submitted on 2 Jan 2017
Towards a Substitution Tree Based Index for Higher-order Resolution Theorem Provers
Tomer Libal
Inria Saclay
Palaiseau, France
tomer.libal@inria.fr
Alexander Steen
Freie Universität Berlin
Berlin, Germany
a.steen@fu-berlin.de
Abstract
One of the reasons that forward search methods, like resolution, are efficient in practice is their ability to utilize many optimization techniques. One such technique is subsumption, and one way of utilizing subsumption efficiently is to index terms using substitution trees. In this paper we describe an attempt to extend such indexes for use in higher-order resolution theorem provers. Our attempt handles two difficulties which arise when extending the indexes to higher-order logic. The first difficulty is the need for higher-order anti-unification. The second difficulty is the closure of clauses under associativity and commutativity. We present some techniques which attempt to solve these two problems.
1 Introduction
Term indexing is a popular technique for speeding up computations in a broad variety of tools in computational logic, ranging from resolution-based theorem provers [17] to interpreters of programming languages such as Prolog. A key aspect of optimizing such tools is the use of sophisticated methods for redundancy elimination. One such method for clause-based indexes is subsumption. Forward subsumption allows discarding clauses which are less general than clauses already processed, whereas backward subsumption allows removing stored clauses if a more general one is encountered. The idea behind both is that using more general clauses drastically reduces the search space while still sufficing to complete certain tasks [9]. Due to the importance of subsumption in practical applications, there is a variety of indexing techniques that support efficient subsumption queries. We refer the reader to the survey on term indexing by Ramakrishnan et al. [16].
In a similar way to tools for first-order logic, higher-order logic tools can also benefit from term indexing for efficient subsumption. Unfortunately, higher-order matching, which is a required technique for determining subsumption, is much more complex than its first-order counterpart [21]. We know of only a few approaches to higher-order term indexing: Theiß and Benzmüller designed a term index [22] for the LEO-II higher-order resolution theorem prover [3]. However, their approach focuses on efficient low-level term (traversal) operations such as $\beta$-normalization and occur-checks. Additionally, term sharing is employed in order to reduce space consumption and to allow constant-time equality checks between $\alpha$-equivalent terms.
Copyright © 2016 for this paper by its authors. Copying permitted for private and academic purposes.

The closest to our approach is the higher-order term indexing technique by Pientka [14]. Both approaches are based on substitution trees [6] and on propagating to the leaves those components of the indexed terms for which there is no efficient unification algorithm. One difference is the assumed term representation: in order to efficiently manage more complex type systems, such as dependent types, Pientka has chosen to represent terms using Contextual Modal Type Theory [12]. This technique allows for an elegant treatment of dependent types but requires a specially designed unification procedure [15]. Our use of the standard simple type theory [4] allows us to use type-independent unification algorithms [2]. Another difference is the technique chosen for propagating the non-pattern content: while we use an efficient algorithm for computing the pattern generalization of two non-pattern terms [2] (also made possible by our choice of term representation), Pientka’s technique is based on a normalizing pre-processing step [15] which computes the non-pattern content as additional constraints and might incur an additional cost. A somewhat less crucial difference is the treatment of associativity and commutativity (AC). Since unification algorithms which incorporate this theory do not exist for higher-order logic, we deal with the problem by integrating the treatment of AC into the operations of the index. The primary target of Pientka’s index is not terms in clausal form, and therefore this problem is not treated there. It should not be too complex, though, to integrate the ideas presented in this paper into Pientka’s index in order to achieve the same AC treatment. Other obvious differences include the maturity of Pientka’s index, its implementation within the Twelf system [13], the experimental results reported for it, and its rigorous presentation. Our index is yet to be implemented and experimented with, and we have yet to provide a fully rigorous presentation. Nevertheless, we believe that our approach might be more suitable for indexing higher-order terms in general higher-order theorem provers.
This claim is, of course, to be justified by the implementation and experimentation of both indexing techniques within Leo-III [24].
In this paper we hence present a higher-order indexing technique for the theorem prover Leo-III which is also based on substitution trees. The differences just discussed, and especially the treatment of non-pattern terms, suggest that forward and backward subsumption operations can be handled efficiently by our approach.
The main difficulty which arises when trying to store clauses in an index is that the index must be closed under the AC properties of clauses. In addition, when computing subsumed clauses, the number of literals in the clause is not as important as the fact that each literal in the subsuming clause must generalize a literal in the subsumed one. As long as we only treat unit clauses, no special treatment is required and the technique presented in [7] can be safely extended to deal with higher-order terms. When dealing with multi-literal clauses, though, one has to find a balance between optimizing the size of the index and optimizing the operations over the index. As can be seen in [23], one cannot avoid an expensive backtracking search.
We suggest a different approach which is intended to take advantage of searching the index in parallel. To achieve this, we plan to store each literal independently in the index and, on subsumption calls, to retrieve and compare the literals in parallel. Due to the fact that Leo-III is based on a multi-agent architecture [20], we hope that such an approach will be efficient in practice.
Since we are using substitution trees, we are still faced with the problem of using the costly higher-order unification and anti-unification procedures. In the presented work we avoid this problem by using a variant of higher-order anti-unification [2] which computes pattern [11] substitutions. The use of this algorithm allows us to maintain a substitution tree all of whose inner nodes are pattern substitutions, on which unification and anti-unification are efficient.
The definitions and properties of our index are still being investigated and some of them are not yet formally proved. Nevertheless, we hope that the arguments and examples will convince the reader of the potential of our approach for indexing arbitrary higher-order clauses and for supporting the forward and backward subsumption functions over this index.
In the next section we present the necessary definitions required for understanding this paper as well as the basic ideas of substitution trees and higher-order anti-unification. Following this section is the main part of the paper, in which we introduce our notion of higher-order substitution trees and define the insert, delete, retrieve and subsumption functions. We close our paper with a conclusion which also describes potential future work.
2 Preliminaries
In this section we present the logical language that is used throughout the paper. The language is a version of Church’s simple theory of types [4] with an $\eta$-conversion rule as presented in [1] and with implicit $\alpha$-conversions. Unless stated otherwise, all terms are implicitly converted into $\beta$-normal and $\eta$-expanded form.
Let $\Sigma_o$ be a set of basic types, then the set of types $\Sigma$ is generated by $\Sigma := \Sigma_o \mid \Sigma \to \Sigma$. Let $\mathcal{C}$ be a signature of function symbols and let $\mathcal{V}$ be a countably infinite set of variable symbols. In our definitions and examples the symbols $u, w, x, y, z, W, X, Y, Z \in \mathcal{V}$, and $f, g, h, k, a, b, c \in \mathcal{C}$ are used. We sometimes use subscripts and
superscripts as well. The set $\text{Term}^\alpha$ of terms of type $\alpha$ is generated by $\text{Term}^\alpha := f^\alpha \mid x^\alpha \mid (\lambda x^\beta.\text{Term}^\gamma) \mid (\text{Term}^{\beta \to \alpha}\ \text{Term}^\beta)$ where $f \in \mathfrak{C}, x \in \mathfrak{V}$ and $\alpha \in \mathfrak{T}$ (in the abstraction, $\alpha = \beta \to \gamma$). Applications throughout the paper associate to the left. We will sometimes omit brackets when the meaning is clear. We will also normally omit typing information when it is not crucial for the correctness of the results. $\tau(t^\alpha) = \alpha$ refers to the type of a term. The set $\text{Term}$ denotes the set of all terms. Positions are defined as usual; we denote the subterm of $t$ at position $p$ by $t_p$. Bound and free variables are defined as usual. Given a term $t$, we denote by $\text{hd}(t)$ its head symbol.
Substitutions and their composition ($\circ$) are defined as usual, namely $(\sigma \circ \theta)(X) = \theta(\sigma(X))$. The domain and codomain of a substitution $\sigma$ are denoted by $\text{dom}(\sigma)$ and $\text{codom}(\sigma)$. The image of $\sigma$ is the set of all variables in $\text{codom}(\sigma)$. We denote by $\sigma|_{W}$ the substitution obtained from substitution $\sigma$ by restricting its domain to variables in $W$. We denote by $\sigma[X \mapsto t]$ the substitution obtained from $\sigma$ by mapping $X$ to $t$, where $X$ might already exist in the domain of $\sigma$. The join of two substitutions $\sigma$ and $\theta$ is denoted $\sigma \bullet \theta$ (cf. [7]). We extend the application of substitutions to terms in the usual way and denote it by postfix notation. Variable capture is avoided by implicitly renaming variables to fresh names upon binding. A substitution $\sigma$ is more general than a substitution $\theta$, denoted $\sigma \leq \theta$, if there is a substitution $\delta$ such that $\sigma \circ \delta = \theta$. Similarly, a substitution $\sigma$ is the most specific generalization of substitutions $\tau$ and $\theta$ if $\sigma \leq \tau, \sigma \leq \theta$ and there is no other substitution $\delta$ fulfilling these properties such that $\delta \geq \sigma$. A substitution $\sigma$ matches a substitution $\tau$ if there is a substitution $\delta$ such that $\delta \circ \tau = \sigma$. A complete set of matchers between substitutions $\sigma$ and $\tau$ is a set $\mathcal{A}$ of substitutions such that $\mathcal{A}$ contains all the matching substitutions between $\sigma$ and $\tau$. A substitution $\sigma$ is a renaming substitution if $\text{codom}(\sigma) \subseteq \mathfrak{V}$ and $|\text{codom}(\sigma)| = |\text{dom}(\sigma)|$. The predicate $\text{rename}(\sigma)$ is true iff $\sigma$ is a renaming substitution. We denote the inverse of a renaming substitution $\sigma$ by $\text{inverse}(\sigma)$.
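As a concrete illustration of these operations, the following first-order sketch represents terms as nested tuples and substitutions as Python dictionaries. All names are our own, and restricting to first-order terms sidesteps the binder handling that the higher-order setting requires.

```python
# First-order illustration of substitution application, composition and
# restriction. Terms are ('var', name) or ('app', head, (args...)).

def apply_sub(sub, t):
    """Apply the substitution `sub` (dict: variable -> term) to term `t`."""
    if t[0] == 'var':
        return sub.get(t[1], t)
    return ('app', t[1], tuple(apply_sub(sub, a) for a in t[2]))

def compose(sigma, theta):
    """(sigma o theta)(X) = theta(sigma(X)): apply sigma first, then theta."""
    out = {x: apply_sub(theta, t) for x, t in sigma.items()}
    for x, t in theta.items():      # theta also acts on vars sigma leaves alone
        out.setdefault(x, t)
    return out

def restrict(sigma, w):
    """sigma restricted to the variables in `w` (written sigma|_W above)."""
    return {x: t for x, t in sigma.items() if x in w}
```

For instance, composing $\{X \mapsto f(Y)\}$ with $\{Y \mapsto a\}$ yields $\{X \mapsto f(a), Y \mapsto a\}$, matching the defining equation $(\sigma \circ \theta)(X) = \theta(\sigma(X))$.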
2.1 Substitution Trees
This section describes substitution trees, based on those defined in [6]. In order to optimize some functions on trees, the definition in [6] uses normalized terms and substitutions. As we will see, we will insert the literals of a clause independently into the index; if we normalized them as suggested in [6], the relationship between the free variables of the different literals would be lost. The main differences between our presentation and that in [6] are therefore that we avoid normalizing terms and substitutions and, in addition, allow terms to be of arbitrary order.
Definition 1 (Substitution Trees). A substitution tree is defined inductively and is either the empty tree $\epsilon$ or the tuple $(\sigma, \Pi)$ where $\sigma$ is a substitution and $\Pi$ is a set of substitution trees such that
1. each node in the tree is either a leaf node $(\sigma, \emptyset)$ or an inner node $(\sigma, \Pi)$ with $|\Pi| \geq 2$.
2. for every branch $(\sigma_1, \Pi_1), \ldots, (\sigma_n, \Pi_n)$ in a non-empty tree we have $\text{dom}(\sigma_i) \cap (\text{dom}(\sigma_1) \cup \cdots \cup \text{dom}(\sigma_{i-1})) = \emptyset$ for all $1 < i \leq n$.
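A minimal executable rendition of this definition, under our own first-order tuple encoding of terms, makes the two conditions checkable:

```python
# Sketch of Definition 1: a node carries a substitution (dict: var -> term)
# and a set of child trees. The encoding and all names are ours.

class STree:
    def __init__(self, sub, children=()):
        self.sub = sub
        self.children = list(children)

def well_formed(node, bound=frozenset()):
    """Condition 1: inner nodes have at least two children.
    Condition 2: a node's domain is disjoint from the domains
    already bound higher up on its branch."""
    if bound & set(node.sub):
        return False                    # a variable is re-bound on the branch
    if len(node.children) == 1:
        return False                    # inner nodes need >= 2 children
    below = bound | set(node.sub)
    return all(well_formed(c, below) for c in node.children)
```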
2.2 Higher-order Anti-unification
Anti-unification denotes the problem of finding a generalization $t$ of two given terms $t_1$ and $t_2$, i.e. a term $t$ such that there exist substitutions $\sigma_1, \sigma_2$ such that $t_1 = t \sigma_1$ and $t_2 = t \sigma_2$. A key algorithm for the procedures which are described in the remainder of this paper is the higher-order anti-unification algorithm of Baumgartner et al. [2]. This algorithm differs from most higher-order unification and anti-unification procedures not only by being applicable to arbitrary (simply typed) higher-order terms, but also by efficiently computing very specific generalizations.
By very specific generalization (in contrast to the most specific one) we here mean the most specific higher-order pattern which generalizes two arbitrary higher-order terms. This pattern, however, might not be the most specific generalization of these two higher-order terms. Higher-order patterns are restricted forms of higher-order terms for which it is known that efficient unification algorithms exist [11].
Details about the higher-order pattern fragment or even the above anti-unification algorithm are not crucial for understanding this paper and are therefore omitted. It is important to note that since only most specific pattern generalizations are found, the size of the index described in this paper is not optimal. We will explain this point in more detail later.
The anti-unification algorithm of Baumgartner et al. is subsequently denoted by $\text{msg}^*$.
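The algorithm of [2] itself is beyond a short sketch, but its first-order analogue conveys the core idea behind anti-unification: descend through agreeing structure and introduce one shared fresh variable per disagreement pair. The code below is our own first-order illustration, not the algorithm $\text{msg}^*$ of [2].

```python
import itertools

# First-order anti-unification. Terms are ('var', x) or ('app', f, (args...)).
# Reusing one variable per disagreement pair makes the result the most
# specific generalization (in this first-order setting).

def apply_sub(sub, t):
    if t[0] == 'var':
        return sub.get(t[1], t)
    return ('app', t[1], tuple(apply_sub(sub, a) for a in t[2]))

def anti_unify(t1, t2):
    """Return (g, s1, s2) with apply_sub(s1, g) == t1, apply_sub(s2, g) == t2."""
    table, s1, s2 = {}, {}, {}
    fresh = itertools.count()

    def go(a, b):
        if a == b:
            return a
        if (a[0] == b[0] == 'app' and a[1] == b[1]
                and len(a[2]) == len(b[2])):
            # Same head symbol: keep it, generalize the arguments pointwise.
            return ('app', a[1], tuple(go(x, y) for x, y in zip(a[2], b[2])))
        if (a, b) not in table:        # disagreement: one shared fresh variable
            v = 'z%d' % next(fresh)
            table[(a, b)] = v
            s1[v], s2[v] = a, b
        return ('var', table[(a, b)])

    return go(t1, t2), s1, s2
```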
**Definition 2** (Algorithm $\text{msg}^*$ [2]). The algorithm $\text{msg}^*$ takes two arbitrary higher-order terms t₁ and t₂ as input and returns a higher-order pattern s and substitutions σ₁ and σ₂ such that
1. sσ₁ = t₁ and sσ₂ = t₂, and
2. there is no other higher-order pattern s′ fulfilling the above property such that there is a non-trivial substitution δ where sδ = s′.
Baumgartner et al. showed that the algorithm $\text{msg}^*$ computes a unique solution (up to renaming of free variables) and takes cubic time [2]. In this paper we are interested in a variant of this algorithm which computes substitutions rather than terms, and which is defined as follows:
**Definition 3** (Most specific pattern generalizing substitution). Given substitutions θ₁ and θ₂, the substitution σ is the most specific pattern generalizing substitution if there are substitutions τ₁ and τ₂ such that codom(σ) contains only higher-order patterns, σ ◦ τ₁ = θ₁, σ ◦ τ₂ = θ₂ and there is no substitution σ′ > σ fulfilling these properties.
An algorithm for computing the most specific pattern generalizing substitution, denoted $\text{msg}$, can be defined on top of $\text{msg}^*$.
**Definition 4** (The algorithm msg). The algorithm msg takes two substitutions θ₁, θ₂ as input and returns a triple (σ, τ₁, τ₂) with σ, τ₁, τ₂ as in Def. 3. To that end, let dom(θ₁) ∪ dom(θ₂) = {x₁, ..., xₙ}. Let (f(s₁, ..., sₙ), τ₁, τ₂) = $\text{msg}^*$(f(θ₁(x₁), ..., θ₁(xₙ)), f(θ₂(x₁), ..., θ₂(xₙ))) where f is a new function symbol of arity n. Finally, set σ := {x₁ ↦ s₁, ..., xₙ ↦ sₙ}.
**Claim 5.** Let θ₁, θ₂ be two substitutions and let (σ, τ₁, τ₂) = msg(θ₁, θ₂). Then σ is a most specific pattern generalizing substitution of θ₁ and θ₂. Also, σ is unique up to renaming of free variables.
The algorithm msg takes cubic time, hence can be used to efficiently build up a substitution tree index (cf. next section).
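Definition 4's packing trick can be made concrete with a first-order anti-unifier standing in for $\text{msg}^*$. The fresh head symbol `'$p'`, the tuple encoding of terms, and all function names below are our own assumptions:

```python
import itertools

# Lifting a term-level anti-unifier to substitutions, as in Definition 4.
# Terms are ('var', x) or ('app', f, (args...)).

def anti_unify(t1, t2):
    """First-order stand-in for msg*: (g, s1, s2) with g*s1 = t1, g*s2 = t2."""
    table, s1, s2 = {}, {}, {}
    fresh = itertools.count()

    def go(a, b):
        if a == b:
            return a
        if (a[0] == b[0] == 'app' and a[1] == b[1]
                and len(a[2]) == len(b[2])):
            return ('app', a[1], tuple(go(x, y) for x, y in zip(a[2], b[2])))
        if (a, b) not in table:
            v = 'z%d' % next(fresh)
            table[(a, b)] = v
            s1[v], s2[v] = a, b
        return ('var', table[(a, b)])

    return go(t1, t2), s1, s2

def msg(theta1, theta2):
    """Most specific generalizing substitution (sigma, tau1, tau2), cf. Def. 3/4."""
    xs = sorted(set(theta1) | set(theta2))
    # Pack theta_i(x1), ..., theta_i(xn) under a fresh n-ary head symbol '$p'.
    pack = lambda th: ('app', '$p', tuple(th.get(x, ('var', x)) for x in xs))
    g, tau1, tau2 = anti_unify(pack(theta1), pack(theta2))
    sigma = dict(zip(xs, g[2]))    # read sigma off the packed arguments
    return sigma, tau1, tau2
```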
### 3 Substitution Trees for Higher-order Clauses
In this section we will describe some modifications to first-order substitution trees which will allow us to extend them to higher-order terms.
The most obvious obstacle to extending the trees to higher-order terms is the fact that substitution trees depend on procedures for unification, anti-unification and matching. These procedures, while being both relatively efficient and unitary in classical first-order logic [10], are highly complex in higher-order logics and do not possess unique solutions any more [8, 19].
Another obstacle is the fact that, since we are targeting resolution theorem provers, the terms we are going to store, retrieve, delete and check for subsumption are not mere syntactic terms but clauses, which are closed under AC. In the first-order case, one can use dedicated unification algorithms which, although no longer unitary [5], are still feasible. In the higher-order case, due to the complex nature of even the syntactic unification procedure, one needs to find another approach.
Our solution to the first problem is to relax a core property of substitution trees and to allow substitutions in the nodes of the trees which are not least general generalizations. Towards this end, we employ the anti-unification algorithm from Section 2.2. The use of this algorithm renders our trees less optimal, as nodes may now contain more general substitutions than necessary, and therefore one child of a node may be more general than another. On the other hand, the algorithm is only cubic in time complexity and is unitary.
Our approach to the second problem is to handle the AC properties of clauses not on the anti-unification or matching level, but to encode their treatment into the retrieval, insertion, deletion and subsumption functions. We obtain this by regarding each literal of a clause as an independent higher-order term and, in addition, assigning labels that are identical for all literals of the same clause.
Classical substitution trees depend on the anti-unification algorithm for treating associativity and commutativity as well as other properties required by subsumption, such as one clause being a sub-clause of the other. If such an anti-unification algorithm for higher-order terms can be found, a simple extension of the trees in Section 2.1 can be defined which enjoys the same definitions for insert, delete and retrieval as given in [6]. This extension will also preserve the property of substitution trees that the index does not contain variants of substitutions already stored, and that the deletion function removes all variants of some input substitution.
From now on we will refer to higher-order substitution trees just as substitution trees.
Our trees will always have a root node \( \{ x_0 \mapsto x_1 \} \) and will therefore never be empty. This is done in order to simplify the insertion operation, as the root node will always be more general than any inserted term.
Example 7. We use as running example the manipulation of the index as done by Leo-III [24] when running on a variant of the surjective Cantor theorem. The first six clauses which are inserted are the following (where \( \alpha \), \( \beta \) and \( \iota \) are types):
\[
\begin{align*}
(1) &= (a^{\alpha} (b^{\beta \rightarrow \iota} u^2)) u^2 \\
(2) &= (a^{\alpha} (b^{\beta \rightarrow \iota} u_1^3)) u_2^2 (u_1^3 u_2^2) \\
(3) &= \neg (u_1^3 u_2^2) \lor (a^{\alpha} (b^{\beta \rightarrow \iota} u_1^3) u_2^2) \\
(4) &= (u_1^2 u_2^2) \lor \neg (a^{\alpha} (b^{\beta \rightarrow \iota} u_1^3) u_2^2) \\
(5) &= (a^{\alpha} (b^{\beta \rightarrow \iota} u_1^3) u_2^2) \lor \neg (a^{\alpha} (b^{\beta \rightarrow \iota} u_1^3) u_2^2) \\
(6) &= (u_1^2 u_2^2) \lor \neg (u_1^3 u_2^2)
\end{align*}
\]
Here, \( \lor \) denotes the disjunction connecting the literals of a clause. Note that clause (5) is subsumed by both clauses (3) and (4) and that clause (6) subsumes clauses (3), (4) and (5). We will use this clause set to demonstrate the insert, retrieval and delete operations of the higher-order substitution trees. Fig. 1 displays the higher-order substitution tree after the insertion of the first four clauses.
The above example demonstrates why our trees are not optimal: for example, one of the children of the node \( x_0 \mapsto x_1 \), the node \( x_1 \mapsto a\,(b\,u_1)\,u_2 \), is less general than the child \( x_1 \mapsto u_1\,u_2 \). It also demonstrates the problem of using arbitrary higher-order terms in the inner nodes of the tree, as there are many possible substitutions which generalize \( x_1 \mapsto u_1\,u_2 \) to \( x_1 \mapsto a\,(b\,u_1)\,u_2 \), but only one most specific pattern generalization, \( x_0 \mapsto x_1 \).
In our trees, each branch from the root of the tree to a leaf corresponds to a literal of some clause. Since we will need to know the actual substitutions at these leaves, we introduce the notation of composed substitutions:
**Definition 8** (Composed Substitutions). Let \( T = (\sigma, \Pi, L) \) be a substitution tree and let \( \tau \) be a substitution at a leaf of \( T \) such that the nodes on the branch from the root of \( T \) to \( \tau \) are \( (\sigma_1, \ldots, \sigma_n) \). Then, the composed substitution for \( \tau \), denoted \( \text{comp-sub}(\tau) \), is given by \( \text{comp-sub}(\tau) = \sigma_1 \circ \ldots \circ \sigma_n \).

For example, in Fig. 1, consider the leftmost leaf labeled (1). Here, the composed substitution is given by \( \{x_0 \mapsto x_1\} \circ \{x_1 \mapsto a\,(b\,x_2)\,x_3\} \circ \{x_2 \mapsto u, x_3 \mapsto u\} = \{x_0 \mapsto a\,(b\,u)\,u\} \). Note that, in the context of substitution trees, \( \sigma \circ \theta \) equals \( (\sigma \circ \theta)|_{\text{dom}(\sigma)} \) if \( \sigma \) and \( \theta \) are substitutions on a path from the root node to a leaf where \( \theta \) occurs directly below (i.e. as a child node of) \( \sigma \).

### 3.1 Insertion

We now define how to insert new elements into the tree. The definition is similar to the insertion function defined in [6]. One difference is the use of \( \text{msg} \) both for finding variants and for computing generalizations, as well as the addition of labels to the leaves of the tree. An even more important difference is that, since we store clauses and not terms, we must also store in the tree variants of existing nodes.

Given a clause labeled by \( l \) for insertion, we insert each literal \( t \) of the clause separately. In the following algorithm, we insert into the tree the substitution \( \tau = \{x_0 \mapsto t\} \).
**Definition 9** (Insertion Function \( \text{insert} \)). Let \( (\sigma, \Pi, L) \) be a substitution tree, \( \tau \) a substitution to be inserted and \( l \) the clause label of this substitution. Compute the set \( A = \{ (\theta_i, \delta_i^1, \delta_i^2) \mid (\sigma_i, \Pi_i, L_i) \in \Pi, (\theta_i, \delta_i^1, \delta_i^2) = \text{msg}(\sigma_i, \tau) \} \). Then, \( \text{insert}((\sigma, \Pi, L), \tau, l) = (\sigma, \Pi', L) \) where:
- (Variant) if there exists \( (\theta_i, \delta_i^1, \delta_i^2) \in A \) such that \( \delta_i^1 \) is a renaming, then \( \Pi' = \Pi \setminus \{(\sigma_i, \Pi_i, L_i)\} \cup \{\text{insert}((\sigma_i, \Pi_i, L_i), \text{inverse}(\delta_i^1) \circ \delta_i^2, l)\} \).
- (Compatible) otherwise, if there exists \( (\theta_i, \delta_i^1, \delta_i^2) \in A \) such that \( \text{codom}(\theta_i) \) contains non-variable terms, then \( \Pi' = \Pi \setminus \{(\sigma_i, \Pi_i, L_i)\} \cup \{(\theta_i, \{(\delta_i^1, \Pi_i, L_i), (\delta_i^2, \emptyset, \{l\})\}, \emptyset)\} \).
- (Non compatible or empty) otherwise, let \((\theta, \delta^1, \delta^2) = \text{msg}(\sigma, \tau)\) and \( \Pi' = \Pi \cup \{(\delta^1, \emptyset, \{l\})\} \).
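The case analysis of Definition 9 can be separated from the actual tree surgery. The sketch below (first-order, with our own tuple encoding and a stand-in msg in place of the higher-order one) classifies an incoming substitution against the children of a node as a variant, a compatible split, or a new leaf:

```python
import itertools

# Classifying an insertion against a node's children, mirroring the three
# cases of Definition 9. Terms are ('var', x) or ('app', f, (args...)).

def anti_unify(t1, t2):
    table, s1, s2 = {}, {}, {}
    fresh = itertools.count()

    def go(a, b):
        if a == b:
            return a
        if (a[0] == b[0] == 'app' and a[1] == b[1]
                and len(a[2]) == len(b[2])):
            return ('app', a[1], tuple(go(x, y) for x, y in zip(a[2], b[2])))
        if (a, b) not in table:
            v = 'z%d' % next(fresh)
            table[(a, b)] = v
            s1[v], s2[v] = a, b
        return ('var', table[(a, b)])

    return go(t1, t2), s1, s2

def msg(th1, th2):
    xs = sorted(set(th1) | set(th2))
    pack = lambda th: ('app', '$p', tuple(th.get(x, ('var', x)) for x in xs))
    g, d1, d2 = anti_unify(pack(th1), pack(th2))
    return dict(zip(xs, g[2])), d1, d2

def is_renaming(sub):
    vals = list(sub.values())
    return all(t[0] == 'var' for t in vals) and len(set(vals)) == len(vals)

def classify(child_subs, tau):
    """('variant', i) | ('compatible', i, theta) | ('new',), as in Def. 9."""
    for i, s in enumerate(child_subs):
        theta, d1, d2 = msg(s, tau)
        if is_renaming(d1):                  # child is a variant: descend there
            return ('variant', i)
    for i, s in enumerate(child_subs):
        theta, d1, d2 = msg(s, tau)
        if any(t[0] != 'var' for t in theta.values()):
            return ('compatible', i, theta)  # split the child below theta
    return ('new',)                          # add a fresh leaf
```

For instance, with \( \tau = \{x_1 \mapsto f(a)\} \), a child \( \{x_1 \mapsto f(y)\} \) is a variant, a child \( \{x_1 \mapsto f(b)\} \) is compatible, and a child \( \{x_1 \mapsto g(b)\} \) forces a new leaf.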
**Example 10.** Assume we want to insert the substitution \( \tau = \{x_1 \mapsto f(a)\} \) for clause (1) into the tree \( (\{x_0 \mapsto x_1\}, \Pi) \).
- if \( \Pi = \emptyset \), then we get the tree in Fig. 2c where \( \Pi \) is empty.
- if \( \Pi = \{(x_1 \mapsto f(y)), \emptyset, \{(2)\}\} \cup \Pi' \), then we have a variant node since \( \text{msg}(\{x_1 \mapsto f(y)\}, \tau) = (\{x_1 \mapsto f(y)\}, \text{id}, \{y \mapsto a\}) \) and we get the tree in Fig. 2a.
- if there is no variant but \( \Pi = \{(x_1 \mapsto f(b)), \emptyset, \{(2)\}\} \cup \Pi' \), then we have a compatible node since \( \text{msg}(\{x_1 \mapsto f(b)\}, \tau) = (\{x_1 \mapsto f(x_2)\}, \{x_2 \mapsto b\}, \{x_2 \mapsto a\}) \) and \( \text{codom}(\{x_1 \mapsto f(x_2)\}) \) contains non variable symbols and we get the tree in Fig. 2b.
- if there is also no compatible child, we get the tree in Fig. 2c.
The way we insert substitutions into the tree preserves the property of being a substitution tree.
**Claim 11.** If \( T \) is a higher-order substitution tree and \( \tau \) a substitution, then \( \text{insert}(T, \tau, l) \) is also a higher-order substitution tree.
The above property makes retrieval in the tree much more efficient, as the traversal of the tree can always use the algorithm \(\text{msg}\) rather than less efficient algorithms.
We show next how to insert the clauses of our running example.
\textbf{Example 12.} Figures 3, 4 and 1 show the state of the index after consecutive insertion of clauses (1) and (2), (3), and (4), respectively.
Note that the above tree shows why using \text{msg} does not yield an optimal tree. Although the nodes containing \( a \ (b \ u_1) \ u_2 \) are instances of \( u_1 \ u_2 \), the tree does not capture this and creates two separate nodes. This happens because \( u_1 \ u_2 \) is not in the pattern fragment.
\subsection*{3.2 Deletion}
While the insert function defined in the previous section did not differ much from the definition in [6], our definition of the deletion function is completely different. Deletion in first-order substitution trees serves as a logical operation and can be used to perform some limited backward subsumption. Since we store the literals of a clause independently in the tree, we need more information before we can decide if a substitution can be deleted. We will therefore define the deletion function as an optimization function which will remove from the index certain labels of clauses. Since such a deletion can leave some leaves of the tree without labels, we need to recursively optimize the tree by removing and merging nodes.
The formal definition of the deletion function \text{del} is given by:
\textbf{Definition 13 (Deletion Function \text{del}).} Given a substitution tree \( T = (\sigma, \Pi, L) \) and a clause to be removed labeled by \( l \), the function \text{del}(T, l) is defined as follows:
1. Let \( \Pi' = \bigcup_{T' \in \Pi} \{\text{del}(T', l)\} \setminus \{\epsilon\} \).
2. If \( \sigma = \{ x_0 \mapsto x_1 \} \) (root), return \( (\sigma, \Pi', L \setminus \{l\}) \).
3. Else if \( \Pi' = \emptyset \) and \( L \setminus \{l\} = \emptyset \), return \( \epsilon \).
4. Else if \( \Pi' = \{ (\sigma', \Pi'', L'') \} \) and \( L \setminus \{l\} = \emptyset \), return \( (\sigma \circ \sigma', \Pi'', L'') \).
5. Else return \( (\sigma, \Pi', L \setminus \{l\}) \).
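The clean-up steps of \text{del} can be sketched over a naive representation, where a node is a tuple of a substitution dict, a list of children, and a set of labels. The compose helper below is a deliberate simplification: it merely merges the two dicts, which is only adequate because nodes along a path bind disjoint fresh variables; a faithful implementation would also substitute into codomains.

```python
def delete_label(node, l, is_root=True):
    """Remove label l from a substitution tree node = (sigma, children, labels).
    Returns the cleaned node, or None when the subtree becomes empty."""
    sigma, children, labels = node
    # recurse first, dropping children that become empty
    kids = [c for c in (delete_label(c, l, False) for c in children) if c]
    rest = labels - {l}
    if is_root:
        return (sigma, kids, rest)
    # node that carried only l and has no children left: delete it
    if not kids and not rest:
        return None
    # chain node without labels of its own: merge with its only child
    if len(kids) == 1 and not rest:
        csigma, cchildren, clabels = kids[0]
        return (compose(sigma, csigma), cchildren, clabels)
    return (sigma, kids, rest)

def compose(outer, inner):
    """Simplified composition of substitution dicts for this sketch:
    the child's bindings are appended to the parent's."""
    merged = dict(outer)
    merged.update(inner)
    return merged
```

Deleting the last label of a leaf thus removes the leaf, and a subsequent deletion can cascade upward, exactly as in the recursive optimization described above.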
**Example 14.** After deleting clauses (3) and (4) from the tree in Fig. 1, the result is the tree in Fig. 3b.
### 3.3 Retrieval
Retrieval in substitution trees is used in order to retrieve all substitutions with a specific relation to some input substitution. Since we are interested in forward and backward subsumption, the relations of interest are for the substitutions in the tree to be more general and less general than the input substitution, respectively.
In order to support associativity and commutativity of clauses, our substitution trees use a non-standard indexing mechanism where each literal is stored independently from the other literals of its clause. This will prompt us, for each subsumption call, to retrieve all substitutions in the tree with a specific relation to all literals of an input clause. Since all retrieve calls can be done in parallel, Leo-III, with its multi-agent architecture, can take advantage of this approach to the associativity and commutativity of higher-order clauses. It should be noted that since we consider the literals of a clause separately, but a subsumption check requires a common substitution to be applicable to the clause, we need to gather, in addition to the labels, all substitutions that denote the relationship between the literals in the index and the literals of the input clause.
One property of higher-order terms cannot be avoided. In order to retrieve substitutions, a matching algorithm must be used. We have avoided its use when inserting elements by using the anti-unification algorithm from Sec. 2.2. This algorithm will also allow us to traverse the tree when retrieving substitutions and reach a possible matching node according to the definition of substitution trees. The last action of the retrieval operation, the actual matching of a node with the input substitution, requires a stronger algorithm than \( \text{msg} \). For this operation we will use a standard higher-order matching algorithm. Since the call to this algorithm is performed at most once for each stored substitution and since incomplete higher-order matching algorithms can perform very well in practice, we hope that this step will not significantly impair the efficiency of the trees. Using an incomplete algorithm is acceptable here, as a failure to match two substitutions when checking for subsumption can only increase the size of the substitution tree.
In the following, we will assume being given a (possibly incomplete) matching algorithm. Such an algorithm can be, for example, based on Huet’s pre-unification procedure [8] with bounds on the depth of terms.
**Definition 15 (Matching Algorithm \text{match}).** Given two substitutions \( \sigma \) and \( \tau \), \( \text{match}(\sigma, \tau) = M \) where \( M \) is a (possibly incomplete) set of matchers between \( \sigma \) and \( \tau \).
We now describe the two supported retrieve calls, which both return a set of labels corresponding to an input substitution. Each label is associated with a set of substitutions. The first of these functions returns all labels of substitutions which are more general than the input argument. Intuitively, this function traverses the tree and uses \( \text{msg} \) for checking if the input substitution is a variant of the respective node. A formal definition is given by
**Definition 16 (Retrieval Function \text{g-retrieve}.)** Given a substitution tree \( T = (\sigma, \Pi, L) \) and a substitution \( \tau \), \( \text{g-retrieve}(T, \tau, \tau') \) returns a set of labels associated with substitutions, defined inductively as follows (\( \tau' = \tau \) at the initial call, but it may change during traversal):
• if \( \Pi = \emptyset \) and \( M = \text{match}(\tau, \text{comp-sub}()) \) is not empty, then return \( \{(L, M)\} \).
• otherwise, return \( \{(L, \text{match}(\tau, \text{comp-sub}()))\} \cup \bigcup \{ \text{g-retrieve}((\sigma', \Pi', L'), \tau, \delta_2) \mid (\sigma', \Pi', L') \in \Pi,\ \text{msg}(\sigma', \tau') = (\theta, \delta_1, \delta_2),\ \delta_1 \text{ is a renaming} \} \cup \bigcup \{ \text{g-retrieve}((\sigma', \emptyset, L'), \tau, \_) \mid (\sigma', \emptyset, L') \in \Pi \} \).
**Example 17.** Let the substitution tree \( T \) be that from Fig. 1 and let \( \tau = \{x_0 \mapsto (a \ (b \ u_1) \ u_2)\} \). The execution of \( \text{g-retrieve}(T, \tau) \) proceeds as follows:
• apply \( \text{msg} \) on all children of the root in order to find either a variant or a leaf, and obtain the two leaves \( (\{x_1 \mapsto (a \ (b \ u_1) \ u_2)\}, \emptyset, \{(3)\}) \) and \( (\{x_1 \mapsto (u_1 \ u_2)\}, \emptyset, \{(4)\}) \).
• recursively apply \( \text{g-retrieve} \) on the two leaves.
• now apply $\text{match}$ on the two composite substitutions $\{x_0 \mapsto (a \ (b \ u_1) \ u_2)\}$ and $\{x_0 \mapsto (u_1 \ u_2)\}$, in order to obtain the two sets $\{\delta_1 = \{u_1 \mapsto u_1, u_2 \mapsto u_2\}\}$ and $\{\delta_2 = \{u_1 \mapsto \lambda z. (a \ (b \ u_1) \ u_2), \ldots\}\}$.
• for each leaf, since the result is not empty, the function returns the sets $\{((3), \{\delta_1\})\}$ and $\{((4), \{\delta_2, \ldots\})\}$.
Note that the set of matchers can contain more than one element and even be infinite. We hope to optimize this function in the future.
Claim 18. The set returned by \( \text{g-retrieve}((\sigma, \Pi, L), \tau) \) contains all the labels of substitutions $\theta$ which are stored in $(\sigma, \Pi, L)$ and which are more general than $\tau$. In addition, if a substitution $\delta$ is associated with the label of $\theta$, then $\delta \circ \theta = \tau$.
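Claim 18's invariant can be checked mechanically. The sketch below applies a matcher \( \delta \) (a dict) to the codomain of a stored substitution \( \theta \) and compares the result with the query \( \tau \); the first-order term encoding is our own and does not cover λ-binders.

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def apply_subst(delta, t):
    """Apply substitution delta to term t (variables are '?'-strings,
    applications are tuples), chasing chained bindings."""
    if is_var(t):
        return apply_subst(delta, delta[t]) if t in delta else t
    if isinstance(t, tuple):
        return tuple(apply_subst(delta, a) for a in t)
    return t

def check_matcher(theta, delta, tau):
    """Verify the invariant delta o theta = tau: applying delta to
    theta's codomain must yield tau."""
    return {x: apply_subst(delta, t) for x, t in theta.items()} == tau
```

For instance, the stored substitution \( \{x_0 \mapsto f(y)\} \) together with the matcher \( \{y \mapsto a\} \) reproduces the query \( \{x_0 \mapsto f(a)\} \).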
The second retrieval function returns all substitutions which are less general than the input substitution. The tree is still traversed using the function $\text{msg}$ but this time we will not be able to use $\text{msg}$ to check if $\sigma$ is a variant of $\tau$. This is due to the fact that we used $\text{msg}$ to check if $\tau$ is a variant of $\sigma$ by checking if $\delta_1$ is a renaming substitution for $\text{msg}(\sigma, \tau) = (\theta, \delta_1, \delta_2)$. This worked as both $\sigma$ and $\theta$ are pattern substitutions. If we try to check whether $\sigma$ is a variant of $\tau$, since $\tau$ might not be a pattern substitution, $\delta_2$ might not be a renaming substitution even if it is a variant. On the other hand, if $\sigma$ is a variant of $\tau$, then all the nodes in the subtrees of $\sigma$ are variants of $\tau$, so we need to use $\text{match}$ only when we reach a node of which $\tau$ is not a variant and stop there.
We first introduce a utility function which gathers all labels in a tree.
Definition 19 (Gathering of labels). Given a substitution tree $T = (\sigma, \Pi, L)$ and a substitution $\tau$, $\text{gather}(T, \tau) = \{(L, \text{match}(\text{comp-sub}(\sigma), \tau))\} \cup \bigcup \{\text{gather}(T', \tau) \mid T' \in \Pi\}$.
Example 20. The application of $\text{gather}(T, \tau)$ where $T$ is the last child of the root in Fig. 1 and $\tau = \{x_0 \mapsto (u_1 \ u_2)\}$ proceeds as follows:
• since there are no labels in the node, it proceeds recursively on the two children.
• the application on the first child returns $\{((3), A)\}$, where $A = \{\{u_1 \mapsto \lambda z. \neg (u_3 \ u_2)\}, \ldots\}$ is the result of $\text{match}$ on the composite substitution $\{x_0 \mapsto \neg (u_3 \ u_2)\}$ and $\tau$.
• the application on the second child returns $\{((4), A')\}$, where $A' = \{\{u_1 \mapsto \lambda z. \neg (a \ (b \ u_3) \ u_2)\}, \ldots\}$ is the result of $\text{match}$ on the composite substitution $\{x_0 \mapsto \neg (a \ (b \ u_3) \ u_2)\}$ and $\tau$.
• return the union of these two sets.
Definition 21 (Retrieval Function $i\text{-retrieve}$). Given a substitution tree $T = (\sigma, \Pi, L)$ and a substitution $\tau$, $i\text{-retrieve}(T, \tau, \tau')$ returns a set of labels associated with substitutions, defined inductively as follows ($\tau' = \tau$ at the initial call, but it may change during traversal):
• if $\text{msg}(\sigma, \tau') = (\theta, \delta_1, \delta_2)$ such that either $\delta_1$ is not a renaming or $\delta_2$ is a renaming, and $M = \text{match}(\text{comp-sub}(\sigma), \tau)$ is not empty, then return $\{(L, M)\} \cup \bigcup \{\text{gather}(T'', \tau) \mid T'' \in \Pi\}$.
• otherwise, if $\delta_1$ is a renaming, return $\bigcup \{i\text{-retrieve}(T'', \tau, \delta_2) \mid T'' \in \Pi\}$.
• otherwise, return $\emptyset$.
Example 22. Let the substitution tree $T$ be the one from Fig. 1 and $\tau$ be the substitution $\{x_0 \mapsto (u_1 \ u_2)\}$. The function $i\text{-retrieve}(T, \tau)$ proceeds as follows:
• we first calculate $\text{msg}(\{x_0 \mapsto x_1\}, \tau) = (\{x_0 \mapsto x_1\}, \text{id}, \{x_1 \mapsto (u_1 \ u_2)\})$.
• since id is a renaming, we recursively apply $i\text{-retrieve}$ on all children.
• for all children, the first case of $i\text{-retrieve}$ now holds and we start gathering all the labels in the tree.
For brevity, we give the example only for the last node:
• as mentioned, the first case now holds for this node since $\text{msg}(\{x_1 \mapsto \neg x_2\}, \{x_1 \mapsto (u_1 \ u_2)\}) = (\{x_1 \mapsto x_3\}, \{x_3 \mapsto \neg x_2\}, \{x_3 \mapsto (u_1 \ u_2)\})$ and therefore $\delta_1$ is not a renaming. On the other hand, $\text{match}(\{x_0 \mapsto \neg x_2\}, \{x_0 \mapsto (u_1 \ u_2)\}) = \{\{u_1 \mapsto \lambda z. \neg x_2\}, \ldots\}$ is not empty.
• gather all labels on this node, as shown in Ex. 20.
Claim 23. The set returned by $i\text{-retrieve}((\sigma, \Pi, L), \tau)$ contains all the labels of substitutions $\theta$ which are stored in $(\sigma, \Pi, L)$ and which are less general than $\tau$. In addition, if a substitution $\delta$ is associated with the label of $\theta$, then $\delta \circ \tau = \theta$.
We now show how the checks for forward and backward subsumptions of clauses can be implemented using **g-retrieve** and **i-retrieve**.
### 3.3.1 Forward Subsumption
Forward subsumption checks if an input clause is subsumed by a clause in the index. In case the clause is subsumed, no change to the index is made and the input clause is not inserted into the index.
**Definition 24** (Subsumption). A clause *c* is subsumed by a clause *d* if |*d*| ≤ |*c*| and there is a substitution *σ* such that for each literal *l* of *d* there is a literal *l′* of *c* such that *lσ* = *l′*. Here, |*c*| denotes the number of literals in *c*.
Our way of treating associativity and commutativity means that, for each literal of the input clause, we gather all more general literals in the index together with their associated substitution sets. These sets contain all substitutions which independently match the literals in the tree to the literals of the input clause. In order to detect that the input clause is subsumed by the index, we need to show two things. First, for every literal of some indexed clause, a more general literal must be returned by the index. Second, each such literal must be associated with a "compatible" substitution. Two substitutions are "compatible" if they agree on all variables in the intersection of their domains. This requirement means that if *d* is a clause in the index and each literal of *d* subsumes a literal of the input clause *c* with pairwise "compatible" substitutions, then there is indeed a single substitution *σ* such that each literal of *d* subsumes a literal of *c* with *σ*.
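The "compatible" test reduces to comparing two substitutions on the variables they both bind; a minimal sketch with substitutions as dicts:

```python
def compatible(s1, s2):
    """Two substitutions are compatible if they agree on every
    variable in the intersection of their domains."""
    return all(s1[x] == s2[x] for x in s1.keys() & s2.keys())
```

Substitutions with disjoint domains are trivially compatible, which is exactly the situation in Example 26 below.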
It should be noted that the above technique is based on set subsumption, in contrast to the more commonly used multiset subsumption. A main reason for preferring multisets over sets is to prevent a longer clause from subsuming a shorter one (for example, *p(x)* ∨ *p(y)* set-subsumes *p(a)*), which would allow the shorter clause to be deleted.
**Definition 25** (Forward Subsumption **fsum**). Let *T* be a substitution tree and *c* = *l*₁ ∨ ... ∨ *lₙ* a clause. Let

\[ A = \bigcup_{0 < i \leq n} g\text{-retrieve}(T, \{x_0 \mapsto l_i\}) \]

The function **fsum**(*T*, *c*) returns true if *A* contains labels (*l*, *M*₁), ..., (*l*, *M*ₖ) such that *k* = |*l*| is the number of literals of the clause labeled *l*, *k* ≤ *n*, and for each two sets *M*ᵢ and *M*ⱼ with 0 < *i* < *j* ≤ *k*, there are substitutions *σ*ᵢ ∈ *M*ᵢ and *σ*ⱼ ∈ *M*ⱼ such that *σ*ᵢ(*x*) = *σ*ⱼ(*x*) for all *x* ∈ dom(*σ*ᵢ) ∩ dom(*σ*ⱼ).
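Operationally, the condition in Definition 25 can be read as a search problem: group the retrieved (label, matcher-set) pairs per input literal, and look for one matcher per literal of a candidate clause such that all chosen matchers agree pairwise. A sketch, with matcher sets as lists of dicts and clause sizes supplied explicitly (both representations are our own):

```python
from itertools import combinations, product

def compatible(s1, s2):
    # agree on all shared variables
    return all(s1[x] == s2[x] for x in s1.keys() & s2.keys())

def subsumed(hits, clause_size):
    """hits: one dict per input literal, mapping label -> list of matchers
    (the g-retrieve results); clause_size: label -> number of literals.
    Returns True if some indexed clause subsumes the input clause."""
    n = len(hits)
    labels = set().union(*[set(h) for h in hits])
    for l in labels:
        k = clause_size[l]
        if k > n:
            continue                      # indexed clause too long
        occurrences = [h[l] for h in hits if l in h]
        if len(occurrences) < k:
            continue                      # not every literal matched
        # pick k distinct input literals and one matcher for each
        for subset in combinations(occurrences, k):
            for choice in product(*subset):
                if all(compatible(a, b) for a, b in combinations(choice, 2)):
                    return True
    return False
```

The nested search is exponential in the worst case, but the matcher sets returned per literal are typically small.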
**Example 26.** We will follow the computation of **fsum**(*T*, *c*) where *T* is the substitution tree of Fig. 1 and *c* is clause (5) from Ex. 7.
- We first compute the set *A* which is the union of **g-retrieve**(*T*, \{*x*₀ ↦ (a (b *u*₁) *u*₂)\}) and **g-retrieve**(*T*, \{*x*₀ ↦ ¬(a (b *u*₃) *u*₂)\}).
- the first set was already computed in Ex. 17 and resulted in \{((3), \{\{*u*₁ ↦ *u*₁, *u*₂ ↦ *u*₂\}\}), ((4), \{\{*u*₁ ↦ λz.(a (b *u*₁) *u*₂)\}, ...\})\}.
- the second set is computed in a similar way and results in \{((4), \{\{*u*₃ ↦ *u*₃, *u*₂ ↦ *u*₂\}\}), ((3), \{\{*u*₃ ↦ λz.(a (b *u*₃) *u*₂)\}, ...\})\}.
- since the size of both (3) and (4) is 2 and the number of occurrences of the labels of each is 2, we are just left with checking if the matching substitutions are compatible, which is easily verified since their domains are disjoint.
- we return that (5) is subsumed by the index (by both (3) and (4)).
**Claim 27.** If **fsum**(*T*, *c*) returns true, then there is a clause *d* indexed by *T* such that *c* is subsumed by *d*.
### 3.3.2 Backward Subsumption
Similarly to the way we treated forward subsumption, we can also define backward subsumption. When we detect that the index contains clauses which are subsumed by an input clause, we will need to modify the index in order to delete these clauses.
In order to detect all clauses in the index which are subsumed by the input clause, we need to show that the sets returned for each literal of the input clause contain the labels of the subsumed clauses and that the substitution sets associated with the same labels across sets contain "compatible" substitutions.
**Definition 28** (Backward Subsumption \(b_{sum}\)). Let \(T\) be a substitution tree and \(c = l_1 \lor \ldots \lor l_n\) a clause. Let \(A_i = i\text{-retrieve}(T, \{x_0 \mapsto l_i\})\) for all \(0 < i \leq n\). A label \(l\) is in \(b_{sum}(T, c)\) if \(n \leq |l|\), where \(|l|\) is the number of literals of the clause labeled \(l\), and \((l, M_i) \in A_i\) for all \(0 < i \leq n\) such that for each two sets \(M_i\) and \(M_j\) with \(0 < i < j \leq n\), there are substitutions \(\sigma_i \in M_i\) and \(\sigma_j \in M_j\) such that \(\sigma_i(x) = \sigma_j(x)\) for all \(x \in \text{dom}(\sigma_i) \cap \text{dom}(\sigma_j)\).
**Example 29.** We will follow the computation of \(b_{sum}(T, c)\) where \(T\) is the substitution tree of Fig. 1 and \(c\) is clause (6) from Ex. 7.
- We first compute the sets \(A_1 = i\text{-}\text{retrieve}(T, \{x_0 \mapsto (u_1 \ u_2)\})\) and \(A_2 = i\text{-}\text{retrieve}(T, \{x_0 \mapsto \neg(u_3 \ u_2)\})\).
- the set \(A_1\) was already computed in Ex. 22 and results in \(\{((1), \{\ldots\}), ((2), \{\ldots\}), ((3), \{\{u_1 \mapsto \lambda z.(a\ (b\ u_3)\ u_2)\}, \{u_1 \mapsto \lambda z.\neg(u_3\ u_2)\}, \ldots\}), ((4), \{\{u_1 \mapsto u_1, u_2 \mapsto u_2\}, \{u_1 \mapsto \lambda z.\neg(a\ (b\ u_3)\ u_2)\}, \ldots\})\}\).
- the set \(A_2\) can be computed in a similar way and results in \(\{((3), \{\{u_3 \mapsto u_3, u_2 \mapsto u_2\}, \ldots\}), ((4), \{\{u_3 \mapsto \lambda z.(a\ (b\ u_3)\ u_2)\}, \ldots\})\}\).
- we notice that only the labels (3) and (4) occur in both sets \(A_1\) and \(A_2\) and that each of the substitutions from \(A_2\) can be matched with two substitutions in \(A_1\).
- both options are compatible; the first option means that each literal of clause (6) subsumes a different literal of clauses (3) and (4), while the second option means that they subsume the same literal in these clauses.
- we conclude that both clauses are redundant.
- the resulting tree after their removal was computed in Ex. 14.
**Claim 30.** If \(l \in b_{sum}(T, c)\) and the clause \(d\) is labeled by \(l\), then \(d\) is subsumed by \(c\).
**Example 31.** The result of inserting clause (6) into the substitution tree from the previous example can be seen in Fig. 5.
4 Conclusion and Future Work
In this work, we have presented an indexing data structure for higher-order clause subsumption based on substitution trees. We make use of an efficient higher-order pattern anti-unification algorithm for calculating meaningful generalizations of two arbitrary higher-order terms. The proposed indexing method is rather limited as it does not support subsumption testing modulo associativity and commutativity which is, of course, essential for general clause subsumption. However, even the limited approach admits effective subsumption queries in certain cases. Additionally, improvements for including such AC aspects are sketched.
While the index is not size-optimal in general, we believe that the approach performs quite well in practice, especially when combined with further, orthogonal indexing techniques that could be used as a pre-test. One suitable candidate is a higher-order variant of feature vector indexing [18].
The substitution tree index as described here is planned for implementation in the Leo-III prover [24]. We hope that due to Leo-III’s agent-based architecture [20], independent agents can traverse the substitution tree index in parallel. The index is mainly devised for employment in resolution-based provers, but it seems possible to generalize the approach to non-clausal-based deduction procedures.
For further work we need to investigate means of suitably enhancing \text{msg} to handle AC properties and other subsumption properties. Also, the matching algorithm could be improved such that it returns minimal complete sets of substitutions which can then be used for the subsumption procedure. At the current state of the index data structure, the inserted substitutions are not normalized. This is essentially a shortcoming that originates from the way we relate the matching substitutions of different literals of the same clause to each other. It results in a substitution tree that contains occurrences of substitutions which are equivalent up to renaming of free variables. To overcome this shortcoming, we need to find a way of keeping the substitutions normalized while still being able to relate the matchers of different literals.
Acknowledgements
Work of the first author was funded by the ERC Advanced Grant ProofCert. The second author has been supported by the DFG under grant BE 2501/11-1 (Leo-III). We thank the reviewers for the very valuable feedback they provided.
References
ABSTRACT
The Interactive Data Base Designer (IDBD) takes as input a conceptual description of data to be stored in the data base (in terms of a binary data model) and an expected work-load in terms of navigations in the conceptual model. Extensive checking of input is performed. The designer has the possibility to restrict the solution space of the design algorithm by prescribing implementation strategies for parts of the binary model.
1. Introduction
Before we can design a data base schema compatible with some existing Data Base Management System, we have to determine what kind of data it should contain and what kind of work-load, in terms of queries, updates, inserts and deletes, it must be able to handle. In order to permit examination of alternative solutions, the requirements must be stated in terms as implementation-independent as possible. By 'implementation independent' we mean that no decisions have been made on how to group data items in records, which access techniques to use, and how to navigate in a structure of records and sets.
Designing a data base is thus only a (relatively small) part of a system development process. It is preceded by a number of activities the purpose of which is to analyze corporate information needs and to specify the requirements of an information system to be developed.
The Interactive Data Base Designer (IDBD) presented in this paper has been developed to be compatible with two kinds of system design (philosophies) approaches.
The first kind, the analytical approach, proceeds through development phases like
- goal and problem analysis
- activity analysis etc.
and arrives at a comprehensive set of requirements specifications. This set also includes a conceptual information model of relevant parts of the enterprise and a set of information requirements [Bub-80]. The conceptual information model (CIM) describes and defines relevant phenomena (entities, relations, events, assumptions, inference rules etc.) of the Universe of Discourse (UoD). The CIM models the UoD in an extended time perspective in order to capture dynamic rules and constraints.
The next step in this approach is to 'restrict' the CIM (from a time perspective point of view) and to decide what information to store in the data base and how to conceptually navigate in this set of information in order to satisfy stated information requirements (see [Gus-82] for a comprehensive exposition of this problem).
If the information to be stored in the data base is defined in terms of a binary data model and the conceptual navigations are specified assuming such a model then this is the required input to IDBD.
The other approach to data base design is the experimental one. In this case we assume that a "fast prototype" is developed by the use of the CS4 system [Ber-77A]. CS4 employs a binary data model and is thus compatible with IDBD. Experimental use of the prototype system can provide us with statistics of navigation types and frequencies.
It is, of course, also advantageous to use the experimental approach as a complement to the purely analytical one in order to avoid guesswork concerning the requirements and the work-load.
The DBTG-schema design algorithm of IDBD has the binary data model and a set of implementation strategies in common with design-aids developed at the University of Michigan [Mit-75, Ber-77B, Pur-79]. It differs, however, from them in several important respects:
- the tool is interactive which gives the designer a possibility to monitor the design process
- comprehensive checking of the consistency of input data is performed (we have empirically found that it is difficult to supply a tool of this kind with correct input the first time; a waste of time and computer resources is the result of optimizing incorrect input)
- the designer has the possibility to prescribe certain implementation alternatives for parts of the model (or the whole model), which has the following advantages:
  - the designer can test his/her own solution alternatives, which may be "natural" or which have other, preferred, non-quantifiable properties
  - unusual or "nonsense" solutions can be avoided
  - the tool can be used to augment an existing DB-schema
  - the solution space can be drastically reduced, thereby making IDBD a realistic tool also for the design of large, complex data base schemas
- the description of the work-load is realistic in practice, as navigation in the conceptual binary model can be defined directly.
This paper describes and explains IDBD in terms of running a small sample case. Section 2 describes the input to IDBD — the conceptual binary data model and how to describe the work-load in terms of navigating in the model. User interaction, checking of input and how to supply IDBD with design directives is presented in section 3. The design algorithm and the results of performing a design run are given in section 4.
2. Input data
The input to the IDBD consists of the following types of data:
- description of the conceptual data model - its data item types (representing entity types) and relations
- a description of the workload of the model defined in terms of run-units
- a description of certain parameters to be considered by the design algorithm
- a set of design directives to the design algorithm restricting its solution space.
A consistency check is performed on the model and the work-load descriptions. Also an analysis is performed to estimate the number of references to the data items and the relations when navigating in the model.
The input, interaction and processing of IDBD will be illustrated by a small practical case, the enterprise GROSS. The following is assumed.
GROSS is a local supplier, i.e. it supplies parts to customers located in the same city. Parts are distributed by cars and one cargo is called a delivery. One delivery can contain several orders, 1 to 20. Every day there are several deliveries, 1 to 20. Customers send their orders to GROSS. An order includes 1 to 25 parttypes. When an order arrives, its day for delivery is determined.
The following information requirements - in terms of queries - are defined in the preceding design stages.
1. For a particular day, all the customers which are to be supplied, and for each customer: name and address.
2. For a particular part type, the orders in which it is included and the day of delivery for each order.
3. For a particular part type and for a particular customer, the total dollar amount for the part in order.
4. For all orders, print all part types with their amounts.
Assume also that the following transactions have been defined:
1. Deletion of a delivery.
2. Insertion of a delivery.
3. Insertion of a new customer.
4. Updating of part attributes.
This constitutes the basis for the conceptual (binary) data model and its work-load.
2.1 Description of the conceptual data model
We assume that the following binary relational data structure has been created on the basis of an analysis of the enterprise, the conceptual information model and the information requirements.
The entity types of the model are in this case represented by a set of data item types. A data item type can also represent a group of data items, as "partinfo" does in the data structure.
Data item types are described with the word ITEM (in capital letters) on one line of input and thereafter, on separate lines, one line per data item type:
- name of the data item
- size of the data item in number of characters
- cardinality of the data item
- a security code (optional)
If different data items cannot be placed in the same record, for instance because of security constraints or distributed data, then the data item types can be specified with different security code numbers. If the security code is not specified, it is set to zero.
In our case the data item types are specified as follows:
ITEM
delivno 5 150
delivday 6 30
orderno 6 300
ord-part 0 1500
partno 8 2000
partinfo 92 2000
amount 8 1500
custno 5 1000
custname 30 1000
custadr 30 1000
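An ITEM section of this form can be read mechanically. The following Python sketch (a hypothetical helper, not part of IDBD) parses such a section, applying the default security code of zero described above:

```python
def parse_items(lines):
    """Parse an ITEM section: one data item type per line with
    name, size (characters), cardinality and an optional security
    code that defaults to zero."""
    items = {}
    for line in lines:
        fields = line.split()
        if not fields or fields[0] == "ITEM":
            continue  # skip the section keyword and blank lines
        name, size, card = fields[0], int(fields[1]), int(fields[2])
        security = int(fields[3]) if len(fields) > 3 else 0
        items[name] = {"size": size, "card": card, "security": security}
    return items

items = parse_items(["ITEM", "delivno 5 150", "partinfo 92 2000", "secret 10 50 1"])
```

The optional fourth field is only present for items with a non-default security code, mirroring the input format above.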
Binary relations in the data model are described with the word RELATION on one input line followed by one line per relation containing:
- name of the relation
- name of the data item from which the relation originates (item1)
- name of the data item to which the relation is directed (item2)
- the number of instances of the relation
- minimum number of item1 related to one item2
- maximum number of item1 related to one item2
- minimum number of item2 related to one item1
- maximum number of item2 related to one item1
The relations in our case are specified as follows:
RELATION
dday-dno delivday delivno 150 1 1 1 20
dno-ord delivno orderno 300 1 1 1 20
cust-ord custno orderno 300 1 1 0 50
c-cname custno custname 1000 1 1 1 1
c-cadr custno custadr 1000 1 1 1 1
ord-o-p orderno ord-part 1500 1 1 1 25
o-p-amnt ord-part amount 1500 1 1 1 1
part-o-p partno ord-part 1500 1 1 0 200
p-pinfo partno partinfo 2000 1 1 1 1
From the input data specified for a relation and the cardinalities of the participating data items, IDBD determines the average number of item2 related to one item1, the type of the relation (1:1, 1:M, M:1, M:N), and whether the domain and range of the relation participate in a total (T) or partial (P) mapping. This will be further discussed in section 3.1.
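These derived properties follow directly from the relation line and the item cardinalities. A small Python sketch of the derivation (the function name and return format are illustrative, not IDBD's):

```python
def relation_properties(n_rel, card1, card2, min1, max1, min2, max2):
    """Derive what IDBD reports for a relation item1 -> item2.
    min1/max1 bound the number of item1 related to one item2;
    min2/max2 bound the number of item2 related to one item1."""
    rel_type = ("1" if max1 == 1 else "M") + ":" + ("1" if max2 == 1 else "M")
    avg1 = round(n_rel / card2, 2)       # average item1 per item2
    avg2 = round(n_rel / card1, 2)       # average item2 per item1
    domain = "T" if min2 >= 1 else "P"   # every item1 related to some item2?
    rng = "T" if min1 >= 1 else "P"      # every item2 related to some item1?
    return rel_type, avg1, avg2, domain + "/" + rng

# dday-dno: 150 instances between 30 delivday and 150 delivno
print(relation_properties(150, 30, 150, 1, 1, 1, 20))  # -> ('1:M', 1.0, 5.0, 'T/T')
```

For cust-ord (minimum of zero orders per customer) the same derivation yields a partial domain mapping.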
2.2 Description of the work-load
The work-load is generated by the information requirements (queries) and the transactions. They imply a need to navigate in the binary relational structure.
In order to show how navigations can be defined, two examples are given.
Example 1 For query 1 in our example an access path is defined. This access path can be described in natural language as follows:
For a particular delivday
get all delivno related to it via the relation dday-dno
for all these delivno get all orderno
which are related to all these delivno via the relation dno-ord
for all these orderno get the custno
related to these orderno via the relation cust-ord
for the custno, get
custname related to it
via the relation c-cname,
custadr related to it
via the relation c-cadr
The navigation always starts at level 1, where the entry point, i.e. the starting item, and the operation on it are described. In example 1 the starting item is delivday.
After delivday has been accessed, all delivno related to delivday are to be accessed. This is done at the next higher level. We call delivday the qualifier of delivno, because the delivno related to delivday are requested. In the access path, each time the most recently accessed item is used as the qualifier for the next item wanted, the next higher level is specified.
When the same item is used as qualifier for several required items, they can be specified at the same level. Custname and custadr have the same qualifier custno and are therefore specified at the same level.
Example 2 In query 3 we want to get the dollar amount for the parts in order for a particular parttype and for a particular customer.
We can start the navigation either with access to partno or to custno. Here we choose to start with partno.
We access a unique partno, thereafter all ord-part related to it. At this stage we do not know for which of the ord-part we want to get the dollar amount, because we do not know to which customer ord-part is related. Therefore, we continue by accessing orderno for each of ord-part and then access the customer related to each order. Now we know which orders are related to the required customer. Now we need to know the ord-part related to those orders. We have already accessed ord-part and therefore we do not need to access them again. By a SELECT operation we can select the ord-part related to the required customer without additional accesses.
Going backwards in the access path can be done in two different ways. If the item to be accessed has the same qualifier as the item at the next higher level, then that higher level is specified. If one wants to skip one or more higher levels without making any access to the database, it can be done by using the special operation SELECT.
The following example shows the way query 1 and 3 can be expressed in terms of a run-unit.
IDBD uses the following syntactical rules for description of the run-units.
Run-units are described with the word RUN-UNIT followed by the run-units themselves. Every run-unit starts with a head line containing:
- the word RU
- name of the run-unit
- cardinality of the run-unit
After the head line the run-unit is described in a hierarchical way with level numbers much like a COBOL data declaration. On each line there is either a level description or a data operation description.
The level description lines are numbered from 1 and upwards. The first line after the head line is always a level description line with level number 1.
Each level description line contains:
- the level number
- optionally a cardinality
Normally the cardinality of a level is one, and in that case there is no need to specify it. Otherwise a real number can be specified, either < 1 (a probability) or > 1 (a frequency). This cardinality is multiplied by the cardinality of the next lower level to give the effective cardinality of the actual level. If a cardinality is specified on level 1, it is multiplied by the cardinality of the run-unit.
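The multiplication rule can be sketched as follows (a hypothetical helper, not IDBD code; `n_ops` stands for the number of data operation lines under one occurrence of a level):

```python
def operation_frequencies(ru_card, levels):
    """levels: list of (level_number, level_cardinality, n_ops).
    A level's cardinality multiplies with the effective cardinality of
    the next lower level; level 1 multiplies with the run-unit cardinality."""
    freq = []
    stack = [ru_card]  # effective cardinality per level; index 0 = run-unit
    for level, card, n_ops in levels:
        del stack[level:]              # returning to a lower level drops deeper ones
        eff = stack[level - 1] * card  # multiply with the next lower level
        stack.append(eff)
        freq.append((level, eff * n_ops))
    return freq

# run-unit executed 30 times; two alternative level-2 branches (20% / 80%)
freq = operation_frequencies(30, [(1, 1, 1), (2, 0.2, 1), (2, 0.8, 1)])
```

Under these assumptions the level-2 branches are executed about 6 and 24 times respectively.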
On each level there can be specified zero, one or more occurrences of data operation descriptions. Every data operation description is defined on one line and contains:
- data operation verb
- name of the required data item
- optionally a relation name
Usually there is no need to specify a relation name; it is necessary only if there is more than one relation type between the two data item types. Otherwise IDBD finds the relation type itself. The data item at the previous level is called the qualifier. At level 1, where there is no lower level, the access is directly to the data item type and not via a relation as at levels > 1.
The different data operation verbs are:
```
FIND    {UNIQ | SEQ}
GET     {FIRST | LAST | NEXT | PRIOR | ITH | ALL}
SELECT
{DELETE | INSERT | MODIFY}
LINK
UNLINK  [ALL]
```
where \{ \} means one of the elements inside \{ \} and [ ] means that the element inside [ ] is optional.
FIND defines an entry point for the run-unit. FIND UNIQ implies some sort of direct access to the data item, while FIND SEQ implies a sequential browse through all data items of the mentioned data item type.
By GET one or all data items which are related to the qualifier are accessed.
By SELECT no access is made, because already accessed data items are selected. This normally also means a branch from a higher to some lower numbered level.
By DELETE one or all data items related to the qualifier are deleted. DELETE also causes deletion (i.e. UNLINK) of the relation instance that has the deleted data item as its destination.
INSERT is analogous to DELETE.
By MODIFY one or all data items related to the qualifier are modified.
By LINK one or all data items are linked to the qualifier. This means that the relation instance(s) is(are) stored in the data base.
By UNLINK one or all data items are unlinked.
At level 1 only the following data operation verbs can be specified:
```
FIND
GET
SELECT
LINK
UNLINK
```
A final example shows the definition of a run-unit describing the work-load of transaction nr 2, insertion of a new delivery. For the inserted delivno it is assumed that, with a probability of 20%, its delivday is not already stored in the data base.
```
RU ins-supr 30
1
  INSERT UNIQ delivno
2 0.2
  INSERT delivday
2 0.8
  LINK delivday
2
  INSERT ALL orderno
3
  LINK custno
  INSERT ALL ord-part
4
  INSERT amount
  LINK partno
```
3. User interaction
3.1 Checking and analysis of input data
After the input phase and before the optimization phase, there is an interactive phase where also certain consistency checks are made.
First of all, IDBD asks for values of certain constants. These are:
- maximum record length in number of characters
- counter length in number of characters
- pointer length in number of characters
- maximum secondary storage in number of characters that is allowed
- CALC storage factor, a factor greater than or equal to one, which tells how much more secondary storage than nominal is required for hashed storage
- CALC access factor, a factor greater than or equal to one, which tells how many accesses, rather than one, are required on average due to synonyms in hashed storage.
From the values of the first three of these constants and from the earlier description of data items, relations and run-units, the program decides which consistency constraints must hold. IDBD finds out for each relation which implementation alternatives (IAs) are possible, which are unwanted and which IAs are impossible (OFF). The lists of IAs for these three cases can be displayed for the DBA. For the possible and unwanted cases the DBA can interactively change IAs. There is also a fourth case that the DBA can use, namely to specify that one special IA is to be used (ON).
The reason for changing the lists of IAs can be that an IA is already fixed or that an IA gives an abnormal solution. (An example of an abnormal solution would be the case where "article-number" is suggested to be aggregated under "number-in-stock" instead of the other way around.) If the DBA reduces the solution space for the different relations, the execution time of the optimization phase is also substantially reduced. This is necessary for a large data base.
There are 21 different implementation alternatives, numbered 1-21 (see also [Ber-77B]). The IAs are displayed with their number. Here is a short description of each IA:
1 = Fixed duplication
2 = Fixed duplication reversed
3 = Variable duplication
4 = Variable duplication reversed
5 = Fixed aggregation
6 = Fixed aggregation reversed
7 = Variable aggregation
8 = Variable aggregation reversed
9 = Chain, next pointer
10 = Chain, next pointer reversed
11 = Chain, next + owner pointer
12 = Chain, next + owner pointer reversed
13 = Chain, next + prior pointer
14 = Chain, next + prior pointer reversed
15 = Chain, next + owner + prior pointer
16 = Chain, next + owner + prior pointer rev.
17 = Pointer array
18 = Pointer array reversed
19 = Pointer array + owner pointer
20 = Pointer array + owner pointer reversed
21 = Dummy record
For illustration, the implementation alternatives 15 and 17 are shown below.
For the different valid DBTG-structures IDBD calculates an estimate of the number of accesses to the data base for every run-unit. The need for secondary storage is also calculated.
The best solutions are presented to the user. "The best solutions" are those with the least number of accesses for a given secondary storage size. The solutions are presented in order of increasing number of accesses; to belong to the set of best solutions, each successive solution must at the same time require less secondary storage than the preceding ones. (The first 10 solutions are always added to the set.)
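One plausible reading of this selection rule, sketched in Python (the function and the always-keep threshold are illustrative, not IDBD's implementation):

```python
def best_solutions(solutions, always_keep=10):
    """solutions: list of (accesses, storage). A solution is kept if it
    needs less storage than every solution with fewer accesses; the
    first `always_keep` solutions are always included, as described above."""
    ranked = sorted(solutions)              # increasing number of accesses
    kept, min_storage = [], float("inf")
    for i, (acc, sto) in enumerate(ranked):
        if i < always_keep or sto < min_storage:
            kept.append((acc, sto))
        min_storage = min(min_storage, sto)
    return kept
```

With `always_keep=1` this reduces to a pure trade-off (Pareto) frontier between access count and storage size.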
For different DBTG-structures IDBD also gives some messages such as:
- an entry point for an item is missing
- there is no access path to an item
- there is no SYSTEM entry (sequential access) to an item
A typical user interaction is exemplified in the next section.
The analysis of input data will be illustrated by the following examples. IDBD has given the following output from the input data checking procedures for this particular case:
### Analysis of data items:
| name | size | card. | DA-ref.no | seq-ref.no | secur. |
|----------|------|-------|-----------|------------|--------|
| delivno | 5 | 150 | 120 | | |
| delivday | 6 | 30 | 30 | | |
| orderno | 6 | 300 | 300 | | |
| ord-part | 0 | 1500 | | | |
| partno | 8 | 2000 | 1630 | | |
| partinfo | 92 | 2000 | | | |
| amount | 8 | 1500 | | | |
| custno | 5 | 1000 | 20 | | |
| custname | 30 | 1000 | | | |
| custadr | 30 | 1000 | | | |
**Explanation:**
- DA-ref.no means the number of references (logical accesses), which are needed directly to the data item type
- seq-ref.no means the number of sequential accesses, which are needed to the data item type
These numbers give the data base administrator or designer some indication of which data items will need direct and/or sequential access.
### Analysis of relations:
| rel.name | type | T/P | item1 | item2 | card. | min/av/max1 | min/av/max2 |
|----------|------|-----|-------|-------|-------|-------------|-------------|
| dday-dno | 1:M | T/T | delivday | delivno | 150 | 1/1.0/1 | 1/5.0/20 |
| dno-ord | 1:M | T/T | delivno | orderno | 300 | 1/1.0/1 | 1/2.0/20 |
| cust-ord | 1:M | P/T | custno | orderno | 300 | 1/1.0/1 | 0/0.3/50 |
| c-cname | 1:1 | T/T | custno | custname | 1000 | 1/1.0/1 | 1/1.0/1 |
| part-o-p | 1:M | P/T | partno | ord-part | 1500 | 1/1.0/1 | 0/0.75/200 |
| o-p-amnt | 1:1 | T/T | ord-part | amount | 1500 | 1/1.0/1 | 1/1.0/1 |
**Explanation:**
- type is the type of mapping (1:1, 1:M, M:1, M:N)
- T/P stands for total (T) and partial (P) mappings between the data items in the domain and range of the relation
- min/av/max1 shows the minimum, average and maximum number of item1 related to one item2
- min/av/max2 is analogous, but concerns the reverse direction of the relation
### Analysis of reference frequencies to relations:
| rel.name | item1 | item2 | ref.no fwd | % | ref.no bwd | % |
|----------|-------|-------|------------|------|------------|------|
| dday-dno | delivday | delivno | 150 | 1.2 | 141 | 1.1 |
| dno-ord | delivno | orderno | 540 | 4.2 | 75 | 0.6 |
| cust-ord | custno | orderno | 443 | 3.5 | | |
| c-cname | custno | custname | 320 | 2.5 | | |
| ord-o-p | orderno | ord-part | 2700 | 21.1 | 98 | 0.8 |
| o-p-amnt | ord-part | amount | 2790 | 21.8 | 2100 | 16.4 |
| p-pinfo | partno | partinfo | 3000 | 23.5 | | |
**Explanation:**
- ref.no fwd shows the number of required references from item1 to item2 in the relation (forwards). Every update operation (DELETE, INSERT and MODIFY) is counted as 2 references.
- ref.no bwd shows the number of required references from item2 to item1 in the relation (backwards).
These numbers give the designer some indication of which relations are critical for the efficiency of the data base system. Those relations can then be analyzed in greater detail.
The analysis of run-units shows for each run-unit the following type of output (exemplified for run-units "custinf", "amount" and "ins-supp").
Look at/change implementation alternatives? (y/n) Y
Instructions? (y/n) y
You will see all or specified relations with their relation name and the consistency constraints that hold for the different implementation alternatives.
The different consistency constraints are:
- **ON** = the relation shall have this impl. alt.
- **POSSIBLE** = possible impl. alternatives
- **UNWANTED** = not desirable impl. alternatives
- **OFF** = these impl. alt. are not permitted
You can change among the **ON**, **POSSIBLE** and **UNWANTED** implementation alternatives by writing the implementation alternative number and the letter O, P or U respectively on one line.
Do you want all or specified relations displayed? (a/s) a
dday-dno
ON=
POSSIBLE= 1 2 3 5 7 11 15 19
UNWANTED= 9 13 17
OFF= 4 6 8 10 12 14 16 18 20 21
change? y
change=15 o
more changes? n
dday-dno
ON= 15
POSSIBLE= 1 2 3 5 7 11 19
UNWANTED= 9 13 17
OFF= 4 6 8 10 12 14 16 18 20 21
==
p-pinfo
ON=
POSSIBLE= 5
UNWANTED= 6 9 10 11 12
OFF= 1 2 3 4 7 8 13 14 15 16 17 18 19 20 21
change? n
Note: For all 1:M-relations except dno-ord the suggested IA is in this example set to 15, i.e. chain with both owner and prior pointer. For all 1:1-relations the only possible IA left is number 5, i.e. fixed aggregation.
IDBD will then continue with the following listing:
There are 8 different DBTG-structures to examine.
Do you want to reduce the solution space further? n
Do you want to limit the CPU-time when optimizing? n
Note: Normally IDBD will save the best 10 solutions (with the least number of accesses to the data base). In this case there are only 8 possible solutions, of which 6 are accepted as correct DBTG-structures.
4. The Design-aid
4.1 The algorithm
4.1.1 Interactivity The three design tools developed at the University of Michigan [Mit-75, Ber-77B, Pur-79] all work in a similar way. They are typically batch programs: from a specified input the tools produce, after an optimization phase, efficient data structures. Some problems with tools of this type are that
- they sometimes produce abnormal solutions
- they restrict themselves to a reduced solution space
- the designer has no possibility of testing his/her own data structures, which may be more natural or which may have other (non-quantifiable) desirable properties
- they take, in spite of sophisticated optimization algorithms, too long to run for normal-sized data bases.
To overcome these problems the design aid has to be interactive. The data base designer then has the possibility to manipulate the solution space so that these problems do not arise.
4.1.2 Validity constraints Certain validity constraints must be satisfied in order to arrive at a valid DBTG data structure.
Some of these rules cannot be checked until a DBTG structure with records and sets is created. The validity constraints, which are then tested, are
- record lengths are within permitted limits
- a record type has no repeating groups on a level > 1
- a data item type is not aggregated more than once
- a set does not have the same record type as both owner and member.
Most of the validity constraints can, however, be checked before the interaction with the data base designer takes place. This is done by determining which IAs are valid or not valid for every relation. Unlike the Michigan design tools, IDBD does not just distinguish between valid and invalid IAs but also categorizes valid IAs into preferable (possible) and unwanted IAs. In this way the tool helps the designer to choose good data structures. If the designer lets IDBD decide, it then has a smaller number of IAs to consider, and the solution space is thereby considerably reduced.
Some of the validity constraints, which are checked before the interaction, are
- maximum record length is not violated
- if the data item types in the relation have different security codes, duplication and aggregation are ruled out
- for a relation where the mapping from A to B is partial, i.e. there exist B data items that are not related to any A data item, it is impossible to aggregate B under A in a way that represents all B data items.
Suppose there is a 1:M relation between A and B data items, so that one A data item is related to zero, one or more B data items. The validity constraints checked in this case are (there are analogous rules for 1:1-, M:1- and M:N-relations):
- it is impossible to aggregate A under B
- a chain or pointer array cannot be used in the reversed direction
- there is no need for dummy records
- duplicating of A under B is done with fixed length, not variable length
- if references in the run-units only traverse in the direction of the relation, e.g. from A to B, there is no need for owner pointers or for duplication of A under B
- if there is only traversing in the opposite direction, duplication or aggregation of B under A and chain or pointer array without owner pointer are made unwanted
- if there is traversing in both directions, owner pointers are needed, i.e. chains or pointer arrays without owner pointer are made unwanted.
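These direction-based rules can be sketched as a small classifier. The IA names below are symbolic stand-ins, not IDBD's numbering:

```python
def classify_ia(traverse_fwd, traverse_bwd):
    """For a 1:M relation from A to B, return the set of implementation
    alternatives made unwanted by the traversal pattern of the run-units."""
    unwanted = set()
    if traverse_fwd and not traverse_bwd:
        # only A -> B references: owner pointers and duplicating A under B
        # buy nothing
        unwanted |= {"owner-pointer", "duplicate-A-under-B"}
    elif traverse_bwd and not traverse_fwd:
        unwanted |= {"duplicate-B-under-A", "aggregate-B-under-A",
                     "chain-no-owner", "pointer-array-no-owner"}
    elif traverse_fwd and traverse_bwd:
        # both directions: implementations without owner pointers are poor
        unwanted |= {"chain-no-owner", "pointer-array-no-owner"}
    return unwanted
```

The real tool applies such rules per relation, before the DBA interaction, to shrink the ON/POSSIBLE/UNWANTED/OFF lists.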
4.1.3 Cost calculation The storage cost is calculated as the sum of the lengths of all records, pointers and wasted areas for hashed record types. These wasted areas are estimated with the aid of the parameter "CALC storage factor".
There are a few assumptions about the record type implementation and the DBTG data base management system. If a record type is only accessed with direct access, then it is assumed that the record type will be hashed (CALC in Codasyl). If a record type is accessed sequentially, then the record type is made a member of a Codasyl SYSTEM set (singular set). If there is a need for direct access to more than one data item type in a record type, then a secondary index is created. This index is assumed to be implemented by a pointer array.
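These record-type implementation rules can be sketched as follows (a hypothetical helper; IDBD's actual decision procedure may differ in detail):

```python
def record_implementation(direct_items, sequential_access):
    """direct_items: data item types in the record needing direct access;
    sequential_access: whether the record type is browsed sequentially."""
    impl = []
    if direct_items:
        impl.append("CALC")        # hashed placement on the first direct key
    if sequential_access:
        impl.append("SYSTEM set")  # member of a Codasyl singular set
    if len(direct_items) > 1:
        # one secondary index (pointer array) per additional direct item
        impl += [f"index on {item}" for item in direct_items[1:]]
    return impl
```

For example, a record type with two directly accessed items would get hashed placement plus one secondary index.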
The access cost is calculated for each run-unit. We differentiate between three types of access costs.
These are the number of:
- sequential accesses, where a scan through the records for a certain record type is made. These accesses can be cheaper than the other types, if the records are clustered into blocks.
- CALC accesses, which are accesses to hashed records. Their number is multiplied by the parameter "CALC access factor"
- pointer accesses, which are the accesses through pointers.
The number of accesses calculated is an estimate of the number of physical accesses. Records stored in the same block, or already present in primary-storage buffers, are not counted.
To get an estimate of the number of accesses, IDBD takes care of the hierarchical structure of the run-units, with different numbers of data items required on each level. Average values are used. IDBD also considers
- the cases where the data item types are stored in the same record type
- what kind of data entry there is defined to the record type
- the combination of IAs and verbs in the run-unit.
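A minimal sketch of how the three access counts might combine into a per-run-unit and total estimate (the factor value and the weighting by run-unit cardinality are assumptions consistent with the description above, not IDBD's exact formula):

```python
def run_unit_cost(seq, calc, ptr, calc_access_factor=1.5):
    """Estimated accesses for one execution of a run-unit; CALC accesses
    are inflated by the CALC access factor to account for synonym chains
    in hashed storage."""
    return seq + calc * calc_access_factor + ptr

def total_cost(run_units, calc_access_factor=1.5):
    """run_units: list of (cardinality, seq, calc, ptr); the per-execution
    cost is weighted by how often the run-unit is executed."""
    return sum(card * run_unit_cost(s, c, p, calc_access_factor)
               for card, s, c, p in run_units)
```

Note that sequential accesses can in reality be cheaper than the other kinds when records are clustered into blocks, which this flat sum ignores.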
As an example of the number of pointer accesses calculated, consider the verbs GET FIRST, MODIFY FIRST and INSERT FIRST for an implementation alternative with a chain with prior pointers, for instance IA=15. Table 1 shows the figures.
### Table 1. Number of pointer accesses for certain verbs
<table>
<thead>
<tr>
<th>Run-unit verb</th>
<th>Number of pointer accesses</th>
</tr>
</thead>
<tbody>
<tr>
<td>GET FIRST</td>
<td>1</td>
</tr>
<tr>
<td>MODIFY FIRST</td>
<td>2</td>
</tr>
<tr>
<td>INSERT FIRST</td>
<td>5</td>
</tr>
</tbody>
</table>
In the case of INSERT FIRST both the owner and the previous first member have to be read and written, and the new first member must be written, i.e. 5 accesses.
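The INSERT FIRST count can be checked by enumerating the record touches (a back-of-the-envelope sketch, assuming the chain maintenance described above):

```python
# INSERT FIRST into a chain with owner and prior pointers: the owner's
# first pointer and the old first member's prior pointer must be updated,
# so both records are read and written, and the new member is written.
touches = [("owner", "read"), ("owner", "write"),
           ("old first member", "read"), ("old first member", "write"),
           ("new member", "write")]
accesses = len(touches)
print(accesses)  # 5, matching Table 1
```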
### 4.2 The results
Each solution alternative contains information about:
- record structures
- set types
- total storage cost
- total number of accesses
- the access cost for each run-unit.
This information is, for each solution, displayed as follows (exemplified by the solution alternative with the lowest access cost).
There are 6 record types
Record type no 1 (delivday) has 30 records
0 delivday 6 char. CALC access
Record length: 6 char.
Record type no 2 (delivno ) has 150 records
0 delivno 5 char. CALC access
Record length: 5 char.
Record type no 3 (orderno ) has 300 records
0 orderno 6 char. seq. access
Record length: 6 char.
Record type no 4 (custno ) has 1000 records
0 custno 5 char. CALC access
1 custname 30 char.
1 custadr 30 char.
Record length: 65 char.
Record type no 5 (ord-part) has 1500 records
0 ord-part 0 char.
1 amount 8 char.
Record length: 8 char.
Record type no 6 (partno ) has 2000 records
0 partno 8 char. CALC access
1 partinfo 92 char.
Record length: 100 char.
There are 5 set types
Set type no 1 (dday-dno) has 150 instances
Owner = record type no 1 30 records
Member = record type no 2 150 records
The set is implemented with DBTG-set
with owner pointer and prior pointer
Set type no 2 (dno-ord) has 300 instances
Owner = record type no 2 150 records
Member = record type no 3 300 records
The set is implemented with DBTG-set
with owner pointer
Set type no 3 (cust-ord) has 300 instances
Owner = record type no 4 1000 records
Member = record type no 3 300 records
The set is implemented with DBTG-set
with owner pointer and prior pointer
Set type no 4 (ord-o-p) has 1500 instances
Owner = record type no 3 300 records
Member = record type no 5 1500 records
The set is implemented with DBTG-set
with owner pointer and prior pointer
Set type no 5 (part-o-p) has 1500 instances
Owner = record type no 6 2000 records
Member = record type no 5 1500 records
The set is implemented with DBTG-set
with owner pointer and prior pointer
The storage cost for the database is 391836 char.
Run-unit custinf has the access cost:
seq. access = 0
CALC access = 1
pointer access = 25
Run-unit ord-day has the access cost:
seq. access = 0
CALC access = 1
pointer access = 3
Run-unit amount has the access cost:
seq. access = 0
CALC access = 1
pointer access = 2
Run-unit orders has the access cost:
seq. access = 300
CALC access = 0
pointer access = 3000
Run-unit del-sup has the access cost:
seq. access = 0
CALC access = 1
pointer access = 77
Run-unit ins-sup has the access cost:
seq. access = 0
CALC access = 1
pointer access = 106
Run-unit ins-cust has the access cost:
seq. access = 0
CALC access = 1
pointer access = 3
Run-unit upd-part has the access cost:
seq. access = 0
CALC access = 1
pointer access = 1
The total number of accesses = 14045
Table 2 shows the storage and access costs for the 6 alternatives in this sample case.
| Solution | Access number | Storage cost | No. of record types | No. of set types |
|----------|---------------|--------------|---------------------|------------------|
| 1 | 14045 | 391836 | 6 | 5 |
| 2 | 14165 | 393036 | 6 | 5 |
| 3 | 14313 | 420636 | 6 | 5 |
| 4 | 76512 | 390336 | 6 | 4 |
| 5 | 100025 | 391336 | 6 | 4 |
| 6 | 100025 | 410436 | 6 | 4 |
**TABLE 2. Access and storage costs**
The schemata for solution alternatives 1 and 4 are graphically illustrated in figures 1 and 2.

**Figure 1. Solution alternative no. 1**
In solution alternative no. 4, delivno is duplicated under orderno. This means that to get the orderno for a given delivno, the records containing orderno with the duplicated delivno have to be scanned. That is why the access cost rises dramatically in this case.
4.3 The tool
IDBD runs on VAX computers under the UNIX operating system. The program call is a standard UNIX call:
```
```
It is possible to name the different files as parameters of the program call. Otherwise IDBD will ask for the file names during execution. The different files are:
- **infile**: The name of the input data file, which contains the data items, the relations and the run-units.
- **outfile**: The name of the file, where the lengthy listings are stored. The filename /dev/tty means the terminal itself. The filename /dev/null produces no output file.
- **parmfile**: Instead of answering questions about the values for the different parameters, these can be stored in a file. The name of that file is given with this parameter.
- **rfile**: If there is just one DBTG structure to analyze, put the IAs for the relations 1, 2 etc. in a file and name that file with this parameter at the program call.
- **sofile**: It is possible to interrupt the program after the input phase in order to examine the output from the input checking procedures. In that case, give this parameter; the data is saved in the file sofile.
- **sifile**: To continue processing after an interrupt with -s sofile give the name of the saved file with this parameter.
5. Discussion
Discussions with practitioners in the field have disclosed that one is not always interested in optimizing the total DB schema. For many reasons (reliability, security, modifiability, comprehensibility, etc.) the designers may wish to implement parts of the database in a specific way and leave the rest of it (if any) to the "optimizer". The difficulty of predicting future workloads may also make optimization in the strict sense more an intellectual exercise than a realistic approach. An optimal solution alternative may, nevertheless, have a merit: it provides a "base-line" against which other, more "natural", solutions can be measured in terms of access and storage costs.
IDBD has the flexibility to optimize the design within a predefined, desired scope. Tests show that this is a valuable property for making the tool useful in a variety of practical design situations.
Sam Cogan
Business Information Systems Year 4
07702213
HTML-U
www.tinypages.ie/htmlu2
Technical Report
1 Executive Summary
This project was created to enable beginning web designers to get a feel for what HTML and CSS are, and for the basics of designing and running a website. The web app encourages users to create a profile and log in to avail of the entire range of services. The sign-up process allows users to choose how they think they learn best, and angles their profile to suit them by pointing them towards key places on the site. If a user chooses project-based learning, for example, they can track their progress in their profile. The website was built with HTML and CSS, and uses PHP to interact with an online SQL database.
The full range of features includes:
- ‘Tip of the day’ section on every page
- A code archive section, with semantics of CSS and HTML tags
- User profiles that change depending on the user type
- Projects with progress tracking
- Forum
- Social networking (Facebook/Twitter)
2 Introduction
2.1 Background
This application was built because there are currently no truly user-friendly sites for complete beginners to web design. Web development is one of the few industries thriving in the recession, and, with the new web standards, it is increasingly important to have a good knowledge of the basics of HTML and CSS before moving on to more complex topics.
The application is web based, and users can log in and out from multiple devices and any location.
HTML-U is designed and created using HTML and CSS. It also uses PHP interacting with an online SQL database to perform the login, signup, project tracking and other processes.
The customer interested in HTML-U is Merrill Goussot, team lead in a Dublin based web Development Company. He is also an entrepreneur and has many websites both commercial and non profit. Merrill was approached in November, but was not able to commit to the project until January.
2.2 Aims
HTML-U aims to create a unique user experience for beginners in web development, following requirements and guidelines set down by the customer. It will be used by beginning web designers of all ages.
2.3 Technologies
As HTML-U is designed to give users a base in web development, it logically followed that the technologies used were those used in web development. The web app is built in HTML using CSS to design it. It also uses PHP interacting with an online SQL database. There are also elements of javascript, such as in the randomly shown ‘tip of the day’ section. It was designed and developed on a PC running Windows 7, using Notepad++, GIMP, Dropbox and others. It currently runs on hosting procured from Register365.
2.4 Structure
Chapter 1 is the executive summary, giving the user a quick overview of the document.
The 2nd chapter outlines the background and aims of the project.
The third chapter describes the technical aspects of the project.
The fourth chapter contains the conclusions of the project.
The fifth chapter outlines further development and research planned for the project.
The sixth chapter contains the bibliography of all the resources used to complete the project.
The last chapter is the appendix, containing all extras called upon.
3 System
3.1 Requirements
3.1.1 Functional requirements
3.1.2 Requirement 1
System Display
3.1.2.1 Description & Priority
The system should display all features correctly. This feature is necessary for the acceptable functioning parameters of the system and is the highest ranked Functionality Requirement.
3.1.2.2 Requirement Activation
This requirement is essential as the web app is purely GUI based. This requirement is activated as soon as the user fully loads any page of the web app.
3.1.2.3 Technical issues
Multiple CSS designs will have to be created to accommodate all users, as well as accounting for users who don’t display images or have disabilities.
3.1.2.4 Risks
The main risk for this requirement is that not all users will be able to display the site correctly on their particular browser or system.
The most likely solution to this risk will be initial extensive testing, and creation of multiple CSS files to accommodate the most common systems and set ups. If a user is still unable to display the site correctly, they can log feedback with us through a feedback email system.
3.1.2.5 Functional Requirements
3.1.3 Requirement 2
Sign up/Profile creation
3.1.3.1 Description & Priority
Upon loading the app, users are prompted to register for full features. This requires users to enter information such as name, username, password and the type of learning style they prefer. This is a secondary requirement: although a login system and user profiles are integral parts of the marketing of this system, the site can function without them.
3.1.3.2 Requirement Activation
The user will choose to sign up to the site. They will enter all their details (username, password etc) which will be logged in a database. Once they have their details logged, they can immediately log into their profile.
3.1.3.3 Technical issues
A database must be created in order to satisfy this requirement. The database will store all the user information, which includes usernames, passwords etc.
3.1.3.4 Risks
The user may enter the wrong information accidentally; if this happens the user will be able to edit their details from within their profile.
3.1.3.5 Dependencies with other requirements
None.
3.1.4 Data requirements
Data will be stored in two databases. One database for the user profiles, and one for all the data passed on the Forum.
3.1.5 Requirement
User database
3.1.6 Description & Priority
The site must successfully pass data to the database. This data requirement is of high priority.
3.1.7 User requirements
Users must have an internet connection and a device with a web browser
3.1.8 Usability requirements
Users must be able to use the site and begin their learning with only the most basic knowledge of HTML and CSS.
3.2 Design and Architecture
The system is held on a server as an extension to the website tinypages.ie; web space was provided by TinyPages Web Design. The website is split into three main categories: HTML, CSS, and FORUM pages. Each category has its own page with many links leading off it. Additionally, a signed-up user will have a members section, with the above-mentioned categories in it. Until a user tries to access a members-only area, the site runs purely on HTML and CSS; after that, PHP is used to access a database held online.
### 3.3 Implementation
On the main site, for users to sign up, log in, log out, and alter their profile, a database was created to store their information. This data is captured on sign-up, and is read by a PHP file during login. All data stored in the database is encrypted, and the database itself is routinely backed up.
The site works as follows:
**Sign up**
1. The User fills out the signup form and hits submit.
2. The form uses PHP to process the submitted information and checks that the passwords match/all fields have been filled etc.
3. If the submitted data passes this test, username, password and usertype are passed to the database.
4. A few additional columns are added to that user also, (for Project tracking) and are autofilled as NULL.
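The sign-up checks in steps 2–4 can be sketched as follows (field and column names such as `password2` and `project1` are assumptions; the actual site performs this server-side in PHP):

```javascript
// Sketch of the sign-up processing: verify the submitted form, then build
// the row that would be passed to the database, with the extra
// project-tracking columns autofilled as NULL (null here).
function processSignup(form) {
  const required = ["name", "username", "password", "password2", "usertype"];
  // All fields must have been filled...
  for (const field of required) {
    if (!form[field]) return { ok: false, error: "missing " + field };
  }
  // ...and the two password fields must match.
  if (form.password !== form.password2) {
    return { ok: false, error: "passwords do not match" };
  }
  // Only username, password and usertype are passed on to the database.
  return {
    ok: true,
    row: {
      username: form.username,
      password: form.password,
      usertype: form.usertype,
      project1: null,
      project2: null,
    },
  };
}
```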
**Log in**
1. The User fills in a form with their username and password.
2. If both username and password match a row in the database, all that user's info is pulled, and they are redirected to a members page. A cookie is created for the session, with a timeout of 6 minutes should the user remain inactive.
3. The user's profile is then displayed with a personalised welcome message. The PHP script also checks the user in the database and, depending on the usertype originally submitted, a switch statement pulls an HTML page suitable for that usertype.
4. The user's project progress is also displayed on the page. An IF statement determines that if a project is entered as NULL in the database, it will display as unfinished; however, if a user has finished a project, the database is updated and a project completion is displayed.
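Steps 3 and 4 amount to a switch on the stored usertype plus a NULL check per project column, sketched here in JavaScript (the usertype values and page names are assumptions; the real site does this with a PHP switch statement):

```javascript
// Sketch of the profile rendering: pick a member page by usertype, and map
// each project column to a status (NULL means the project is unfinished).
function renderProfile(user) {
  const pagesByType = {
    project: "project-member.html",
    visual: "visual-member.html",
  };
  const page = pagesByType[user.usertype] || "default-member.html";
  const projects = user.projects.map(
    (p) => (p === null ? "unfinished" : "completed")
  );
  return { welcome: "Welcome, " + user.username + "!", page, projects };
}
```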
**Logout**
1. If a user chooses to log out, PHP is used to effectively destroy the cookie held by the computer.
**Projects**
1. If a user completes a project, they can choose to tick it off on the project page. When a user hits ‘submit’ PHP checks the username held by the cookie, and also that the user is still logged in, and updates the database by adding a ‘1’ to the project row for that user.
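The project tick-off above boils down to a guarded update; a sketch with an in-memory stand-in for the database (the real check reads the username from the session cookie and updates the SQL database via PHP):

```javascript
// Sketch of the project completion update: only a session that is still
// logged in and matches a known username gets a '1' written to the
// project column. Column names here are assumptions.
function completeProject(db, session, projectColumn) {
  const user = db[session.username];
  if (!user || !session.loggedIn) return false; // must still be logged in
  user[projectColumn] = "1"; // mark the project complete
  return true;
}
```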
Another part of the site that stores user information is the Forum. The Forum uses an entirely separate database, and is backed up separately.
3.4 **Graphical User Interface (GUI) Layout**
A Graphical User Interface (GUI) is implemented by HTML-U, and is compatible with peripheral devices like mice and keyboards, as well as touch screen devices such as iPad etc. This allows users of all ages, types, and importantly multiple platforms to access and browse the site as they please.
3.5 **Customer testing**
<table>
<thead>
<tr>
<th>Date:</th>
<th>People:</th>
<th>Event:</th>
</tr>
</thead>
<tbody>
<tr>
<td>07/1/11</td>
<td>Merrill and Sam</td>
<td>Merrill was unhappy with design, both navigation and aesthetics, worked through new design on paper</td>
</tr>
<tr>
<td>13/02/11</td>
<td>Merrill and Sam</td>
<td>Merrill was happy with progress, but wanted me to add the Forum asap</td>
</tr>
<tr>
<td>19/3/11</td>
<td>Merrill and Sam</td>
<td>Merrill was unhappy with progress, and we reviewed time scheme in light of exams</td>
</tr>
<tr>
<td>April</td>
<td>Merrill and Sam</td>
<td>Site was reviewed and redesigned to target only beginners to HTML and CSS</td>
</tr>
<tr>
<td>April</td>
<td>Merrill Andrew and Sam</td>
<td>Site architecture was reviewed by Merrill (with Andrew also) Andrew had doubts about system design and recommended using iFrames.</td>
</tr>
<tr>
<td>May</td>
<td>Colm and Sam</td>
<td>Discussed integration of Social networking, and decided to use only basic networking to begin. Also discussed total aesthetic redesign.</td>
</tr>
<tr>
<td>May</td>
<td>Merrill and Sam</td>
<td>Final meeting. Merrill approved of redesign, and progress. Forum was planned.</td>
</tr>
</tbody>
</table>
3.6 **Evaluation**
An online survey was used to evaluate HTML-U, results are as follows.
*results rounded up to integers
*survey undertaken by 42 unique users
1. 94% of people found the site Great or Good to navigate through.
2. 60% of people said that they would find this site useful
3. Users ranked User profiles, the Forum and Projects as the most important aspects of the site to them.
4. When users were asked if they would make any changes, the results were overwhelmingly in favour of more information being added.
The project was also evaluated by web developers from Facebook (independently from their work at FB), and the customer Merrill Goussot and all three echoed the idea of more features being added, while agreeing that functionality was fine. Many thanks to Andrew, Colm and Merrill for their time, as well as Dietmar and Orla for their input.
3.7 Discussions
From discussions with Orla Lahart, I gathered that at the very least a log in system would have to be implemented to make the project complex enough to be saleable.
Discussion with Dietmar Janetko led to the idea of a chat bot as an FAQ system; unfortunately, time constraints have meant this was not implemented, although it is planned for the final project.
From discussion with Merrill I built the requirement spec, and planned a marketing and ad sales campaign for future builds.
The surveys led to a focus on the profile system and forum.
4 Conclusions
HTML-U was both a challenging and rewarding project. As I have no PHP experience and only basic knowledge of SQL, I found the work at times to be almost insurmountable; however, keeping at it and simplifying what seemed like complex processes was the key to getting it done.
I feel that HTML-U is a very valid and useful product, and I will definitely be putting more time and improvements into it in the next few months. I feel that a fully functioning HTML-U would be a widely used product for web developers everywhere.
A number of NCI students have expressed an interest in using the site to improve or even restart their learning of web development and I will be remaining in contact with these students and using them as a testing group and developing the future site around their needs.
5 Further development or research
I plan to add a lot to HTML in the future. Some of the improvements are:
- More projects
- More extensive code archive
- Examples for code segments
- Video tutorials (members only)
- Facebook app
6 Bibliography
Murphy C & Persson N (2007) *HTML and CSS Web Standards Solutions*. UK. All pages
7 Appendix
7.1 Project Proposal
Project Proposal
**HTML-U**
Sam Cogan
07702213
cogansam@gmail.com
Objectives
My objective is to create an interactive web application where people beginning their education in web development can learn the basics of web development, the best way to implement their skills, and all the tools they will need to begin the process.
I view my market as any institution that offers any sort of web design course that involves HTML and/or CSS. I feel that there is a need to create an application that caters to all teaching and learning styles that can both act as a supplement to a lecturer’s material or a source for all their HTML and CSS teaching needs. I plan to have both a useful free and open source aspect to the platform and a “premium account” aspect where lecturers can assess student’s needs and address them with readymade tutorials and project work.
Although I currently have interest from a Technical Analyst of Platform Operations, who has experience working with Facebook Ireland and Europe, I hope to target lecturers that feel that there is a need to develop such a system, and would be interested in guiding and supporting the project.
I hope to also develop interest in the application by creating additional Facebook applications to test users learning abilities and CSS and HTML knowledge.
In the long run I see this application becoming a service software package that I will aim at 2nd and 3rd level education systems, as well as those looking to start a career in web design.
My web application will consist of a few main elements
- A comprehensive archive of both supported and unsupported HTML and CSS codes, what browsers offer support, and the semantics of implementing the code.
- Useful code segments for more advanced users, so as they can quickly implement code.
- Tutorials and sporadic testing to ensure that users know the right way to code to current web standards.
- A tools section for those starting out with a comprehensive list of the software/hardware needed for a user to get their site online.
- A forum so that people can ask and answer more specific questions on HTML and CSS.
- A chatbot written into the main page where people can ask FAQ. I hope to develop the chatbot to a level where users can both hold a conversation and have the answers routed to their email.
- Later updates could see a premium members section, where lecturers can sign up to an account through PayPal, and receive lesson plans.
- Later updates could also see a search function added so as people can easily search for specific terms within the site.
Background
My interest in web design started when I was still in second-level education, merely as a hobby. By the time I entered third-level education, I could create a basic website purely in HTML. Up until this point I had never used CSS, or even known what it was. When I began to learn about CSS in college, I found it quite difficult to pick up. Despite it being no harder and no less interesting than HTML, I found it boring, repetitive and hard to learn. Despite having been one of the top of the class when we covered HTML, while other people had struggled, I now found myself at the bottom, struggling. It wasn't until my third year in college that I understood why.
During my work experience, I was given the task of creating many websites, html emails, banners etc in HTML. As a result I was forced to learn CSS, and learn it on my own. My method was to look up forums/tutorials and code segments online. All through Google and all on my own. I learned more in my first week, than I had the three previous years in college. However, in contrast, my colleague that had joined with me, found this approach quite hard, and things would only really sink in when someone showed him, or told him how to do it. I realised that, although the two approaches yielded different results, neither approach was in any way better. I realised that there were many approaches to learning, and I wished that there was a system that would address this. That was when my idea for this project came about.
Although I realised that using two or three teaching methods in any sort of lecture or education system was not possible or feasible, I saw no reason as to why there shouldn’t be an online resource where you could choose the way to learn that suits you best, and learn at your own pace, as a standalone system, and/or a supplement to lectures. And so, I began to develop the idea of HTML-U, a series of tutorials, lectures and code archives, that help the user learn, at their own pace, in their own way. I feel that this application will address the need to be able to offer different materials to different students, some that may have difficulties learning in a widely used way.
Technical Approach
My project will run through a 6 step approach as such:
Requirement gathering and analysis
- During the first stage of the process I shall find a customer interested in supporting and guiding the project.
• I will look for expert analysis from two or more lecturers and experts on the subjects of artificial intelligence, education, design and marketing.
• My target user group will be researched by means of focus groups and online surveys. My key data will be taken from those with previous experience with HTML and CSS on all levels.
• Other similar projects will be looked into, so as to identify how to improve and differ so as to develop a niche gap in the market.
System Design
• During this process the application will be critically assessed, and from there on the system design will be prepared.
• Both the hardware/software/user requirements will be cemented and the system architecture will be laid down.
Implementation
• This is where my actual coding will begin. I shall break the project up into mini projects or modules (i.e. aesthetic design, search function etc.)
• Once each unit is created and tested on its own and once these meet specification standards the next step will commence.
Testing
• The modules will now be integrated into one cohesive project.
• This will then be extensively tested by both a testing group from my target market and myself.
Deployment
• Once the testing phase is over, the application will be delivered to the customer.
Maintenance
The site will undergo scheduled maintenance, to ensure that the best learning techniques are implemented, as well as making sure that all code is up to date and supported, and as HTML5 develops, an improved section will be added.
- General system maintenance such as backup of the forum and databases will occur.
- Problems that were not discovered during the development cycle will be logged and fixed as they happen.
**Special resources required**
**Books:**
- HTML and CSS Web Standards Solutions - Christopher Murphy, Nicklas Persson
- HTML Dog: The Best-Practice Guide to XHTML and CSS - Patrick Griffiths
- Head First HTML with CSS & XHTML - Eric T Freeman, Elizabeth T Freeman
- FBML Essentials: Facebook Markup Language Fundamentals - Jesse Stay
- Learning Styles
**Software:**
Text editor (Notepad ++)
Ftp software (Win Scp)
Webspace (domain name/hosting)
Chatbot software
GIMP (Picture editing suite)
XAMP
PhpBB Forum software:
**Hardware:**
Internet enabled PC
Wacom Graphics Tablet
**Technical Details**
The project will be written in HTML with CSS fully supporting it. There will be elements of PHP also, both in the chatbot, and in the web application itself, in the form of mini quizzes/tutorials. There will be a “Tips” toolbar on each page. These will run off javascript and will be randomly generated. Another language to be used is AIML. AIML will be used for coding the chat bot. The project will be tested and worked on by uploading to a webspace.
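The randomly generated "Tips" toolbar mentioned above can be sketched in a few lines of JavaScript (the tips themselves are placeholders, not taken from the site):

```javascript
// Sketch of a 'Tip of the day' picker: choose one tip uniformly at random
// on each page load. The injectable `random` parameter makes it testable.
const tips = [
  "Close every tag you open.",
  "Keep presentation in CSS, not in HTML attributes.",
  "Validate your markup against current web standards.",
];

function tipOfTheDay(random = Math.random) {
  return tips[Math.floor(random() * tips.length)];
}
```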
A main part of the site will be its login system and user profiles. Users will be able to enter information on their profiles and have it displayed, and their information will be held on a database. The form for the login system will use HTML, CSS, PHP, MySQL and jQuery.
**Main languages used:**
- Html
- Css
- Php
- MySql
<table>
<thead>
<tr>
<th>Milestones</th>
<th>Status</th>
<th>Completion</th>
</tr>
</thead>
<tbody>
<tr>
<td>Installation of components (phpMyAdmin etc)</td>
<td></td>
<td>28/09/10</td>
</tr>
<tr>
<td>Obtain Domain name and Hosting</td>
<td></td>
<td>01/10/10</td>
</tr>
<tr>
<td>Site design</td>
<td></td>
<td>10/10/10</td>
</tr>
<tr>
<td>Develop quiz logic</td>
<td></td>
<td>20/10/10</td>
</tr>
<tr>
<td>Database design</td>
<td></td>
<td>01/11/10</td>
</tr>
<tr>
<td>Create Log in Script</td>
<td></td>
<td>20/11/10</td>
</tr>
<tr>
<td>Develop user profiles</td>
<td></td>
<td>27/11/10</td>
</tr>
<tr>
<td>Incorporate Social networking tab (Like buttons, Retweets)</td>
<td></td>
<td>30/11/10</td>
</tr>
<tr>
<td>Add Code Archive (HTML, CSS)</td>
<td></td>
<td>7/12/10</td>
</tr>
<tr>
<td>Add tutorials</td>
<td></td>
<td>14/12/10</td>
</tr>
<tr>
<td>Add projects/demonstrations</td>
<td></td>
<td>21/12/10</td>
</tr>
<tr>
<td>Add “premium” accounts (PayPal)</td>
<td></td>
<td>14/01/11</td>
</tr>
<tr>
<td>Add Search function</td>
<td></td>
<td>20/01/11</td>
</tr>
<tr>
<td>Develop and add Chat bot</td>
<td></td>
<td>31/01/11</td>
</tr>
<tr>
<td>Focus group/Analysis</td>
<td></td>
<td>01/02/11</td>
</tr>
<tr>
<td>Site review</td>
<td></td>
<td>03/02/11</td>
</tr>
<tr>
<td>Add forum</td>
<td></td>
<td>07/02/11</td>
</tr>
<tr>
<td>Develop facebook App</td>
<td></td>
<td>30/02/11</td>
</tr>
</tbody>
</table>
Consultation 1
Dietmar Janetzko.
I consulted Dietmar on my plans to build an educational web application, and discussed the fact that I wanted to integrate a chatbot into the main page of my website, as an alternate FAQ section. He advised me that a chatbot that routes its answers to a user's email would be a very interesting and useful way of implementing a chatbot, and that users would benefit greatly from this system.
Consultation 2
Orla Lahart
I consulted Orla on my plans to use a method of identifying how a user learns, and then pointing them to a specific section of the site based on those results. She advised me that a login system with profiles that store user information would be extremely useful to the user, as they would not have to take a quiz every time they entered the site. She advised me to use MySQL and PHP to build the script and to use the VARK system or similar existing system to calculate how users learn.
_________Sam Cogan 23/09/10_________
Signature of student and date
7.2 Requirement Specification
Title Requirements Specification (RS)
Document Control
<table>
<thead>
<tr>
<th>Date</th>
<th>Version</th>
<th>Scope of Activity</th>
<th>Prepared</th>
<th>Reviewed</th>
<th>Approved</th>
</tr>
</thead>
<tbody>
<tr>
<td>14/10/2005</td>
<td>1</td>
<td>Create</td>
<td>AB</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>21/10/05</td>
<td>2</td>
<td>Update</td>
<td>CD</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Distribution List
<table>
<thead>
<tr>
<th>Name</th>
<th>Title</th>
<th>Version</th>
</tr>
</thead>
<tbody>
<tr>
<td>Paul Stynes</td>
<td>Lecturer II</td>
<td>2</td>
</tr>
</tbody>
</table>
### Related Documents
<table>
<thead>
<tr>
<th>Title</th>
<th>Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td>Title of Use Case Model</td>
<td></td>
</tr>
<tr>
<td>Title of Use Case Description</td>
<td></td>
</tr>
</tbody>
</table>
Table of Contents
1 Introduction
1.1 Purpose
1.2 Project Scope
1.3 Definitions, Acronyms, and Abbreviations
2 User requirements definition
3 System architecture
4 Requirements specification
4.1 Physical environment requirements
4.1.1 Requirement 1 <name of requirement in a few words>
4.1.2 Description & Priority
4.2 Interface requirements
4.2.1 Requirement 1 <name of interface requirement in a few words>
4.2.2 Description & Priority
4.3 Functional requirements
4.3.1 Requirement 1 <name of requirement in a few words>
4.3.2 Requirement 1 <name of requirement in a few words>
4.4 Documentation requirements
4.4.1 Requirement 1 <name of document requirement in a few words>
4.4.2 Description & Priority
4.5 Data requirements
4.5.1 Requirement 1 <name of data requirement in a few words>
4.5.2 Description & Priority
4.6 Non-Functional Requirements
4.6.1 Performance/Response time requirement
4.6.2 Availability requirement
4.6.3 Recover requirement
4.6.4 Robustness requirement
4.6.5 Security requirement
Introduction
Purpose
The purpose of this document is to set out the requirements for the development of a web-based application for teaching HTML and CSS. The intended customers are 2nd and 3rd level lecturers, for use both as a supplemental reference system and as a standalone learning system that they can point their students to.
Project Scope
The scope of the project is to develop a fully functioning HTML and CSS educational system. The system shall have a number of main features such as:
- An initial test to determine the user’s specific learning style.
- A custom profile that takes into account the user’s learning style and creates a custom profile they can log into, which holds their details and learning style.
- Their profile points them to information that suits their learning style, but they can access all areas of the site if needed.
- Other main features include an extensive archive of both HTML and CSS code, an archive of professionally designed tutorials, and a set of immersive and useful projects and code segments.
There will also be a number of secondary features such as:
- A chat bot built into every page, as an alternate and personable method of creating an FAQ.
- A “tips” toolbar on every page, which will be randomly generated and show a large number of useful tips on all things HTML and CSS.
- A forum where users can ask each other more specialized questions and share advice and tips.
- A Facebook app to promote the system and attract a larger fanbase.
John Smyth was involved in discussions with John Ryan from AN Company Ltd. to elicit the following requirements.
This section also details any constraints that were placed upon the requirements elicitation process, such as schedules, costs, or the software engineering environment used to develop the requirements.
Definitions, Acronyms, and Abbreviations
<table>
<thead>
<tr>
<th>Acronym</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>FB</td>
<td>Facebook</td>
</tr>
<tr>
<td>ES</td>
<td>Educational System</td>
</tr>
<tr>
<td>HTML</td>
<td>Hypertext Markup Language</td>
</tr>
</tbody>
</table>
User requirements definition
Having approached one of the potential customers, I have compiled the following list of user requirements:
- Secure registration and log in facility
- Profile creation and editing function
- Profile integrated user type detection
- HTML/CSS codebase, tutorials and projects
- Forum
- Interactive FAQ section
- Difficulty categories
Interface requirements
To the user, HTML-U will be completely GUI based. However, at the back end, changes and updates will be made through HTML and CSS, as well as PHP interacting with a MySQL database.
Description & Priority
User interaction with GUI interface: Priority 1
Database interaction with PHP and profiles: Priority 2
Database interaction is a 2nd priority as the database will occasionally need maintenance and will have to be taken down without affecting the general workings of the site.
Functional requirements
Requirement 1 <System Display>
Description & Priority
The system should display all features correctly. This requirement is necessary for the system to function acceptably and is the highest-ranked functional requirement.
**Requirement Activation**
The user should be able to see the website exactly the way it was intended. All items and images should be rendered correctly and in place.
**Technical issues**
Multiple CSS designs will have to be created to accommodate all users, as well as accounting for users who don’t display images or have disabilities.
**Risks**
The main risk for this requirement is that not all users will be able to display the site correctly on their particular browser or system.
The most likely solution to this risk will be initial extensive testing, and creation of multiple CSS files to accommodate the most common systems and set ups. If a user is still unable to display the site correctly, they can log feedback with us through a feedback email system.
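A minimal sketch of how alternative stylesheets could accommodate the most common systems and set-ups described above (the class names are invented for illustration):

```css
/* Fallback for small screens and older browsers: collapse to a
   single column and hide the decorative toolbar. */
@media screen and (max-width: 800px) {
  .lesson-panel  { width: 100%; float: none; }
  .tips-toolbar  { display: none; }
}

/* Users who do not display images still see a text heading. */
.logo { background-image: url("logo.png"); }
.logo .fallback-text { font-size: 1.5em; }
```

Serving such rules via media queries avoids having to detect each browser individually.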
**Functional Requirements**
*Use Case 1 .........<See appendix 1.0>*
**Requirement 1 <Log in/sign up system>**
**Description & Priority**
This is a secondary requirement. Although a log in system and user profiles are integral to the marketing of this system, the site can function without them, and it is therefore made secondary.
**Requirement Activation**
The user will choose to sign up to the site. They will enter all their details (username, password, etc.), which will be logged in a database. Once their details are logged, they can immediately log into their profile.
**Technical issues**
To stay logged in the user must have cookies enabled. To ensure that the user does so, a reminder will be written into the log in script.
**Risks**
Should the database go down due to a hosting error, or hacking/corruption, the log in system will be unusable. To prevent this, secure PHP is used and backups of the database will be made regularly.
Dependencies with other requirements
This log in system depends on the System Display requirement, and also depends on the database. If the database is down or corrupted the user will NOT be able to log in. (see Risks 2.2.2.4)
Functional Requirements
Use Case 2 ..........<See appendix 2>
Documentation requirements
There should be two types of documentation for HTML-U.
- User Docs: The user will be provided with an FAQ in the form of a chatbot, and also a downloadable text based FAQ if they so need.
- Admin Docs: As the system, and the forum grows, moderators and administrators will have to be added, and therefore documentation of all aspects of the code will have to be created. All code must be thoroughly commented and key lines will be highlighted in separate documentation.
Requirement 1 <User FAQ>
Description & Priority
User documentation is not of great priority; it is a feature that can be added later in the project.
Data requirements
Data will be stored in two databases: one for the user profiles, and one for all the data posted on the Forum.
Requirement 1 <User database>
Description & Priority
This data requirement is of high priority.
Non-Functional Requirements
This section specifies the non-functional attributes required by the system.
Availability requirement
The database must be available to the administrators at all times.
Recover requirement
If the database gets hacked or is corrupted, backups will be held in secure locations and servers.
Security requirement
All PHP will be securely coded, and user passwords and data will be hashed with MD5 and stored securely.
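A sketch of the password hashing this requirement describes, shown in Python for brevity rather than the project's PHP (the function names are my own):

```python
import hashlib
import hmac

def md5_digest(password: str) -> str:
    """Hex MD5 digest of a password, as the requirement describes.

    Note: MD5 is a hash, not encryption, and is considered weak for
    password storage today; a salted, slow hash (e.g. bcrypt, as
    produced by PHP's password_hash()) would be preferable.
    """
    return hashlib.md5(password.encode("utf-8")).hexdigest()

def verify(password: str, stored_digest: str) -> bool:
    # Compare in constant time so the check leaks no timing information.
    return hmac.compare_digest(md5_digest(password), stored_digest)
```

Storing only the digest means a database leak does not directly expose plaintext passwords.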
Maintainability requirement
The database will be maintained on a regular basis. Pruning, security checks and backups will be performed weekly.
System models
This section presents a more detailed description of the system model, for example a DFD, ERD, or use case model.
System evolution
The system will have to evolve in two main ways.
- As HTML5 and its later versions become more widespread, along with new releases and revisions of CSS and accompanying code, the site's content will have to be updated.
- As new hardware and software are released and come into widespread use (i.e. new browsers, graphics cards, screen resolutions), the site will have to be redesigned and changed.
Appendices
Use case 1
Use case
Sign up process
Scope
The scope of this use case is to allow a user to sign up to the web application
Description
This use case describes the process of signing up to HTML-U, and the database interactions that follow.
Use Case Diagram
See Appendix 1.
Flow Description
Precondition
The user accesses the application
Activation
This use case starts when a User chooses to select the ‘Sign up’ link on the application
Main flow
1. The system displays a sign up form.
2. The User enters their details, and verifies password.
3. The system logs the user’s details in a database.
Alternate flow
A1: <User incorrectly verifies password>
1. The User enters their password incorrectly into the password verification form.
2. The System notifies the User of their error, and returns them to the sign up page.
3. The use case continues at position 1 of the main flow
Termination
The system encrypts and logs the user’s details.
Post condition
The user is presented with a ‘log in’ screen.
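The main and alternate flows of the sign-up use case above can be sketched as follows (a Python sketch rather than the project's PHP; a dict stands in for the MySQL users table):

```python
import hashlib

def sign_up(username: str, password: str, confirm: str, db: dict):
    """Sign-up main flow: verify the password, then log the user's
    (hashed) details. Returns an error message for the alternate
    flow, or None on success."""
    if password != confirm:
        # A1: incorrect verification -> back to the sign-up form.
        return "Passwords do not match"
    if username in db:
        return "Username already taken"
    # Termination: the details are hashed and logged.
    db[username] = hashlib.md5(password.encode("utf-8")).hexdigest()
    return None
```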
Use case 2
Use case
Log in
Scope
The scope of this use case is to allow a user to log in to their profile
Description
This use case describes the process of logging in to HTML-U, and the database interactions that follow.
Use Case Diagram
See Appendix 2.
Flow Description
Precondition
The user accesses the application
Activation
This use case starts when a User chooses to select the ‘Log in’ link on the application
Main flow
1. The system displays a log in page
2. The user enters their username and password
3. The System verifies these details from the database.
**Alternate flow**
A1: <User incorrectly enters their information>
1. The User enters their details incorrectly.
2. The System notifies the User of their error, and returns them to the log in page.
3. The use case continues at position 1 of the main flow
**Termination**
The System displays the user’s profile.
**Post condition**
The user is presented with their profile screen.
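The log-in flow above can be sketched as follows (Python used for brevity instead of the project's PHP; `db` maps usernames to stored MD5 digests):

```python
import hashlib

def log_in(username: str, password: str, db: dict) -> bool:
    """Log-in main flow: verify the entered details against the
    stored digest. False triggers the alternate flow (notify the
    user and return to the log-in page)."""
    stored = db.get(username)
    if stored is None:
        return False
    entered = hashlib.md5(password.encode("utf-8")).hexdigest()
    return entered == stored
```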
**Use case diagrams**
Appendix 1
Use case
Sign up process
```
User --> Register --> database
```
Appendix 2
Use case
Log in process
7.3 Monthly log book
Software Project:
HTML-U
Reflective Journal Part 2.
Scope: October 29th – November 26th
Milestone attempted: Working Log in system.
Working iFrame system
Status: Complete
Introduction
I spent the last few weeks perfecting the log in system to include sessions, finishing the log out system, and designing how user profiles would work for individuals.
My first step was to completely restart my log in script. My previous log in script had too many errors to bug test, and so I began again, working through each line of code and making sure it was working before moving on to the next.
When my log in system was finished I had some students ‘alpha’ test it for bugs. This yielded one massive error which I had overlooked, in that logging out and then pressing the back button on browsers would return you to your profile. Not only this, but any further navigation through the user profile would result in an error message from the PHP.
After several hours trawling the internet and my code to get clues to why this error was happening, I stripped back the HTML from the PHP files, and found that it was the HTML causing a problem. I then realised that the HTML and PHP must be completely separate in the code for it to work. I spent several hours trying to figure out a way of having both HTML and PHP displaying on a page, and then came up with the solution: iFrames. iFrames are perfect, as they are a way of displaying a page within a page. Using Div’s with iFrames in them, and some complex CSS, I was able to quickly implement this fix and now the log in system works perfectly.
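The fix described above might look roughly like this (file names are illustrative): the page itself stays plain HTML, and the PHP output is displayed through an iframe wrapped in a div, so the two never share one file.

```html
<!-- profile.html: static shell; the PHP session code lives
     entirely in profile.php, shown through the iframe. -->
<div class="profile-frame">
  <iframe src="profile.php" title="User profile"
          width="100%" height="600"></iframe>
</div>
```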
I finished this milestone before schedule and so was able to move on to looking at extras, like profile customisation etc.
**Conclusion**
In conclusion, I think that there was one main area that hindered my success in achieving this milestone; however, I had already implemented the ‘alpha testing’ system, so this error was quickly found and fixed.
<table>
<thead>
<tr>
<th>Error No.</th>
<th>Description</th>
<th>Fix</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Error1:</strong></td>
<td>PHP and HTML clash</td>
<td>Keep PHP and HTML separate where possible.</td>
</tr>
<tr>
<td></td>
<td></td>
<td><strong>NB:</strong> Alpha testing greatly increased the speed with which this problem was found. Incorporate into testing protocol.</td>
</tr>
</tbody>
</table>
**Software Project:**
**HTML-U**
**Reflective Journal**
**Part 7**
*Sam Cogan*
*X07702213*
Reflective Journal Part 7.
Scope: March 25th – April 29th
Milestone attempted: Forum Implementation
Facebook/Twitter integration
Customer feedback
Status: Complete
--------------------------------------------------------------------------------------------
Overview
These last few weeks have been busy, as Facebook have again changed their policies on apps, pages and like buttons. However, I was able to recode the social networking features quite easily, and integrate them nicely with the look of the app. I also reinstalled and recoded the look and feel of the forum, as my files became corrupted and my backups oddly did not work.
My last task was to meet up with my customer, Merill Goussot (who is a blogger, web developer and works with online advertising in his day-to-day job). He seemed quite pleased with the progress, but suggested that we strip back the idea to target only beginners in HTML and CSS. He expressed an interest in helping new developers not only learn code, but have a good understanding of what it is to develop websites for clients, and the best practice of doing so.
Conclusion
I feel the meeting with Merill was put off for too long, due to time constraints on both sides. However, this could have been avoided by using Skype or free meeting software to discuss and showcase the project. This is something I will improve on in future, and I hope it will lead to a much improved final product.
### 7.4 Other material Used
A survey was carried out to see what users thought of the site, as shown below:
#### HTML-U Usability
<table>
<thead>
<tr>
<th></th>
<th>Great</th>
<th>Good</th>
<th>Ok</th>
<th>Bad</th>
</tr>
</thead>
<tbody>
<tr>
<td>Design</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Navigation</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Information given</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
**Do you think that you would find this site useful?**
- [ ] Yes
- [ ] No
- [ ] Unsure
**Do you like the User profile feature? Why/Why not?**
- [ ] Yes
- [ ] No
- [ ] Haven't used it
**Why?**
_With regards to a site such as HTML-U, how important are the following features to you?_
<table>
<thead>
<tr>
<th>Feature</th>
<th>Very important</th>
<th>Important</th>
<th>Not very important</th>
<th>Unimportant</th>
</tr>
</thead>
<tbody>
<tr>
<td>User profiles</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Forum</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Code segments and explanation</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Projects</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Progress tracking</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Examples</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Social networking</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
**What would you change about the HTML-U site?**
The Impact of Disjunction on Query Answering Under Guarded-Based Existential Rules
Published In:
Informal Proceedings of the 26th International Workshop on Description Logics, Ulm, Germany, July 23 - 26, 2013
General rights
Copyright for the publications made accessible via the Edinburgh Research Explorer is retained by the author(s) and / or other copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights.
Take down policy
The University of Edinburgh has made every reasonable effort to ensure that Edinburgh Research Explorer content complies with UK legislation. If you believe that the public display of this file breaches copyright please contact openaccess@ed.ac.uk providing details, and we will remove access to the work immediately and investigate your claim.
The Impact of Disjunction on Query Answering Under Guarded-based Existential Rules
Pierre Bourhis, Michael Morak, and Andreas Pieris
Department of Computer Science, University of Oxford, UK
firstname.lastname@cs.ox.ac.uk
Abstract. We give the complete picture of the complexity of conjunctive query answering under (weakly-)(frontier-)guarded disjunctive existential rules, i.e., existential rules extended with disjunction, and their main subclasses, linear rules and inclusion dependencies.
1 Introduction
Rule-based languages have a prominent presence in the areas of AI and databases. A noticeable formalism, originally intended for expressing complex queries over relational databases, is Datalog, i.e., function-free first-order Horn logic. Strong interest in enhancing Datalog with existential quantification in rule-heads emerged in recent years, see, e.g., [1–5]. This interest stems from the inability of plain Datalog to infer the existence of new objects which are not already in the extensional database [6]. The obtained rules are known under a variety of names such as existential rules, tuple-generating dependencies (TGDs), and Datalog\(^\pm\) rules. Unfortunately, the addition of existential quantification as above easily leads to undecidability of the main reasoning tasks, and in particular of conjunctive query answering [7]. Therefore, several concrete languages which guarantee decidability have been proposed, see, e.g., [1, 3, 5, 8–11]. Nevertheless, TGDs are not powerful enough for nondeterministic reasoning. For example, a simple and natural statement like “a child is a boy or a girl” cannot be expressed using TGDs; however, it can be easily expressed using the disjunctive rule \(\text{child}(X) \rightarrow \text{boy}(X) \lor \text{girl}(X)\).
Obviously, to be able to represent such kind of disjunctive knowledge, we need to enrich the existing classes of TGDs with disjunction in the head, or, equivalently, to consider disjunctive TGDs (DTGDs) [12]. Such an extension of plain Datalog (a.k.a. full TGDs), called disjunctive Datalog, has been studied in [13]. More recently, special cases of the problem of query answering under guarded-based DTGDs have been investigated [14, 15]. However, the picture of the computational complexity of the problem is still foggy, and there are several challenging issues to be tackled.
Our main goal is to better understand the impact of disjunction on query answering under the main guarded-based classes of TGDs, and how existing complexity results for TGDs are affected by adding disjunction. Notice that guardedness is a well-accepted paradigm, giving rise to robust languages that capture important lightweight description logics such as DL-Lite [16] and \(\mathcal{EL}\) [17]. In the present work, we concentrate on the following fundamental questions: what is the exact complexity of conjunctive query
(CQ) answering under (weakly-)(frontier-)guarded DTGDs [1, 9], and their main subclasses, i.e., linear DTGDs and disjunctive inclusion dependencies (DIDs) [5]? How is it affected if we consider a signature of bounded arity, or a fixed set of dependencies? Moreover, how is it affected if we pose the queries in a more expressive query language, in particular using unions of CQs (UCQs)? As we shall see, the addition of disjunction has a significant effect on the complexity of CQ answering. We show an unexpectedly strong lower bound, which is critical towards the closing of the above issues.
Our contributions can be summarized as follows:
1. We show that CQ answering for (weakly-)(frontier-) guarded DTGDs is 2ExpTime-complete in the combined complexity; this also holds for UCQs. Regarding the data complexity, we show that under frontier-guarded DTGDs it is coNP-complete, while for weakly-frontier-guarded it is ExpTime-complete. The upper bounds are obtained by exploiting results on expressive languages such as guarded negation first-order logic [18], while the lower bounds are inherited from existing results.
2. We show that CQ answering under a fixed set of DIDs is 2ExpTime-hard, even if restricted to predicates of arity at most three. In case of UCQs, the above result holds even for unary and binary predicates. These strong lower bounds are established by a reduction from an appropriate variant of the validity problem of CQs w.r.t. a Büchi automaton [19]. Together with the 2ExpTime upper bound discussed above, this gives us the complete picture for the complexity of our problem.
3. We investigate a natural fragment of DIDs with lower combined complexity. In fact, we consider frontier-one dependencies (i.e., only one variable is propagated from the body to the head), and we show that the combined complexity decreases to ExpTime-complete.
4. We show that frontier-guarded DTGDs, combined with negative constraints, are strictly more expressive than DL-LiteH_bool [20], one of the most expressive languages of the DL-Lite family. This allows us to show that query answering under DL-LiteH_bool is in 2ExpTime in combined complexity. The matching lower bound holds since our complexity results on DIDs imply that, for every description logic equipped with limited existential quantification, role inverse and union, query answering is 2ExpTime-hard.
2 Preliminaries
Technical Definitions. We define the following pairwise disjoint (infinite) sets: a set \( \Gamma \) of constants, a set \( \Gamma_N \) of labeled nulls, and a set \( \Gamma_V \) of regular variables. We denote by \( X \) sequences (or sets) of variables \( X_1, \ldots, X_k \). A relational schema \( R \) is a set of relational symbols (or predicates). A position \( r[i] \) in \( R \) is identified by \( r \in R \) and its \( i \)-th argument. A term \( t \) is a constant, null, or variable. An atom has the form \( r(t_1, \ldots, t_n) \), where \( r \) is a relation, and \( t_1, \ldots, t_n \) are terms. For an atom \( a \), we denote \( \text{terms}(a) \) and \( \text{var}(a) \) the set of its terms and the set of its variables, respectively; these extend to sets of atoms. Conjunctions and disjunctions of atoms are often identified with the
sets of their atoms. An instance $I$ for a schema $R$ is a (possibly infinite) set of atoms $r(t)$, where $r \in R$ and $t$ is a tuple of constants and nulls. A database $D$ is a finite instance such that $\text{terms}(D) \subseteq \Gamma$. We assume the reader is familiar with (unions of) conjunctive queries (UCQs). The answer to a UCQ $q$ over an instance $I$ is denoted $q(I)$. A Boolean UCQ $q$ has a positive answer over $I$, denoted $I \models q$, if $\emptyset \in q(I)$.
**Disjunctive Tuple-generating Dependencies.** A disjunctive tuple-generating dependency (DTGD) $\sigma$ over a schema $R$ is a first-order formula of the form $\forall X \varphi(X) \rightarrow \bigvee_{i=1}^{n} \exists Y_i \, \psi_i(X, Y_i)$, where $n \geq 1$, $X \cup Y_1 \cup \ldots \cup Y_n \subseteq \Gamma_V$, and $\varphi, \psi_1, \ldots, \psi_n$ are conjunctions of atoms over $R$; $\varphi$ is the body of $\sigma$, denoted $\text{body}(\sigma)$, while $\bigvee_{i=1}^{n} \psi_i$ is the head of $\sigma$, denoted $\text{head}(\sigma)$. If $n = 1$, then $\sigma$ is called a tuple-generating dependency (TGD). For brevity, we will omit the universal quantifiers in front of DTGDs. An instance $I$ satisfies $\sigma$, written $I \models \sigma$, if whenever there exists a homomorphism $h$ such that $h(\varphi(X)) \subseteq I$, then there exist $i \in \{1, \ldots, n\}$ and $h' \supseteq h$ such that $h'(\psi_i(X, Y_i)) \subseteq I$; $I$ satisfies a set $\Sigma$ of DTGDs, denoted $I \models \Sigma$, if $I$ satisfies each $\sigma \in \Sigma$.
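As a concrete instance of this definition, the child rule from the introduction is a DTGD with $n = 2$ and empty $Y_1, Y_2$:

```latex
\sigma \;=\; \forall X \, \mathit{child}(X) \;\rightarrow\; \mathit{boy}(X) \,\lor\, \mathit{girl}(X).
```

The instance $I_1 = \{\mathit{child}(a), \mathit{girl}(a)\}$ satisfies $\sigma$ (choose $i = 2$ and $h' = h$), whereas $I_2 = \{\mathit{child}(a)\}$ does not.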
A DTGD $\sigma$ is guarded if there exists an atom $a \in \text{body}(\sigma)$, called guard, which contains all the variables occurring in $\text{body}(\sigma)$. Weakly-guarded DTGDs extend guarded DTGDs by requiring the guard to contain only those body-variables that appear at affected positions, i.e., positions at which a null value may appear during the disjunctive chase (see below); for the formal definition see [9]. The concept of frontier can be used to generalize (weakly-)guarded DTGDs. The frontier of a DTGD $\sigma$ is the set of variables $\text{var}(\text{body}(\sigma)) \cap \text{var}(\text{head}(\sigma))$. $\sigma$ is frontier-guarded if there exists an atom $a \in \text{body}(\sigma)$ which contains all the variables of its frontier. The class of weakly-frontier-guarded DTGDs is defined analogously. A DTGD $\sigma$ is linear if it has only one body-atom. Disjunctive inclusion dependencies (DIDs) are obtained by restricting linear DTGDs as follows: the head is a disjunction of atoms (not of conjunctions), and there are no repeated variables in the body or in the head.
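The syntactic classes above (except the "weak" variants, whose affected positions depend on the whole rule set) can be checked mechanically. The following Python sketch, using the same illustrative (predicate, arguments) encoding, is our own reading of the definitions:

```python
def variables(atoms):
    """Variables (uppercase-initial strings) occurring in a list of atoms."""
    return {t for _, args in atoms for t in args if t[:1].isupper()}

def is_guarded(body, head_disjuncts):
    """Some body atom (the guard) contains every body variable."""
    bv = variables(body)
    return any(bv <= set(args) for _, args in body)

def frontier(body, head_disjuncts):
    """Variables shared between the body and the head."""
    head_atoms = [a for psi in head_disjuncts for a in psi]
    return variables(body) & variables(head_atoms)

def is_frontier_guarded(body, head_disjuncts):
    """Some body atom contains the whole frontier."""
    fr = frontier(body, head_disjuncts)
    return any(fr <= set(args) for _, args in body)

def is_linear(body, head_disjuncts):
    """Exactly one body atom."""
    return len(body) == 1

# r(X,Y), s(Y,Z) -> exists W . p(X,W): the frontier {X} is covered by
# r(X,Y), but no body atom contains all of {X,Y,Z}.
body = [("r", ("X", "Y")), ("s", ("Y", "Z"))]
head = [[("p", ("X", "W"))]]
print(is_guarded(body, head))           # False
print(is_frontier_guarded(body, head))  # True
```

The example illustrates that frontier-guardedness is a strict relaxation of guardedness: only the frontier variables, not all body variables, need to share an atom.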
**Query Answering.** The set of models of $D$ and $\Sigma$, denoted $\text{mods}(D, \Sigma)$, is the set of instances $\{I \mid I \supseteq D \text{ and } I \models \Sigma\}$. The answer to a CQ $q$ w.r.t. $D$ and $\Sigma$, denoted $\text{ans}(q, D, \Sigma)$, is the set of tuples of constants $\bigcap_{I \in \text{mods}(D, \Sigma)} \{t \mid t \in q(I)\}$. The answer to a Boolean CQ $q$ w.r.t. $D$ and $\Sigma$ is positive, denoted $D \cup \Sigma \models q$, if $\emptyset \in \text{ans}(q, D, \Sigma)$. The answer to a UCQ $q$ w.r.t. $D$ and $\Sigma$ is defined analogously. The problem, called CQAns, tackled in this work is defined as follows: given a CQ $q$, a database $D$, a set $\Sigma$ of DTGDs, and a tuple of constants $t$, decide whether $t \in \text{ans}(q, D, \Sigma)$. The problem UCQAns is defined analogously. Notice that (U)CQAns for arbitrary queries can be easily reduced to (U)CQAns for Boolean queries, simply by substituting the given tuple $t$ into the query; thus, we focus on Boolean queries. The data complexity of the above problems is calculated taking only the database as input. For the combined complexity, the query and the set of DTGDs count as input as well.
**Disjunctive Chase.** We employ the disjunctive chase introduced in [12]. Consider an instance $I$, and a DTGD $\sigma : \varphi(X) \rightarrow \bigvee_{i=1}^{n} \exists Y_i\, \psi_i(X, Y_i)$. We say that $\sigma$ is applicable to $I$ if there exists a homomorphism $h$ such that $h(\varphi(X)) \subseteq I$, and the result of applying $\sigma$ to $I$ with $h$ is the set $\{I_1, \ldots, I_n\}$, where $I_i = I \cup h_i(\psi_i(X, Y_i))$, for each $i \in \{1, \ldots, n\}$, and $h_i \supseteq h$ maps each variable $Y \in Y_i$ to a "fresh" null not occurring in $I$. For such an application of a DTGD, which defines a single DTGD chase step, we write $I(\sigma, h)\{I_1, \ldots, I_n\}$. A disjunctive chase tree of a database $D$ and
a set \( \Sigma \) of DTGDs is a (possibly infinite) tree such that the root is \( D \), and for every
node \( I \), assuming that \( \{I_1, \ldots, I_n\} \) are the children of \( I \), there exists \( \sigma \in \Sigma \) and
a homomorphism \( h \) such that \( I(\sigma, h) \{I_1, \ldots, I_n\} \). The disjunctive chase algorithm
for \( D \) and \( \Sigma \) consists of an exhaustive application of DTGD chase steps in a fair fashion,
which leads to a disjunctive chase tree \( T \) of \( D \) and \( \Sigma \); we denote by \( \text{chase}(D, \Sigma) \) the
set \( \{I | I \text{ is a leaf of } T\} \). Note that each leaf of \( T \) is well-defined as the least fixpoint
of a monotonic operator. By construction, each instance of \( \text{chase}(D, \Sigma) \) is a model of \( D \)
and \( \Sigma \). Interestingly, \( \text{chase}(D, \Sigma) \) is a universal set model of \( D \) and \( \Sigma \), i.e., for each
\( I \in \text{mods}(D, \Sigma) \), there exists \( J \in \text{chase}(D, \Sigma) \) and a homomorphism \( h_J \) such that
\( h_J(J) \subseteq I \) [21]. This implies that, given a UCQ \( Q \), \( D \cup \Sigma \models Q \) iff \( I \models Q \) for each \( I \in \text{chase}(D, \Sigma) \).
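A single DTGD chase step can be sketched as follows (an illustrative Python fragment using the (predicate, arguments) encoding; the fresh-null naming scheme is an arbitrary choice): each head disjunct $\psi_i$ yields one child instance, with the existential variables mapped to fresh labeled nulls.

```python
import itertools

_fresh = itertools.count()  # global counter for fresh labeled nulls

def chase_step(instance, body_hom, head_disjuncts):
    """One disjunctive chase step I(sigma, h){I_1, ..., I_n}.

    `body_hom` is a homomorphism h with h(body) in `instance`; each head
    disjunct psi_i yields the child I_i = I + h_i(psi_i), where h_i extends
    h by mapping each existential variable to a fresh labeled null."""
    children = []
    for psi in head_disjuncts:
        h = dict(body_hom)
        child = set(instance)
        for pred, args in psi:
            mapped = []
            for t in args:
                if t[:1].isupper() and t not in h:   # existential variable
                    h[t] = "_null%d" % next(_fresh)  # fresh labeled null
                mapped.append(h.get(t, t))
            child.add((pred, tuple(mapped)))
        children.append(child)
    return children

# sigma : s(X) -> exists Y . chain(X,Y), applied with h = {X: "c"}
kids = chase_step({("s", ("c",))}, {"X": "c"}, [[("chain", ("X", "Y"))]])
print(kids)
```

Applying the fair, exhaustive strategy described above to every applicable pair $(\sigma, h)$, and branching on the children, grows exactly the disjunctive chase tree.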
**Guarded Negation FO.** Guarded negation first-order logic (GNFO) restricts first-order logic by requiring that every occurrence of negation is of the form $\alpha \land \neg \varphi$, where $\alpha$ is an atom containing all the free variables of $\varphi$ [18]. The formulas of GNFO are generated by the recursive definition $\varphi ::= r(t_1, \ldots, t_n) \mid t_1 = t_2 \mid \varphi_1 \land \varphi_2 \mid \varphi_1 \lor \varphi_2 \mid \exists X\, \varphi \mid \alpha \land \neg \varphi$, where $\alpha$ is an atom containing all the free variables of $\varphi$. GNFO is strictly more expressive than guarded first-order logic (GFO) [22].
3 Known Results on Guarded-based DTGDs
We give an overview of known results, and we survey the best existing lower bounds
that can be immediately inherited. Our discussion is outlined in Table 1, where each row
corresponds to a fragment of DTGDs (which is decoded by substituting L for linear, G
for guarded, W for weakly and F for frontier), each column corresponds to a complexity
variant, and known completeness results are shown in boldface.
**Overview.** To the best of our knowledge, the only work done on query answering under guarded-based disjunctive DTGDs can be found in [14] and [15]. The first paper investigates the data complexity of query answering under (weakly-)guarded and linear DTGDs. For weakly-guarded DTGDs it is ExpTime-complete, while for guarded and linear DTGDs it is coNP-complete. Moreover, the case of atomic queries has been considered, and it was shown to be in LogSpace. Notice that the above coNP-hardness is implicit in [23], where it was shown that query answering under a TBox with the single axiom $A \sqsubseteq B \sqcup C$, which is equivalent to $A(X) \rightarrow B(X) \lor C(X)$, is coNP-hard. The second paper studies both the combined and the data complexity of atomic query answering under guarded and linear DTGDs. For guarded DTGDs the combined complexity is 2ExpTime-complete, while the data complexity is coNP-complete (which agrees with the analogous result above). For linear DTGDs the combined complexity is ExpTime-complete, while the data complexity is in $\text{AC}_0$ (improving the LogSpace upper bound mentioned above).
<table>
<thead>
<tr>
<th></th>
<th>Combined complexity</th>
<th>Bounded arity</th>
<th>Fixed rules</th>
<th>Data complexity</th>
</tr>
</thead>
<tbody>
<tr>
<td>L / DID</td>
<td>ExpTime-hard</td>
<td>$\Pi^p_2$-hard</td>
<td>$\Pi^p_2$-hard</td>
<td><b>coNP-complete</b></td>
</tr>
<tr>
<td>G</td>
<td>2ExpTime-hard</td>
<td>ExpTime-hard</td>
<td>$\Pi^p_2$-hard</td>
<td><b>coNP-complete</b></td>
</tr>
<tr>
<td>W-G</td>
<td>2ExpTime-hard</td>
<td>ExpTime-hard</td>
<td>ExpTime-hard</td>
<td><b>ExpTime-complete</b></td>
</tr>
<tr>
<td>F-G</td>
<td>2ExpTime-hard</td>
<td>2ExpTime-hard</td>
<td>$\Pi^p_2$-hard</td>
<td>coNP-hard</td>
</tr>
<tr>
<td>W-F-G</td>
<td>2ExpTime-hard</td>
<td>2ExpTime-hard</td>
<td>ExpTime-hard</td>
<td>ExpTime-hard</td>
</tr>
</tbody>
</table>
Table 1. Known complexity results for (U)CQAns.
Notice that the $\text{AC}_0$ upper bound was obtained by showing that the problem is first-order rewritable.
**Inherited Lower Bounds.** The best existing lower bounds for our problem are the following: (i) 2ExpTime in combined complexity, and also ExpTime in the case of bounded arity, for guarded DTGDs [9]; (ii) 2ExpTime for frontier-guarded DTGDs in the case of bounded arity [2, 24]; (iii) ExpTime for DIDs in combined complexity; this holds since the rules employed in [15] to prove an analogous result for linear DTGDs are DIDs; and (iv) $\Pi^p_2$ for fixed sets of DIDs; this follows from a result in [25] which states that query answering under fixed universal GFO sentences is $\Pi^p_2$-hard. Notice that in the proof of this result a sentence of the form $\forall X \forall Y \forall Z\, r(X, Y, Z) \rightarrow s(X, Y) \oplus s(X, Z)$ is used; however, the result holds even if we replace $\oplus$ with $\lor$, since the minimal models of $a \lor b$ coincide with those of $a \oplus b$.
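The final remark on minimal models can be verified by direct enumeration; the following small Python check (ours, purely illustrative) compares the minimal models of $a \lor b$ and $a \oplus b$ over the four interpretations of two propositional atoms:

```python
from itertools import product

def models(formula):
    """All assignments (a, b) over {False, True} satisfying `formula`."""
    return {(a, b) for a, b in product([False, True], repeat=2) if formula(a, b)}

def minimal(ms):
    """Models with no strictly smaller model (pointwise order) in `ms`."""
    def leq(m1, m2):
        return all(x <= y for x, y in zip(m1, m2))
    return {m for m in ms if not any(m2 != m and leq(m2, m) for m2 in ms)}

or_models = models(lambda a, b: a or b)    # {a}, {b}, {a,b}
xor_models = models(lambda a, b: a != b)   # {a}, {b}
print(minimal(or_models) == minimal(xor_models))  # True
```

Both formulas have exactly the minimal models $\{a\}$ and $\{b\}$; the model $\{a, b\}$ of the disjunction is not minimal, which is why the hardness argument carries over.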
4 The Complexity of Query Answering
Apart from the three known completeness results which are shown in boldface in Table 1, for all the other cases the exact complexity is unknown. We tackle these open problems, and we present a complete complexity picture.
4.1 Combined Complexity
**Upper Bound.** First, we establish an upper bound for query answering under the most expressive class that we treat in this paper, i.e., weakly-frontier-guarded DTGDs, by exploiting a result on satisfiability of GNFO.
**Theorem 1.** UCQAns under weakly-frontier-guarded DTGDs is in 2ExpTime in combined complexity.
**Proof (sketch).** We provide a reduction to satisfiability of GNFO, which is in 2ExpTime [18]. First, we polynomially reduce our problem to UCQAns under frontier-guarded DTGDs by exploiting the reduction from weakly-frontier-guarded TGDs to frontier-guarded TGDs proposed in [2]. Thus, given a UCQ $Q$, a database $D$, and a set $\Sigma$ of weakly-frontier-guarded DTGDs, there exists a polynomial translation $\tau$ such that $D \cup \Sigma \models Q$ iff $\tau(D) \cup \tau(\Sigma) \models \tau(Q)$, where $\tau(\Sigma)$ is a set of frontier-guarded DTGDs. It is easy to see that $\tau(\Sigma)$ can be equivalently rewritten as a GNFO formula [26].
More precisely, a frontier-guarded DTGD $\forall X\, \varphi(X) \rightarrow \bigvee_{i=1}^{n} \exists Y_i\, \psi_i(X, Y_i)$ is equivalent to $\neg (\exists X\, \varphi(X) \land \neg \bigvee_{i=1}^{n} \exists Y_i\, \psi_i(X, Y_i))$, which falls in GNFO since all the free variables of the head appear in the frontier-guard of $\varphi(X)$. Moreover, $\neg \tau(Q)$ trivially falls in GNFO. Therefore, $\tau(D) \land \tau(\Sigma) \land \neg \tau(Q)$ is a GNFO formula, and the claim follows since $\tau(D) \cup \tau(\Sigma) \models \tau(Q)$ iff $\tau(D) \land \tau(\Sigma) \land \neg \tau(Q)$ is unsatisfiable.
Notice that an alternative way to obtain the above result is to reduce our problem to query answering under GFO, which is also in 2ExpTime [25].
**Lower Bound.** Recall that CQAns for guarded TGDs is 2ExpTime-hard [9] in combined complexity, while for frontier-guarded TGDs it remains 2ExpTime-hard even in the case of bounded arity [2]. Although these results, together with Theorem 1, close the combined complexity for (weakly-)(frontier-)guarded DTGDs, as well as the case of bounded arity for (weakly-)frontier-guarded DTGDs, they are not strong enough to complete the complexity picture of our problem. In what follows we present a series of strong 2ExpTime lower bounds for query answering. We assume the reader is familiar with Büchi automata and infinite trees (see, e.g., [27]).
**Theorem 2.** CQAns under DIDs is 2ExpTime-hard, even for predicates of arity at most two.
Before proving the above theorem, we first introduce the following intermediate result. Given a finite set of labels $A$, we define a schema $S = \{child, parentorchild\} \cup A$. These predicates are used to represent binary trees, with the obvious semantics. Given a CQ $q$ over $S$ and a Büchi tree automaton $T$ over binary trees, where all states are accepting and have at least one successor state, we say that $q$ is valid w.r.t. $T$ iff every (possibly infinite) binary tree accepted by $T$ entails $q$. We claim that deciding this problem is 2ExpTime-hard, which can be shown by adapting the 2ExpTime-hardness proof of the same problem for finite trees over the schema $S' = \{child, descendant\} \cup A$ in [19].
**Proof (sketch).** The proof is by reduction from the validity problem of CQs $q$ over $S$ w.r.t. Büchi tree automata $T$ over binary trees, as defined above. We construct a database $D$, a set $\Sigma$ of DIDs, and a query $q' = q$ over schema $R$, such that $D \cup \Sigma \models q'$ iff $q$ is valid w.r.t. $T$. Let $S_T$ be the set of states of $T$. The schema $R$ is as follows: It includes $S$, and for each pair $(s, a)$, where $s \in S_T$ and $a \in A$, there are unary predicates $s$ and $p_{s,a}$ in $R$. Moreover, for each transition $(s, a) \rightarrow s_1, s_2$ in the transition function $\delta$ of $T$, we have binary predicates $child^i[(s, a), s_1, s_2]$, for $i \in \{1, 2\}$, in $R$. Intuitively, $child^i[(s, a), s_1, s_2](X, Y)$ says that $Y$ is the $i$-th child of $X$, where $X$ is in state $s$ and labelled $a$, and $Y$ is in state $s_i$. We now define $\Sigma$ as follows:
- For each $s \in S_T$, $s(X) \rightarrow \bigvee_{a \in A \text{ and } \delta(s,a) \neq \emptyset} p_{s,a}(X)$
- For each $(s, a) \in S_T \times A$, $p_{s,a}(X) \rightarrow a(X)$
- For all transitions $(s, a) \rightarrow s_1, s_2$ and each $i \in \{1, 2\}$:
- $p_{s,a}(X) \rightarrow \exists Y child^i[(s, a), s_1, s_2](X, Y)$
- $child^i[(s, a), s_1, s_2](X, Y) \rightarrow s_i(Y)$
- $child^i[(s, a), s_1, s_2](X, Y) \rightarrow child(X, Y)$
- $child(X, Y) \rightarrow parentorchild(X, Y)$
- $child(X, Y) \rightarrow parentorchild(Y, X)$
The database $D$ contains the single atom $s_I(c)$, where $s_I$ is the initial state of $T$. For each instance $I \in \text{chase}(D, \Sigma)$, restricted to the $child$- and label-predicates only, by construction and due to the fact that each state of $T$ has at least one successor, $I$ represents an infinite binary tree accepted by $T$. Moreover, the $parentorchild$-predicate in $I$ coincides with the parent-or-child relation of the represented tree, and therefore we get that $D \cup \Sigma \models q'$ only if $q$ is valid w.r.t. $T$. The converse direction follows from the fact that all states are accepting. □
Notice that the constructed set \( \Sigma \) of DIDs in the above proof depends on \( T \), and the underlying schema \( \mathcal{R} \) contains a predicate for every state and label of \( T \). This proof can now be extended, such that \( \Sigma \) is a fixed set of DIDs and \( \mathcal{R} \) a fixed schema with arity at most two. However, to devise this encoding, we need the expressive power of UCQs.
**Theorem 3.** UCQAns under fixed sets of DIDs is 2ExpTime-hard, even for predicates of arity at most two.
*Proof (sketch).* We adapt the proof of Theorem 2 as follows: Instead of labelling each tree node with a state \( s \) and label \( a \), we generate a chain of nodes, with the length of the chain encoding \( s \) and \( a \). This can be done by the DIDs \( \text{next}(X) \rightarrow \exists Y \text{ chain}(X,Y) \) and \( \text{chain}(X,Y) \rightarrow \text{next}(Y) \lor \text{end}(Y) \). Using this adaptation, neither the schema nor the set of DIDs depends on \( T \) any longer. The CQ of the proof of Theorem 2 can now be carefully adapted to this new encoding of states and labels, and also to check that: (i) any chain has length at most \( n \), where \( n \) is polynomial in the size of \( T \), and (ii) each node and its two children are consistent with the transition function of \( T \).
In the following, we will show that the above theorem holds also for CQs, at the expense of increasing the arity of the underlying schema by one. In the sequel, given a schema \( \mathcal{R} \), let \( \text{arity}(\mathcal{R}) \) be the maximum arity over all predicates of \( \mathcal{R} \). Notice that the following technical result holds for arbitrary DTGDs, and not just for DIDs.
**Lemma 1.** Let \( \mathcal{R} \) be a relational schema. Consider a UCQ \( Q \) over \( \mathcal{R} \), a database \( D \) for \( \mathcal{R} \), and a set \( \Sigma \) of DTGDs over \( \mathcal{R} \). We can construct in polynomial time a CQ \( q' \) over a schema \( \mathcal{R}' \), a database \( D' \) for \( \mathcal{R}' \), and a set \( \Sigma' \) of DTGDs such that \( \text{arity}(\mathcal{R}') = \text{arity}(\mathcal{R}) + 1 \), and \( D \cup \Sigma \models Q \) iff \( D' \cup \Sigma' \models q' \).
*Proof (sketch).* The schema \( \mathcal{R}' \) is obtained from \( \mathcal{R} \) by increasing the arity of every predicate by one. Moreover, we add three predicates \( \text{or} \), \( \text{true} \) and \( \text{false} \). Each DTGD in \( \Sigma \) is adapted in such a way that it always propagates this additional position to the atoms in the head. Each CQ \( q_i \in Q \) is translated into a new CQ \( q_i[X_i] \), where a fresh variable \( X_i \) is added to all atoms of \( q_i \) at the new position. The new query \( q' \) has the body \( \text{false}(Z_1) \land \bigwedge_{i=1}^{k} \big(q_i[X_i] \land \text{or}(Z_i, X_i, Z_{i+1})\big) \land \text{true}(Z_{k+1}) \), where \( k = |Q| \). The database \( D' \) is obtained from \( D \) by extending each atom in such a way that the fresh constant \( t \in \Gamma \) appears in the new position. Also, the atoms \( \text{true}(t) \) and \( \text{false}(f) \), where \( f \in \Gamma \) is a fresh constant, are added. Furthermore, we add an isomorphic image of every \( q_i[X_i] \) to \( D' \), where \( X_i \) is replaced by \( f \). Finally, we add the atoms \( \text{or}(t,t,t) \), \( \text{or}(f,t,t) \), \( \text{or}(t,f,t) \) and \( \text{or}(f,f,f) \).
We can show that the above construction is correct. For each subquery $q_i[X_i]$, there exists a homomorphism mapping it to $D'$. However, this alone is not useful to satisfy $q'$, as $X_i$ is mapped to $f$. By construction, the only way to satisfy $q'$ is to map at least one subquery $q_i[X_i]$ to $\text{chase}(D', \Sigma')$ such that $X_i$ maps to $t$. Note that the only atoms in $\text{chase}(D', \Sigma')$ containing $t$ are the ones derived from the original copy of $D$. Thus, whenever some subquery $q_i \in Q$ is true in a model of $D \cup \Sigma$, then $q'$ is true in the corresponding model of $D' \cup \Sigma'$, and the claim follows.
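To see how the or-atoms drive this construction, the following Python fragment (an illustrative sketch, not part of the proof) folds the gadget over flags for the subqueries: starting from $\text{false}(Z_1)$, each atom $\text{or}(Z_i, X_i, Z_{i+1})$ forces $Z_{i+1} = Z_i \lor X_i$, so $\text{true}(Z_{k+1})$ can only be matched if at least one $X_i$ is mapped to $t$:

```python
# The four database facts or(t,t,t), or(f,t,t), or(t,f,t), or(f,f,f),
# read as or(Z_i, X_i, Z_{i+1}), i.e., Z_{i+1} = Z_i v X_i.
OR_FACTS = {("t", "t"): "t", ("f", "t"): "t", ("t", "f"): "t", ("f", "f"): "f"}

def chain_result(flags):
    """Fold the or-gadget over the flags X_1, ..., X_k (each 't' or 'f')."""
    z = "f"  # false(Z_1)
    for x in flags:
        z = OR_FACTS[(z, x)]  # or(Z_i, X_i, Z_{i+1})
    return z                  # has to match true(Z_{k+1})

print(chain_result(["f", "t", "f"]))  # 't': one satisfied subquery suffices
print(chain_result(["f", "f"]))       # 'f': no subquery satisfied
```

The gadget thus encodes the disjunction over the subqueries of the UCQ within a single conjunctive query.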
Theorem 3 and Lemma 1 immediately imply the following:
**Corollary 1.** CQAns under fixed sets of DIDs is 2ExpTime-hard, even for predicates of arity at most three.
Interestingly, the above corollary closes an open question stated in [25] regarding the complexity of query answering under fixed GFO sentences. It was shown there that the problem in question is PSpace-hard even for CQs, and in ExpTime in the case of acyclic CQs; however, the exact complexity was left open. Clearly, Corollary 1 yields a 2ExpTime-completeness result, since query answering under GFO is in 2ExpTime in general. By combining Theorem 1 and Corollary 1, we get the following.
**Corollary 2.** (U)CQAns under (weakly-)(frontier-)guarded DTGDs, linear DTGDs and DIDs is 2ExpTime-complete in combined complexity. This holds even for predicates of arity at most three, and for fixed sets of dependencies.
4.2 Data Complexity
As already discussed in Section 3, for guarded and weakly-guarded DTGDs the data complexity is coNP-complete and ExpTime-complete, respectively. Below, we show that it remains the same for (weakly-)frontier-guarded DTGDs.
**Theorem 4.** (U)CQAns under frontier-guarded DTGDs is coNP-complete, while for weakly-frontier-guarded DTGDs it is ExpTime-complete in data complexity.
*Proof (sketch).* The coNP upper bound is obtained by reducing our problem to UCQAns under GFO sentences. This can be done by employing the linear reduction of CQAns under frontier-guarded TGDs to UCQAns under GFO sentences given in [2]. The lower bound follows immediately since CQAns under DIDs is already coNP-hard. Consider now a UCQ $Q$, a database $D$, and a set $\Sigma$ of weakly-frontier-guarded DTGDs.
First, we reduce our problem to UCQAns under frontier-guarded DTGDs by replacing the non-affected variables in the rules with all possible constants of $D$. Clearly, the obtained set $\Sigma'$ is of exponential size in the number of non-affected variables, but of polynomial size in $|\text{terms}(D)|$. As discussed above, a linear translation $\tau$ exists such that $D \cup \Sigma' \models Q$ iff $D \cup \tau(\Sigma') \models \tau(Q)$, where $\tau(\Sigma')$ is a GFO sentence and $\tau(Q)$ a UCQ. It is important to note that, although $|\tau(Q)|$ depends on $D$, the size of each CQ of $\tau(Q)$ does not depend on $D$. As shown in [25], UCQAns under GFO is in 2ExpTime w.r.t. the size of each CQ of the given UCQ and the maximum arity of the schema, and in ExpTime w.r.t. the size of the sentence. Since the size of each CQ of $\tau(Q)$ and the maximum arity of the schema are constant, and the size of $\tau(\Sigma')$ is polynomial in $D$, we get an ExpTime upper bound w.r.t. $D$. The lower bound follows immediately since UCQAns under weakly-guarded TGDs is ExpTime-hard [9].
5 Reducing the Complexity
In this section, we demonstrate a way of reducing the combined complexity of query answering under DIDs. We consider frontier-one DIDs, i.e., DIDs with a frontier of cardinality exactly one, for which the complexity is ExpTime-complete. Notice that the class of frontier-one TGDs has been proposed in [1]. Clearly, frontier-one formalisms are quite close to DL axioms, since concept inclusions propagate only one object.
Theorem 5. (U)CQAns for frontier-one DIDs is \textsc{ExpTime}-complete in combined complexity.
*Proof (sketch).* Consider a database \( D \), and a set \( \Sigma \) of frontier-one DIDs. It is possible to associate a tree structure with every instance \( I \in \text{chase}(D, \Sigma) \): \( I \) is partitioned into bags of atoms, such that \( D \) is one such bag, and for each term \( t \) occurring in an atom of \( I \setminus D \), there exists a bag, denoted \( \text{bag}(t) \), such that the atoms in \( \text{bag}(t) \) contain \( t \); \( t \) is called the input value of \( \text{bag}(t) \). These bags are used as labels of the tree structure, such that the bag \( D \) labels the root, and two bags \( \text{bag}(t_1), \text{bag}(t_2) \) are in a parent-child relation iff there exists an atom containing \( t_2 \) in \( \text{bag}(t_1) \). Due to the monadic nature of frontier-one DIDs, the number of atoms in each bag is polynomial in \( D \) and \( \Sigma \), and the number of non-isomorphic bags is exponential in the size of \( \Sigma \). The above tree structure was introduced in [28] for finite instances, but can be extended to infinite trees. It is therefore possible to show that there exists a Rabin automaton (see, e.g., [27]) that is empty iff no instance of \( \text{chase}(D, \Sigma) \) violates \( q \). This tree automaton represents the tree structures of the instances of \( \text{chase}(D, \Sigma) \) that do not satisfy \( q \). Moreover, the size of the automaton is exponential in \( D \) and \( \Sigma \) due to the fact that, as shown in [28], for every instance \( I \in \text{chase}(D, \Sigma) \) with \( I \not\models q \), the tree structure is diversified (i.e., there is no bag except the root that contains two atoms with the same predicate, and there are no directly related bags sharing a term at the same position). Given that Rabin automata can be checked for emptiness in linear time, we establish the desired upper bound.
The lower bound is obtained by a careful adaptation of the proof of Theorem 2 in order to simulate a \textsc{PSpace} alternating Turing machine using a chain of length \( n \), instead of a binary tree of depth \( n \), to store the configurations.
Another formalism with a lower combined complexity is the class of full-identity DIDs, that is, rules of the form \( r(X_1, \ldots, X_n) \rightarrow \bigvee_{i=1}^{m} p_i(X_1, \ldots, X_n) \), which only allow us to copy a tuple. It is easy to show that the combined complexity drops to coNP-complete. Since we are not able to permute terms, each instance of the chase is of polynomial size in the database and the schema. Thus, it suffices to guess such an instance \( I \), and check that it does not entail the query. The lower bound is implicit in [23], where it was shown that query answering under rules of the form \( A(X) \rightarrow B(X) \lor C(X) \) is coNP-hard in data complexity. Notice that query answering under DIDs where each rule is frontier-one or full-identity is ExpTime-complete.
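The guess-and-check argument can be illustrated in Python (our own sketch; the predicates and rules are ad hoc): for full-identity DIDs the chase never invents terms, so every leaf is a polynomial-size superset of the database, and on small examples the leaves can simply be enumerated.

```python
def chase_leaves(db, rules):
    """All chase leaves for full-identity DIDs r(x) -> p_1(x) v ... v p_m(x).

    `rules` maps a body predicate to the list of head predicates its tuples
    may be copied to. Since terms are never invented or permuted, every leaf
    is a subset of predicates(schema) x tuples(db), hence polynomial-size."""
    leaves, todo = set(), {frozenset(db)}
    while todo:
        inst = todo.pop()
        # rule applications not yet satisfied by any head disjunct
        pending = [(p, args) for (p, args) in inst
                   if p in rules and not any((q, args) in inst for q in rules[p])]
        if not pending:
            leaves.add(inst)
            continue
        p, args = pending[0]
        for q in rules[p]:                 # one branch per disjunct
            todo.add(inst | {(q, args)})
    return leaves

# r(X,Y) -> p(X,Y) v q(X,Y) over the database {r(a,b)}
leaves = chase_leaves({("r", ("a", "b"))}, {"r": ["p", "q"]})
print(len(leaves))  # 2
```

A coNP procedure guesses one such leaf and verifies that it does not entail the query; the enumeration above just makes the bounded search space explicit.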
6 Relationships with Existing DLs
As already shown in [15], guarded DTGDs are strictly more expressive than $\mathcal{ELU}$ [29], that is, the well-known DL $\mathcal{EL}$ extended with disjunction. It is indeed straightforward to see that every normalized $\mathcal{ELU}$ TBox, which may contain axioms of the form $A \sqsubseteq B$, $A \sqcap B \sqsubseteq C$, $A \sqsubseteq \exists R.B$ and $A \sqsubseteq B \sqcup C$, where $A, B, C$ are concept names and $R$ is a role name, can be translated in logarithmic space into a set of guarded DTGDs.
The goal of this section is to show an analogous result for DL-Lite$^{\mathcal{H}}_{bool}$ [20], one of the most expressive languages of the DL-Lite family, and also to investigate the impact of our previously established results on query answering under description logics.
To this end, we consider frontier-guarded$[\bot]$ DTGDs, i.e., the formalism obtained by combining frontier-guarded DTGDs with negative constraints (NCs) of the form $\forall X\, \varphi(X) \rightarrow \bot$, where $\varphi(X)$ is a conjunction of atoms. Since deciding whether the given set of dependencies is consistent can be reduced to query answering under frontier-guarded DTGDs, NCs do not increase the complexity of query answering; see, e.g., [5]. We show that frontier-guarded$[\bot]$ DTGDs are strictly more expressive than DL-Lite$^{\mathcal{H}}_{bool}$. In the sequel, consider a DL-Lite$^{\mathcal{H}}_{bool}$ TBox $T = T_C \cup T_R$, where $T_C$ and $T_R$ collect the concept and role inclusions of $T$, respectively, and let $\tau$ be the standard translation of DL axioms into first-order formulas; notice that $\tau(T_R)$ is a set of frontier-guarded DTGDs.
**Lemma 2.** For each concept inclusion $\alpha \in T_C$, a set $\Sigma_\alpha$ of frontier-guarded$[\bot]$ DTGDs which is equisatisfiable with $\tau(\alpha)$ can be constructed in polynomial time.
*Proof (sketch).* Let $\psi = \tau(C \sqsubseteq D)$. Viewed as an implication, we can treat its left-hand side as a conjunction and its right-hand side as a disjunction of subformulas. Whenever such a subformula $\varphi(X)$ is not atomic or existential (or a negated version of the two), we introduce a fresh auxiliary predicate $t_\varphi(X)$ and the two implications $\varphi(X) \rightarrow t_\varphi(X)$ and $t_\varphi(X) \rightarrow \varphi(X)$. If $\varphi$ is itself a disjunction, we additionally split the first implication (i.e., the one where $\varphi(X)$ occurs on the left) into separate implications for each disjunct, preserving the right-hand side. We then recursively apply this transformation to the new implications, until a fixpoint is reached. The resulting implications, after removing negations by converting negated conjuncts on the left to non-negated disjuncts on the right (and conversely for disjuncts on the right), are clearly DTGDs or NCs.
Lemma 2 implies that $\Sigma_C = \bigcup_{\alpha \in T_C} \Sigma_\alpha$ is a set of frontier-guarded$[\bot]$ DTGDs which can be constructed in polynomial time, and it is equisatisfiable with $T_C$. Thus, $\tau(T_R) \cup \Sigma_C$ is a set of frontier-guarded$[\bot]$ DTGDs which is equisatisfiable with $T$. Since $s(X) \rightarrow r(X, X)$ is not expressible in DL-Lite$^{\mathcal{H}}_{bool}$, the next result follows.
**Theorem 6.** Frontier-guarded$[\bot]$ DTGDs are strictly more expressive than DL-Lite$^{\mathcal{H}}_{bool}$.
**Complexity Results.** By exploiting the above construction, query answering under DL-Lite$^{\mathcal{H}}_{bool}$ can be reduced in polynomial time to CQAns under frontier-guarded$[\bot]$ DTGDs. Recall that an ABox $A$ is a finite set of assertions of the form $A_k(a_i)$, $\neg A_k(a_i)$, $P_k(a_i, a_j)$, and $\neg P_k(a_i, a_j)$, where the $A_k$ are concept names, the $P_k$ are role names, and the $a_i, a_j$ are constants; the semantics of $A$ is defined via the translation $\tau$. A TBox $T$ together with an ABox $A$ constitute a knowledge base (KB) $K = \langle T, A \rangle$.
**Lemma 3.** UCQAns under DL-Lite$^{\mathcal{H}}_{bool}$ knowledge bases can be reduced in polynomial time to UCQAns under frontier-guarded$[\bot]$ DTGDs.
**Proof (sketch).** Consider a DL-Lite$^{\mathcal{H}}_{bool}$ KB $K = \langle T, A \rangle$. Let $D_A$ be the database obtained from $A$ by replacing each negated assertion $\neg A(a)$ and $\neg R(a, b)$ with $A^\neg(a)$ and $R^\neg(a, b)$, respectively, where $A^\neg$ and $R^\neg$ are auxiliary predicates. Let $\Sigma_T = \tau(T_R) \cup \Sigma_C \cup \Sigma_\bot$, where $\Sigma_\bot$ contains the NCs $A(X) \wedge A^\neg(X) \rightarrow \bot$ and $R(X, Y) \wedge R^\neg(X, Y) \rightarrow \bot$, for each concept $A$ and role $R$ occurring in $T$, respectively. It is not difficult to verify that $K \models Q$ iff $D_A \cup \Sigma_T \models Q$, for every UCQ $Q$ over $K$. Since $\Sigma_T$ is a set of frontier-guarded$[\bot]$ DTGDs that can be constructed in polynomial time, the claim follows.
It is interesting to observe that the rules employed in the proof of Theorem 2 can be easily rewritten as DL axioms. This immediately gives us the following lower bound.
**Theorem 7.** Let $\mathcal{L}$ be a DL able to express inclusions of the form $C_1 \sqsubseteq C_2 \sqcup C_3$, $C \sqsubseteq \exists R$, $\exists R \sqsubseteq C$, $R_1 \sqsubseteq R_2$ and $R_1 \sqsubseteq R_2^-$, where $C, C_i$ are concepts and $R, R_i$ are roles. Then, CQAns under $\mathcal{L}$ is 2ExpTime-hard.
In [20] it was shown that query answering under DL-Lite$^{\mathcal{H}}_{bool}$ is coNP-complete in data complexity; however, the combined complexity was not investigated and was left as an open problem. Since DL-Lite$^{\mathcal{H}}_{bool}$ is a description logic equipped with limited existential quantification, role inverses and union, Theorems 2 and 7, together with Lemma 3, imply the next complexity result.
**Corollary 3.** CQAns under DL-Lite$^{\mathcal{H}}_{bool}$ knowledge bases is 2ExpTime-complete in combined complexity.
Interestingly, the above corollary significantly strengthens a similar result for the $\mathcal{ALCI}$ DL in [24].
7 Conclusion
We studied the query answering problem under (weakly-)(frontier-)guarded disjunctive TGDs and their main subclasses. Interestingly, query answering under a fixed set of disjunctive IDs is already 2ExpTime-hard. We also investigated the impact of our results on query answering under DL-based formalisms; in particular, we showed that this problem for DLs equipped with limited existential quantification, role inverse and union is 2ExpTime-hard. Regarding future work, we intend to study the impact of the addition of disjunction to non-guarded-based classes of TGDs, in the same complete fashion as in this paper.
Acknowledgements. Pierre Bourhis acknowledges his EPSRC Grant EP/H017690/1 “Query-Driven Data Acquisition from Web-based Datasources”, Michael Morak his DOC Fellowship of the Austrian Academy of Sciences, and Andreas Pieris his EPSRC Grant EP/G055114/1 “Constraint Satisfaction for Configuration: Logical Fundamentals, Algorithms and Complexity” and ERC Grant 246858 “DIADEM”.
References
I. INTRODUCTION
Web tracking refers to a collection of techniques that allow websites to build profiles of their users. While such profiles may be useful for personalized advertising, web tracking is generally considered a threat to user privacy. Whenever a user opens a new web page, she has no way to know whether she is being tracked and by whom. The recent survey by Mayer and Mitchell [22] classifies the mechanisms used to track a user on the web. Web tracking technologies can be roughly divided into two groups: stateful and stateless. Stateful trackers store information (e.g., cookies) on the user’s computer. Several groups of researchers have reported on the usage of different stateful trackers on popular websites [2], [21], [31] and have found that some third-party analytics services were using these mechanisms to recreate cookies after they had been deleted [30]. On the legal side, the European Union amendment to the ePrivacy Directive 2009/136/EC was adopted, and several proposals on web tracking were made [8], [10], [17], [26]. As a consequence, many websites now explain their cookie policy, but so far these regulations impose concrete restrictions only on stateful tracking technologies.
Stateless technologies (often called fingerprinting) collect information about the user’s browser and OS properties, and can distinguish users by these characteristics. The calculation of the amount of identifying information is based on information theory. Eckersley demonstrated by his Panopticlick project [11] that such identification is quite effective.
A. Fingerprinting Example
For a simple illustration of fingerprinting, consider the code snippet in Figure 1. The test name = "FireFox" schematically represents a test of the browser name. Another test, fonts = fontsSet1, schematically represents a check of whether the fonts installed in the browser are the same as some set fontsSet1. Clearly, the information about the browser’s name does not make its user uniquely identifiable. However, a precise list of the fonts installed in the user’s browser makes the user much more distinguishable: Eckersley [12] demonstrated in his experiments that few users share an identical list of fonts.
x := 1; y := 1;
if (name = "FireFox") then y := 0;
if (y = 1) then x := 0;
if (fonts = fontsSet1) then
if (y = 1) then x := 2;
output x;
Figure 1: A possible fingerprinting code
Consider an execution of this program when name="FireFox" and fonts=fontsSet1. On line 2, y is assigned 0; hence the tests (y = 1) fail on lines 3 and 5, and this execution therefore produces the output x = 1. When a tracker observes this output, she concludes that name="FireFox"; however, she does not learn anything about the fonts, since the code on line 5 is dead code in all executions where name="FireFox". One possible protection against fingerprinting is to put a threshold on the amount of information a tracker learns about the browser features. For example, we could allow the execution described above, because the tracker only learns that name="FireFox", but reject the executions where a tracker learns about the installed fonts.
1The test of the browser name corresponds to a call of the navigator.appName browser API.
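As a sanity check (not part of the paper), the Figure 1 program can be transcribed directly into Python and run on a few configurations:

```python
def fingerprint(name, fonts):
    """Transcription of the Figure 1 program: returns the output x
    observed by the tracker for a given browser configuration."""
    x, y = 1, 1
    if name == "FireFox":        # line 2: test of the browser name
        y = 0
    if y == 1:                   # line 3
        x = 0
    if fonts == "fontsSet1":     # line 4: test of the installed fonts
        if y == 1:               # line 5: dead code whenever name == "FireFox"
            x = 2
    return x

# The execution discussed in the text: name="FireFox", fonts=fontsSet1.
print(fingerprint("FireFox", "fontsSet1"))   # -> 1
```

Running the function on non-FireFox configurations confirms the partition of outputs: x = 0 when the fonts differ from fontsSet1, and x = 2 when they match.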
An efficient way to control program executions is dynamic monitoring. Such monitors over-approximate the information leakage associated with the program output. Consider the execution described above: a dynamic monitor would only observe that lines 1, 2, 4 and 6 were executed, and possibly know that the tests \((y = 1)\) on lines 3 and 5 have failed. To provide a sound over-approximation of the leakage, a dynamic monitor could only conclude that the output depends on all the events in the execution trace. Hence, for this execution it would state that the tracker learns that \(name = "FireFox"\) and \(fonts = fontsSet1\). This protection mechanism would then erroneously reject the program execution when \(x = 1\), because it would conclude that the tracker also learns that \(fonts = fontsSet1\), whereas this is not necessarily the case: executions where the fonts are different also result in the same output.
In this paper, we present several hybrid monitoring mechanisms that statically analyse non-executed branches of the program in order to provide a more precise approximation of the leakage. For example, the most precise of our hybrid monitors concludes that when a tracker observes that \(x = 1\), she only learns that \(name = "FireFox"\). The protection mechanism would then allow such a program execution. We present the results of our monitors for this program and show the amount of information contained in the program output in Section VII (see Program 1 of Table VIII-B).
B. Threat Model
Our web tracker extends the gadget attacker [1] model. Like a gadget attacker, a web tracker owns one or more web servers, where the fingerprinting scripts are located. He promotes the inclusion of these scripts into the web pages, offering a tracking or an advertisement service to websites. A web tracker does not have any special network abilities: he can only send and receive network messages from the server under his control.
Beyond the gadget attacker capabilities, a web tracker has one distinctive property: he owns a database of browser fingerprints. Therefore, a web tracker is able to compute the probability distributions for the browser properties that have been fingerprinted. These distributions could also be obtained from other sources, such as Panopticlick [11].
Another important assumption of our framework is that we disregard information leaks related to execution speed and termination.
Fingerprinting scripts are essentially programs, so within the program analysis realm we assume that the web tracker knows the probability distributions of the secret variables, knows the program source code and observes the output of the program.
C. Fingerprinting Protection
A straightforward counter-measure against fingerprinting is to set a threshold on the quantity of information the user agrees to leak and thus decide upon her level of anonymity. In a basic scenario, to protect from fingerprinting, we would suppress an output with a leakage above the threshold and halt the program. We discuss alternatives for protection and possible security guarantees in Section VII-B.
Depending on the browser configuration, the same program might leak very different amounts of information. Our goal is to run a program for a browser user whose browser configuration does not incur a leakage exceeding the threshold. To achieve this goal, we need a definition of information leakage which is sensitive to the browser configuration.
D. Quantification of Information Leakage
In order to quantify the identifiability of a user’s browser configuration, Eckersley [12] uses the notion of self-information, or surprisal, from information theory. If the probability of a browser feature \(f\) having a value \(v\) is \(P(f = v)\), then the self-information is
\[ I(f = v) = -\log_2 P(f = v) \]
Eckersley argues that “surprisal can be thought of as an amount of information about the identity of the object that is being fingerprinted”. Consider the fingerprinting program from Figure 1. Assume for simplicity that the fonts cannot be checked. Then the only two possible outputs are \(x = 0\) and \(x = 1\). How much information is contained in \(x\) in each case? To demonstrate self-information and discuss standard definitions for quantitative information flow (QIF), we assume that the probability of a browser name being “FireFox” is 0.21.
Self-information gives a precise answer to this question: the fact that \(x = 1\) (respectively, \(x = 0\)) means that a browser name is “FireFox” (respectively, browser name is not “FireFox”):
\[ I(x=1) = I(name="FireFox") = -\log_2 0.21 = 2.25 \text{ bits} \]
\[ I(x=0) = I(name\neq "FireFox") = -\log_2 0.79 = 0.34 \text{ bits} \]
This example demonstrates that the actual amount of information that the tracker learns from observing the output of a program execution can differ a lot from one execution to another.
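The two surprisal values above can be reproduced with a small helper (a sketch; the probability 0.21 for a browser name being “FireFox” is the assumption stated above):

```python
import math

def self_information(p):
    """Self-information (surprisal), in bits, of an event with probability p."""
    return -math.log2(p)

print(round(self_information(0.21), 2))  # I(name =  "FireFox") -> 2.25
print(round(self_information(0.79), 2))  # I(name != "FireFox") -> 0.34
```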
The standard definitions of QIF (such as Shannon entropy, min-entropy, guessing entropy etc.) compute an average amount of information leakage for all possible
program outputs. For example, Shannon entropy based definition computes:
\[ H(\text{name}) - H(\text{name}|x) = 0.74 \text{ bits} \]
Entropy-based definitions characterise the information flow of the program as a whole, hence providing the same (average) quantification of program leakage for all browser users. As a consequence, even when an entropy-based definition predicts a relatively small leakage, the browser configuration of a particular user can be leaked completely to the tracker. In other words, the entropy-based approach may fail to ensure the desired privacy guarantees for a single user. We hence use self-information to quantify the leakage caused by a program execution for a concrete user.
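For comparison, the Shannon-entropy figure of 0.74 bits above is the entropy of the Boolean variable name = "FireFox": over this two-valued abstraction, observing \(x\) reveals exactly that bit, so \(H(\text{name}|x) = 0\). A quick check (not from the paper):

```python
import math

def binary_entropy(p):
    """Shannon entropy, in bits, of a Boolean variable that is true with probability p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(round(binary_entropy(0.21), 2))  # H(name) - H(name|x) -> 0.74
```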
E. Motivation for Hybrid Monitoring
Static analyses have been broadly used for QIF analysis [3], [6], [13], [21]. These static analyses approximate the leakage using entropy-based measures. The main challenge in applying a purely static approach to fingerprinting is to perform a precise, whole-program analysis of JavaScript programs. On the other hand, dynamic analyses have been successfully implemented in web browsers [11], [12], [13] in order to enforce non-interference of JavaScript programs. However, purely dynamic techniques that analyse one program execution were shown to be either imprecise or unsound [25] for information flow analysis.
We hence propose to investigate a hybrid approach to monitoring QIF, in which a dynamic analysis takes advantage of static analysis techniques. The static analyses are able to retain precision because they exploit information from the execution and are applied locally. Furthermore, the monitoring has strong formal guarantees thanks to the static analysis component, which only analyses non-executed branches.
F. Contributions
- We propose a novel approach to hybrid information flow monitoring based on tagging variables with the knowledge about secrets rather than with security levels.
- We define a generic hybrid monitor, parametrized by a static analysis, and give generic formal results on the relation between soundness and precision.
- We identify a soundness requirement on the static analysis that is sufficient to prove soundness of the generic hybrid monitor.
- The genericity of the framework greatly facilitates the formal comparison of the precision of hybrid monitors.
- We instantiate the generic hybrid monitor with a combination of static dependency analysis and constant propagation, and derive three other monitors by weakening the static analyses (including monitors similar in spirit to those of Le Guernic et al. [19], [20]). We then prove that our hybrid monitor is more precise than the three other monitors and establish a hierarchy of hybrid monitors, ordered by precision.
The paper is organised as follows. Section II defines the syntax and semantics of a simple programming language, designed for studying fingerprinting of browser features. Section III reviews basic definitions from quantitative information flow and derives a symbolic representation of knowledge. Section IV presents the generic hybrid monitor and Section V proves its correctness, relative to the correctness of the involved static analyses. Section VI defines a precise hybrid monitor based on constant propagation and dependency analysis and Section VII explains how other, simpler monitors can be obtained as instances of the generic monitor. Section VIII compares with related work and Section IX concludes. The correctness of the framework has been proved using the proof assistant Coq. The Coq model and the machine-checked proof of correctness can be found on an accompanying webpage [27].
II. Language
We develop the monitor for a small, imperative programming language modified slightly to focus on fingerprinting of browser features. We assume an identified subset \(\text{Feat}\) of program variables that represents the browser features. Feature variables can be read but not assigned. We restrict the conditionals in if statements to be comparisons of features with variables and values, as these are the only tests that are relevant for the fingerprinting analysis. We use the following notation:
- \(\text{Var}\) is the set of all program variables;
- \(\text{Feat} \subseteq \text{Var}\) is the set of variables that represent the browser features, ranged over by \(f\);
- \(\text{Val}\) is a set of values, including Boolean, integer and string values;
- \(x \in \text{Var}\setminus\text{Feat}\) ranges over program variables that are not features;
- \(n \in \text{Val}\) is a constant; and
- \(\oplus\) is an arbitrary binary operator.
A program \(P\) is a command \(S\) followed by the output of a variable. Note that outputting a list of variables can be emulated by concatenating them using a special operator. The language’s syntax is defined in Figure 3.
The semantics is defined in Figure 4 as a big-step evaluation relation \((S, \rho) \Downarrow_C \rho'\). This relation evaluates a command \(S\) in an environment \(\rho : \text{Var}\setminus\text{Feat} \mapsto \text{Val}\). The semantics is parametrized by a configuration \(C : \text{Feat} \mapsto \text{Val}\), which remains unmodified during the evaluation. We denote by Config the set of all possible configurations.
III. Knowledge Representation and Quantitative Leakage
A. Concrete domain of configurations
We take as our starting point the definitions of quantitative information flow from Köpf and Basin [11] and Smith [29]. Since our programs are deterministic, every program \( S; \text{output} \ o \) determines a partial function from a configuration \( C \) to an output \( v \). In our notation, this means that the program has run under the configuration \( C \), \( (S, \rho_0) \Downarrow_C \rho \), and produced the output \( v = \rho(o) \). Following [11], [29], a program \( S; \text{output} \ o \) partitions \( \text{Config} \) according to the final value of \( o \).
Definition 1 (Equivalence Class). Given a program \( S \), a configuration \( C \), an initial environment \( \rho_0 \) and an output variable \( o \), an equivalence class is defined as
\[
\text{Eq}(S, C, \rho_0, o) = \left\{ C' \mid (S, \rho_0) \Downarrow_C \rho \land (S, \rho_0) \Downarrow_{C'} \rho' \Rightarrow \rho(o) = \rho'(o) \right\}.
\]
Once the program \( S \) has executed on the configuration \( C \), a tracker can observe that the actual configuration of the user’s browser is one of \( \text{Eq}(S, C, \rho_0, o) \). How much does this equivalence class tell the tracker? If \( \text{Eq}(S, C, \rho_0, o) = \text{Config} \), the tracker has not learned anything about the actual configuration, hence no information flow has occurred. At the other extreme, if \( \text{Eq}(S, C, \rho_0, o) = \{ C \} \), then by observing the output \( o \) the tracker uniquely identifies \( C \), which means total leakage of configuration \( C \). All the other cases represent partial leakage.
Consider again the program from Figure 1. For the sake of simplicity, we assume that name and fonts are the only two browser properties we are interested in. Let the user’s browser be “Opera”. In this case \( x = 0 \), and the tracker is not able to conclude exactly the name of the user’s browser. This partial leakage is precisely captured by the equivalence class of configurations with the name being different from “FireFox”.
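The equivalence classes of Definition 1 can be enumerated explicitly over a small, hypothetical configuration space (not from the paper; the browser names and font sets below are illustrative assumptions):

```python
from itertools import product

def fingerprint(name, fonts):
    # The Figure 1 program, transcribed into Python.
    x, y = 1, 1
    if name == "FireFox":
        y = 0
    if y == 1:
        x = 0
    if fonts == "fontsSet1":
        if y == 1:
            x = 2
    return x

# Hypothetical finite configuration space: (name, fonts) pairs.
CONFIGS = list(product(["FireFox", "Opera", "Chrome"], ["fontsSet1", "fontsSet2"]))

def eq_class(conf):
    """Eq(S, C, rho_0, o): configurations producing the same output as conf."""
    out = fingerprint(*conf)
    return {c for c in CONFIGS if fingerprint(*c) == out}

# An "Opera" user whose fonts differ from fontsSet1 outputs x = 0.
print(sorted(eq_class(("Opera", "fontsSet2"))))
```

Note that with the font test of line 4 included, the class for x = 0 records the failed font comparison as well as the browser name; for a "FireFox" user, the class contains every FireFox configuration, regardless of fonts.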
Following the definition of self-information (see Section I-D), we define a leakage function \( \text{Leak}: \mathcal{P}(\text{Config}) \rightarrow \mathbb{R}^+ \) for a set of configurations, assuming that the probability \( P(C) \) of every configuration is known (for example, from Panopticlick [11]).
Definition 2. The leakage of a set of configurations \( A \subseteq \text{Config} \) is defined as follows:
\[
\text{Leak}(A) = -\log_2 \sum_{C \in A} P(C).
\]
The \( \text{Leak} \) function has the following properties:
- \( \text{Leak}(\text{Config}) = 0 \), which corresponds to the case of non-interference.
- For any two sets of configurations \( A_1 \) and \( A_2 \): if \( A_1 \subseteq A_2 \) then \( \text{Leak}(A_1) \geq \text{Leak}(A_2) \).
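Definition 2 is directly executable once each configuration is assigned a probability (a sketch, not from the paper; the four-configuration distribution below is an illustrative assumption):

```python
import math

# Hypothetical probabilities over four configurations (they sum to 1).
P = {("FireFox", "fs1"): 0.15, ("FireFox", "fs2"): 0.06,
     ("Opera", "fs1"): 0.40, ("Opera", "fs2"): 0.39}

def leak(A):
    """Leak(A) = -log2 of the total probability mass of the configuration set A."""
    return -math.log2(sum(P[c] for c in A))

# Leak(Config) = 0: observing an output consistent with every configuration
# corresponds to non-interference.
print(abs(leak(set(P))) < 1e-9)  # True

# Anti-monotonicity: a smaller set of configurations leaks more.
print(leak({("FireFox", "fs1")}) >= leak({("FireFox", "fs1"), ("FireFox", "fs2")}))  # True
```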
B. Symbolic representation of sets of configurations
We define an abstract domain of configurations, whose elements represent sets of configurations, as follows:
\[
\text{Config}^\sharp \ni cg ::= tt \mid ff \mid f = n \mid f \neq n \mid cg \land cg \mid cg \lor cg
\]
where, for \( \bowtie \in \{=, \neq\} \), \( f \,\overline{\bowtie}\, n \) stands for \( f \neq n \) if “\(\bowtie\)” is “\(=\)” and for \( f = n \) otherwise.
We write \( \mathcal{M}(cg) \) for the models of the Boolean formula \( cg \) i.e., the set of configurations that satisfy the Boolean formula \( cg \).
During our analysis we will compute a Boolean formula \( cg \) for every output of the program. For example, for the output \( x = 2 \) of the program from Figure 1, the resulting formula will be
\[
(name \neq "\text{FireFox}") \land (\text{fonts} = \text{fontsSet1})
\]
Similarly to the leakage function for a set of configurations, we define a leakage function \( \text{Leak}^\sharp : \text{Config}^\sharp \rightarrow \mathbb{R}^+ \) for Boolean formulas representing sets of configurations.
**Definition 3.** The leakage of a Boolean formula \( cg \in \text{Config}^\sharp \) is defined as follows:
\[
\text{Leak}^\sharp(cg) = -\log_2 \sum_{C \in \mathcal{M}(cg)} P(C).
\]
The \( \text{Leak}^\sharp \) function has the following properties:
- \( \text{Leak}^\sharp(tt) = 0 \), which corresponds to the case of non-interference.
- For all \( cg_1, cg_2 \in \text{Config}^\sharp \): if \( cg_1 \Rightarrow cg_2 \) then \( \text{Leak}^\sharp(cg_1) \geq \text{Leak}^\sharp(cg_2) \).
The last property of the \( \text{Leak}^\sharp \) function is particularly important for our quantitative information flow monitors. As a result, our hybrid monitor will strive to weaken formulas: a weaker formula means that the computed leakage is smaller.
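The abstract domain and \( \text{Leak}^\sharp \) can be given a small executable reading (a sketch, not from the paper; the paper manipulates formulas syntactically, whereas here they are simply evaluated over an assumed two-feature configuration space with a uniform distribution):

```python
import math
from itertools import product

# Assumed finite feature space: features "name" and "fonts".
CONFIGS = [dict(zip(("name", "fonts"), v))
           for v in product(["FireFox", "Opera"], ["fontsSet1", "fontsSet2"])]
P = {i: 0.25 for i in range(len(CONFIGS))}   # illustrative uniform distribution

# Formulas of Config# as predicates on configurations.
tt = lambda c: True
ff = lambda c: False
def eq(f, n):   return lambda c: c[f] == n
def neq(f, n):  return lambda c: c[f] != n
def conj(a, b): return lambda c: a(c) and b(c)
def disj(a, b): return lambda c: a(c) or b(c)

def models(cg):
    """M(cg): the configurations (here, their indices) satisfying cg."""
    return {i for i, c in enumerate(CONFIGS) if cg(c)}

def leak_sharp(cg):
    return -math.log2(sum(P[i] for i in models(cg)))

# The formula computed for the output x = 2 of Figure 1.
cg = conj(neq("name", "FireFox"), eq("fonts", "fontsSet1"))
print(len(models(cg)), round(leak_sharp(cg), 2))   # -> 1 2.0
```

A weaker formula has more models and hence a smaller leakage: dropping the font conjunct leaves two models and only 1 bit of leakage.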
**IV. Generic Model of Hybrid Monitors**
In qualitative (“high-low”) information flow control, Le Guernic et al. [20] have shown that a dynamic information flow analysis can be improved by a static analysis of conditional branches that are not being taken. In this section, we generalise those results for quantitative information flow and define a generic hybrid monitor combining static and dynamic analysis. Moreover, we present a static analysis able to dynamically prove the non-interference of programs that were previously out of reach of existing hybrid monitors.
**A. Formal Definitions**
The monitor will be defined as an operational semantics, parametrized by the configuration \( C \)
\[
(S, (\rho, K)) \Downarrow_C (\rho', K')
\]
with a monitoring mechanism for tracking the information flow from the browser features to the output of the program. The new semantic state \((\rho, K)\) has the following components:
- \( \rho : \text{Var} \rightarrow \text{Val} \) is the environment for program variables.
- \( K : \text{Var} \rightarrow \text{Config}^\sharp \) is an environment of knowledge about features stored in the non-feature variables.
The knowledge is represented by a formula from the abstract domain \( \text{Config}^\sharp \) (see Section III). In traditional information flow analysis, variables are tagged with security levels, whereas our analysis is based on the knowledge environment \( K \), which represents the knowledge contained in every program variable. This knowledge can flow into a variable either directly, through an assignment, or indirectly, by an update of the variable inside a conditional that depends on some feature value. Such a knowledge environment is thus a generalisation of a simple dependency function between variables, in that it contains additional information about the values of browser features. For example, a knowledge environment \( K \) may contain the following knowledge about the configuration: \( K(x) = (\text{name} = \text{"FireFox"}) \land (\text{fonts} = \text{fontsSet1}) \). The initial knowledge environment \( K_0 \) is defined by \( \forall x.\, K_0(x) = tt \), which means that no variable contains any knowledge about the browser configuration.
The monitor relies on the auxiliary function \( \kappa \) (defined in Figure 3), which approximates the information obtained from the evaluation of an expression. Evaluating a feature variable \( f \) gives access to its value and therefore transmits the information \( f = C(f) \), where \( C \) is the configuration of the browser. Accessing a non-feature variable \( x \) provides the knowledge present in that variable, as defined by the knowledge environment: \( K(x) \). The knowledge in an arithmetic expression \( e_1 \oplus e_2 \) is approximated by the combination of the knowledge in \( e_1 \) and \( e_2 \).
The evaluation relation \( \Downarrow_C \), defining the big-step semantics of the generic hybrid monitor parametrized by a configuration \( C \), is presented in Figure 4. The rules [skip], [seq], [if-then], [if-else] and [while-loop] correspond to the rules of the standard semantics and are straightforward. The rule [assign] updates the value environment with the new value of \( x \). Notice that in traditional dynamic and hybrid information flow analysis [28], variable \( x \) would be assigned a “high” security level when it is assigned within a “high” security context. In our setting, this would mean that the knowledge in variable \( x \) should be updated with the knowledge from the security context. We do not keep track of the security context, and, as we show in Section V, our monitors are sound and even more precise than monitors that keep track of the security context.
The rule [if-then] deals with the implicit flow of information due to conditionals. Assuming that the Boolean expression \( B \) evaluates to true, the semantics evaluates \( S_1 \) and statically analyses the non-executed branch \( S_2 \). The new monitor state \( \llbracket B, K, s', s'' \rrbracket_{\rho} \) approximates the knowledge obtained from both branches. We explain this combination of states (defined in Figure 5) immediately after the presentation of the static analysis.
**B. The Role of the Static Analysis**
The hybrid monitor is generic because it is parametrized by a static analysis providing information about the branches that are not executed. The precision of the hybrid monitor can be improved if we know that the value of a variable, say \( x \), after the non-executed branch is identical to its value after the executed branch. The static analysis computes an abstract state for the non-executed branch.
The analysis starts from a concrete environment $\rho$ of values from the execution and computes an abstract state $s^\sharp = (\rho^\sharp, D)$, where $\rho^\sharp : \text{Var} \to \text{Val} \cup \{\top\}$ is an abstract environment and $D : \text{Var} \to \mathcal{P}(\text{Var})$ is the dependency information for each variable. The role of the abstract environment $\rho^\sharp$ is to detect variables whose values are identical on both branches.
The results of the static analysis are used in the [ifThen] rule via the state combination defined in Figure 5. The auxiliary function $\delta$ is used for approximating the information coming from conditionals. The equations defining $\delta$ state that the comparison $f \bowtie n$ of a feature variable with a value provides exactly that information. Comparing a non-feature variable $x$ with a constant provides at most the information about feature variables that is present in $x$. Finally, the comparison $f \bowtie x$ of a feature with a non-feature variable transmits at most the information present in $x$ and the information that $f$ is equal to the current value of $x$, as defined in the environment $\rho$.
The new environment $\rho'$ is taken from the result of the executed branch and the new knowledge environment $K''$ is updated as follows.
If the values of a variable $x$ are not the same after the execution of the two branches, then $x$ definitely obtains complete knowledge about the conditional $B$. We represent this as the conjunction of the knowledge in $x$ ($K'(x)$) with the knowledge in $B$ ($\delta(B)_\rho^K$).
If the values of a variable $x$ are the same after the execution of both branches, then the variable $x$ does not contain complete knowledge about the conditional test $B$. Instead, from the attacker’s point of view, the new knowledge in $x$ can be obtained either from the executed branch or from the non-executed branch. The formula we obtain can be understood as an abstraction of the standard weakest precondition of a conditional statement:
$$wp(\text{if } B \text{ then } S_1 \text{ else } S_2) = \bigwedge \left( \begin{array}{c} \neg B \Rightarrow wp(S_2) \\ B \Rightarrow wp(S_1) \end{array} \right)$$
Here, the knowledge in $x$ flowing from the non-executed branch is obtained from the knowledge of the variables used to compute $x$ ($\bigwedge_{y \in D(x)} K(y)$), and the knowledge in $x$ flowing from the executed branch is obtained by the monitoring mechanism. Notice that $\bar{\delta}(B)$ is not exactly the negation of $\delta(B)$ but an abstraction of it, because $\delta(B)$ is by construction an over-approximation of the knowledge of $B$.
Consider the following program:
1. $x := 1; y := 0$
2. $\text{if } (f = 0) \text{ then } y := 1$
3. $\text{else skip}$
4. $\text{if } (g = 0) \text{ then skip}$
5. $\text{else } x := y$
6. $\text{output } x$
Here, $f$ and $g$ are feature variables that are equal to zero in the current configuration. Before the execution of the test $g = 0$, the variable $y$ already contains some knowledge: $K(y) = (f = 0)$. Let us assume that the static analysis tracks the values and is able to detect that $x$ depends on $y$. The resulting state of this static analysis after evaluating $x := y$ is $\rho^\sharp(x) = 1, D(x) = \{y\}$. The resulting state after the execution of the $\text{skip}$ branch would remain unchanged. Now, since the value of $x$ would be the same, equal to 1, after the execution of either of the branches, the tracker can only conclude that either $g = 0$ or $f = 0$. Our combination of states computes exactly this knowledge: $(\delta(g = 0)_\rho^K \lor K(y)) \land (\bar{\delta}(g = 0)_\rho^K \lor K'(x)) = (g = 0) \lor (f = 0)$.
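This computation can be checked semantically (a sketch, not from the paper): enumerating the four valuations of the assumed feature domains f, g ∈ {0, 1}, the state combination of the [ifThen] rule agrees with (g = 0) ∨ (f = 0) on every configuration:

```python
from itertools import product

CONFIGS = list(product([0, 1], [0, 1]))            # (f, g) valuations

# K''(x) when rho#(x) = rho'(x):
#   (not delta(B) => /\ K(y))  and  (not delta_bar(B) => K'(x))
# with B: g = 0, K(y): f = 0, and K'(x): tt.
def k_combined(f, g):
    impl = lambda a, b: (not a) or b
    return impl(not (g == 0), f == 0) and impl(not (g != 0), True)

def k_expected(f, g):                              # (g = 0) \/ (f = 0)
    return g == 0 or f == 0

assert all(k_combined(f, g) == k_expected(f, g) for f, g in CONFIGS)
print(sorted(c for c in CONFIGS if k_expected(*c)))  # -> [(0, 0), (0, 1), (1, 0)]
```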
Notice that no static analysis is involved in purely dynamic monitoring, and yet we can model it as a special case of our hybrid monitor. The abstract environment can be taken as \( \forall x.\, \rho^\sharp(x) = \top \); we thus obtain a simple dynamic monitor that does not reason about non-executed branches, but instead pessimistically decides that all the variables will contain knowledge from the tests of the if-statements. The new knowledge in \(x\) then contains the knowledge from the executed branch and from the test \(B\): \(\delta(B)_\rho^K \land K'(x)\).

\( \llbracket B, K, (\rho', K'), (\rho^\sharp, D) \rrbracket_\rho = (\rho', K'') \), where

\[
K''(x) = \begin{cases} \left( \neg\delta(B)_\rho^K \Rightarrow \bigwedge_{y \in D(x)} K(y) \right) \land \left( \neg\bar{\delta}(B)_\rho^K \Rightarrow K'(x) \right) & \text{if } \rho^\sharp(x) = \rho'(x) \\ \delta(B)_\rho^K \land K'(x) & \text{otherwise} \end{cases}
\]

\[
\begin{aligned}
\delta(f \bowtie n)_\rho^K &= f \bowtie n \\
\bar{\delta}(f \bowtie n)_\rho^K &= f \,\overline{\bowtie}\, n
\end{aligned}
\]

Figure 5: State combination for the [ifThen] rule from Figure 4.
V. GENERIC SOUNDNESS AND PRECISION THEOREMS
In this section, we establish the soundness and precision theorems that hold for the generic model of hybrid monitors presented in Section IV. Comprehensive proofs can be found in the companion Coq development [27].
A. Monitor Soundness and Precision
A concrete hybrid monitor is obtained by instantiating the generic model with a given static analysis, say \(A\). In the following, we write \(\Downarrow^A\) for a hybrid monitor that uses the static analysis \(A\). A monitor \(\Downarrow^A\) is sound if, after monitoring a statement \(S\), it over-approximates the knowledge about features contained in the output variable \(x\). The formal statement of this property is given in Definition 4.
Definition 4 (Monitor soundness). A hybrid monitor \(\Downarrow^A\) is sound if, starting from an initial configuration \(C\) and the initial environment \((\rho, \lambda x.\,tt)\), monitoring a statement \(S\) reaches a final configuration \((\rho', K)\),
\[(S, (\rho, \lambda x.\,tt)) \Downarrow^A_C (\rho', K)\]
such that for every variable \(x\), \(K(x)\) under-approximates the set of indistinguishable configurations:
\(\mathcal{M}(K(x)) \subseteq Eq(S, C, \rho, x)\).
Notice that while \(K(x)\) under-approximates a set of configurations, it over-approximates the knowledge of the attacker. Assume an attacker has the knowledge \(K(x)\), modelling a subset of the actually indistinguishable configurations \(Eq(S, C, \rho, x)\). Then she can more easily distinguish between the possible configurations; thus her knowledge is over-approximated.
The most precise monitor would compute exactly \(Eq(S, C, \rho, x)\), the set of configurations indistinguishable from \(C\) by observing the value of \(x\). In general, the closer the set \(\mathcal{M}(K(x))\) is to \(Eq(S, C, \rho, x)\), the more precise the monitor.
Definition 5 (Monitor precision). A hybrid monitor \(\downarrow^A\) is more precise than a hybrid monitor \(\downarrow^B\) if for every statement \(S\) and initial configuration \(C\), the monitor \(\downarrow^A\) always computes a bigger set of configurations corresponding to the knowledge stored in output variable \(x\). Formally,
\[
\begin{cases}
(S, (\rho, K_0)) \downarrow^A (\rho_A, K_A) \\
(S, (\rho, K_0)) \downarrow^B (\rho_B, K_B)
\end{cases} \Rightarrow \mathcal{M}(K_B(x)) \subseteq \mathcal{M}(K_A(x)).
\]
This is coherent with the definition of leakage in Section III because the leakage function is anti-monotonic in the set of configurations. Thus, a more precise monitor would estimate a smaller leakage, i.e., a larger set of configurations:
\(Leak^A(K_A(x)) \leq Leak^B(K_B(x))\).
In Section VI we will define a static analysis that will induce a sound monitor that is more precise than any other monitor we propose. This has the consequence that we can prove soundness of other monitors by proving that they are less precise than our hybrid monitor. This result is particularly useful when monitors are obtained by weakening the static analysis they employ, as is done when defining the hierarchy of monitors in Section VI.
B. Soundness Requirements for a Static Analysis
The generic hybrid monitor has a generic soundness proof relying only on a requirement for the static analysis. As explained in Section IV, the role of the static analysis is to extract executions within the non-executed branch that are indistinguishable from the executed branch and estimate the knowledge that is carried by the variables. Definition 6 provides the formal specification for static analyses that are compliant with our generic hybrid monitor.
Definition 6 (Sound Static Analysis). A static analysis \(\Downarrow^\sharp\) is sound (for our hybrid monitor) if, whenever
\[(S, \rho) \Downarrow \rho' \quad\text{and}\quad (S, \rho_0) \Downarrow^\sharp (\rho^\sharp, D),\]
the following implications hold for every variable \(x\):
\[\rho^\sharp(x) = v \;\Rightarrow\; \rho'(x) = v\]
\[(\forall y \in D(x).\ \rho(y) = \rho_0(y)) \wedge (S, \rho_0) \Downarrow \rho_0' \;\Rightarrow\; \rho'(x) = \rho_0'(x)\]
That is, the abstract environment correctly predicts concrete values, and the final value of \(x\) only depends on the initial values of the variables in \(D(x)\).
**Theorem 1 (Soundness).** Suppose a static analysis \(A\) that is sound according to Definition 6. Then the hybrid monitor \( \downarrow^A \) is sound according to Definition 4 and therefore safely approximates information leakage.
The proof of this theorem is part of the Coq development [27].
**C. Precision Requirements for a Static Analysis**
The relative precision of different monitoring mechanisms is often difficult to establish, at least formally. In our generic hybrid monitor the precision of the monitor is directly linked to the strength of the static analysis: a better static analysis yields a more precise monitor.
**Definition 7 (More Precise Analysis).** An analysis \( A \) is more precise than an analysis \( B \) if, for any result of the static analysis \( B \), the result of analysis \( A \) is more precise, i.e., the abstract environment is more defined and the computed dependency sets are smaller:
\[(S, \rho) \Downarrow^B (\rho_B, D_B) \wedge (S, \rho) \Downarrow^A (\rho_A, D_A) \Rightarrow \]
\[\forall x, v.\ \rho_B(x) = v \Rightarrow \rho_A(x) = v \]
\[\forall x.\ D_A(x) \subseteq D_B(x) \]
Using the previous definition of precision, we are able to state the following generic theorem.
**Theorem 2 (Relative Precision).** If a static analysis \( A \) is more precise than a static analysis \( B \) (according to Definition 7), then the hybrid monitor \( \downarrow^A \) is more precise than the hybrid monitor \( \downarrow^B \) (according to Definition 5).
The proof is by induction over the definition of the monitor semantics \( \downarrow \) and follows from the fact that all the rules are monotonic with respect to the ordering of the knowledge \( K \). This is especially the case for the \texttt{ifThen} rule: a stronger analysis computes fewer spurious dependencies and therefore a weaker formula. Remember that weaker is better and that non-interference corresponds to computing the formula \( tt \). The full proof is part of the Coq development [27].
This theorem is the key for comparing the different existing and novel hybrid monitors presented in Sections VI and VII.
**D. Where are the Security Contexts?**
Security contexts are a traditional ingredient of static and dynamic information flow mechanisms. Perhaps surprisingly, our generic hybrid monitor is sound even in the absence of a security context, and ignoring the security context leads to a more precise monitor. Our generic hybrid monitor could incorporate a security context \( \sigma \) by rewriting the \texttt{[assign]} rule as
\[K' = K[x \mapsto \kappa(E)_\rho^K \wedge \sigma]\]
\[(x := E, (\rho, K, \sigma)) \downarrow (\rho[x \mapsto [\![E]\!]_\rho], K', \sigma)\]
and adapting the \texttt{[ifThen]} rule accordingly.
One explanation for this apparent paradox is that our \texttt{[ifThen]} incorporates the knowledge of the current condition and therefore includes the security context on a “lazy” basis.
**Theorem 3 (Security Context).** For a given (sound) static analysis, a monitor not using security contexts is always sound and more precise than a monitor using security contexts.
This result is a direct consequence of Theorem 2 and the fact that the assignment rule with security context computes a stronger formula. The proof is also part of the Coq development [27].
For a big-step semantics, this reasoning is very natural, but we believe the same precision can be achieved for a small-step operational semantics at the cost of some bureaucracy, e.g., an explicit stack of security contexts. What is crucial for precision is to never incorporate the knowledge of conditions during the \texttt{[assign]} rule. If programs were allowed to output values at any time, even our big-step semantics would require a security context. A simple approach would then consist in incorporating the knowledge of the security context only when a value is output.
It is also worth noting that ignoring the security context does not improve the purely dynamic monitor: the security context will eventually be included. However, the improvement is visible for hybrid monitors and makes it possible to prove the absence of information flow in programs like \texttt{if C then x:=1 else x:=1; output x}.
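For the program \texttt{if C then x:=1 else x:=1; output x}, the difference can be sketched as follows (a hypothetical encoding where knowledge is coarsely modelled as a set of secret features, the empty set playing the role of \(tt\)):

```python
def knowledge_of_x(ctx_sensitive):
    """Monitor 'if C then x:=1 else x:=1' where both branches assign
    the same value to x; returns the set of secrets x is tainted with."""
    v_then, v_else = 1, 1
    if ctx_sensitive:
        # with a security context, any assignment under test C taints x
        return {"C"}
    # lazy treatment: the branch values coincide, so x reveals nothing
    return set() if v_then == v_else else {"C"}

assert knowledge_of_x(ctx_sensitive=True) == {"C"}
assert knowledge_of_x(ctx_sensitive=False) == set()   # no leakage
```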
**VI. A Hybrid Monitor with Constant Propagation and Dependency Analysis**
In this section we define a hybrid monitor that employs a static analysis which can take full advantage of the concrete values available to the dynamic part of the hybrid monitor. Our static analysis is a combination of constant propagation and dependency analysis. As explained in Section \[VII\], the hybrid monitor can take advantage of the fact that a variable has the same value on both branches of a conditional to make a more accurate estimation of the knowledge about features contained in that variable.
\[
\begin{align*}
\text{[ASkip]} & \quad (\texttt{skip}, s) \Downarrow^\sharp s \\
\text{[ASeq]} & \quad \frac{(S_1, s_1) \Downarrow^\sharp s_2 \quad (S_2, s_2) \Downarrow^\sharp s_3}{(S_1; S_2, s_1) \Downarrow^\sharp s_3} \\
\text{[AIfComb]} & \quad \frac{[\![B]\!]^\sharp_{\rho^\sharp} = tt \quad (S_1, s) \Downarrow^\sharp s_1 \quad (S_2, s) \Downarrow^\sharp s_2}{(\texttt{if } B \texttt{ then } S_1 \texttt{ else } S_2, s) \Downarrow^\sharp \langle B, D, s_1, s_2 \rangle^\sharp_\rho} \\
\text{[AIfElse]} & \quad \frac{[\![B]\!]^\sharp_{\rho^\sharp} = f\!f \quad (\texttt{if } \neg B \texttt{ then } S_2 \texttt{ else } S_1, s) \Downarrow^\sharp s'}{(\texttt{if } B \texttt{ then } S_1 \texttt{ else } S_2, s) \Downarrow^\sharp s'} \\
\text{[AIfTop]} & \quad \frac{[\![B]\!]^\sharp_{\rho^\sharp} = \top \quad (S_1, s) \Downarrow^\sharp s_1 \quad (S_2, s) \Downarrow^\sharp s_2}{(\texttt{if } B \texttt{ then } S_1 \texttt{ else } S_2, s) \Downarrow^\sharp s_1 \sqcup s_2} \\
\text{[AWhile]} & \quad \frac{(S, s') \Downarrow^\sharp s_1 \quad s_1 \sqsubseteq s' \quad s \sqsubseteq s'}{(\texttt{while } B \texttt{ do } S, s) \Downarrow^\sharp s'}
\end{align*}
\]
where \(s = (\rho, D)\) in [AIfComb].
Figure 6: Constant propagation and dependency analysis.
An abstract state \((\rho^\sharp, D) \in \text{State}^\sharp\) is a pair of:
- an abstract environment \(\rho^\sharp : \text{Var} \rightarrow \text{Val} \cup \{\top\}\), where \(\top\) represents an arbitrary value,
- a dependency function \(D : \text{Var} \rightarrow \mathcal{P}(\text{Var})\) such that the computation of \(x\) depends on the set of variables \(D(x)\).
Abstract states are equipped with a partial order \(\sqsubseteq\) obtained as the Cartesian product of the ordering of abstract values (\(v \sqsubseteq v'\) iff \(v = v'\) or \(v' = \top\)) and the point-wise lifting of set inclusion on \(\mathcal{P}(\text{Var})\). The join operator \(\sqcup\) is the least upper bound induced by \(\sqsubseteq\).
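Assuming all environments share the same variable domain, the ordering and join can be sketched as follows (hypothetical Python encoding):

```python
TOP = object()  # the 'arbitrary value' element

def join_val(v1, v2):
    """Join of abstract values: equal values stay, otherwise TOP."""
    return v1 if v1 == v2 else TOP

def leq_val(v1, v2):
    """v1 below v2 iff v1 = v2 or v2 = TOP."""
    return v1 == v2 or v2 is TOP

def join_state(s1, s2):
    """Pointwise join of abstract states (rho, D)."""
    rho1, D1 = s1
    rho2, D2 = s2
    return ({x: join_val(rho1[x], rho2[x]) for x in rho1},
            {x: D1[x] | D2[x] for x in D1})

s = join_state(({"x": 1, "y": 2}, {"x": {"y"}, "y": set()}),
               ({"x": 1, "y": 3}, {"x": {"z"}, "y": set()}))
assert s == ({"x": 1, "y": TOP}, {"x": {"y", "z"}, "y": set()})
```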
The static analysis is specified in Figure 6 as a syntax-directed set of inference rules that generate constraints over abstract states. The static analysis of a program \(S\) is defined as a function between abstract states, written \((S, s) \Downarrow^\sharp s'\), such that \(s'\) is the least abstract state solution to the constraints. The intended meaning is that \(s'\) is a valid abstraction of the result obtained when running program \(S\) in an initial state that is modelled by an abstract state \(s\).
The [AIfComb] rule combines the states after the analysis of the two branches in case the test of the if-statement can be evaluated in the given abstract environment. The state combination \(\langle B, D, s_1, s_2 \rangle_\rho^\sharp\) from Figure 7 is the abstraction of the combination of the states from the executed and non-executed branches that we defined for the hybrid monitor in Section IV. In disjunctive normal form, the logical formula for obtaining \(K''\) is of the form
\[
(\delta(B)_\rho^K \land K'(x)) \lor \left( \bigwedge_{y \in D(x)} K(y) \land K'(x) \right) \lor \ldots \quad (1)
\]
In the rest of this section, we explain why the set \(D'\) (see Figure 7) represents an under-approximation of this formula. Note that the static analysis can safely ignore the other terms of the formula, here represented by \(\ldots\). If the values of \(x\) after the two branches may differ, we combine the knowledge possibly obtained by reading \(x\) in the executed branch with the knowledge of the test \(B\). If the values of \(x\) are the same after both branches, then \(x\) gets the knowledge either from the executed branch and the test \(B\), or just from both branches.
The state combination for the static analysis in Figure 7 uses auxiliary sets of variables: \(D_{true}(x)\) and \(D_{both}(x)\). The set \(D_{true}(x)\) contains the variables of the test \(B\) together with the variables on which \(x\) depends after the potential execution of the true branch. This set reflects the same idea that was used in the state combination of the hybrid monitor: it corresponds to the knowledge in the formula \((\delta(B)_\rho^K \land K'(x))\). The set \(D_{both}(x)\) collects the variables on which the computation of \(x\) depends in both branches. This set corresponds to the knowledge in the formula \(\bigwedge_{y \in D(x)} K(y) \land K'(x)\).
Now, when we construct the new dependency set \(D'(x)\): in case the values of \(x\) are different, the set \(D_{true}(x)\) is taken. This case is a straightforward translation of the corresponding condition in the state combination of the hybrid monitor. In case the values of \(x\) are the same, we would like to approximate the knowledge computed in formula (1) by a set of variables.
\[\langle B, D, (\rho_t, D_t), (\rho_f, D_f) \rangle_\rho^\sharp = (\rho_t, D'), \text{ where}\]
\[
D'(x) = \begin{cases}
D_{true}(x) \mathbin{\nabla} D_{both}(x) & \text{if } \rho_t(x) = \rho_f(x) \\
D_{true}(x) & \text{otherwise}
\end{cases}
\]
\[
D_{true}(x) = \kappa^D(B) \cup D_t(x) \qquad
D_{both}(x) = D_f(x) \cup D_t(x)
\]
\[
X \mathbin{\nabla} X' = \begin{cases}
X & \text{if } X \subseteq X' \\
X' & \text{otherwise}
\end{cases}
\]
Figure 7: State combination for the [AIfComb] rule of the static analysis from Figure 6.
Since the resulting set of variables \(D'(x)\) will later be used by the hybrid monitor to compute a conjunction \(\bigwedge_{y \in D'(x)} K(y)\), we cannot approximate the disjunction by a set of variables. Hence, we choose one of the two sets, either \(D_{true}\) or \(D_{both}\), whichever is more precise.
Notice that if \(X \subseteq X'\), then the formula computed by the hybrid monitor for the set \(X\) is weaker than the formula for the set \(X'\), because it is a conjunction over a smaller set of variables. Hence, the leakage computed from \(X\) is smaller than the leakage computed from \(X'\).
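The choice between \(D_{true}\) and \(D_{both}\) can be sketched as follows (hypothetical Python encoding of Figure 7; \texttt{vars\_B} stands for the dependency set of the test \(B\), and treating \(\top\) as "values may differ" is an assumption):

```python
TOP = object()

def nabla(X, Xp):
    """Pick the smaller (hence more precise) of two dependency sets."""
    return X if X <= Xp else Xp

def combine(vars_B, rho_t, D_t, rho_f, D_f):
    """State combination for [AIfComb]: rho_t is kept, and for each x
    the new dependency set D'(x) is built from D_true and D_both."""
    D = {}
    for x in rho_t:
        D_true = vars_B | D_t[x]          # test variables + executed branch
        D_both = D_t[x] | D_f[x]          # dependencies of both branches
        same = rho_t[x] == rho_f[x] and rho_t[x] is not TOP
        D[x] = nabla(D_true, D_both) if same else D_true
    return rho_t, D

# Example from the text: branches x:=y and x:=z where y = z = 1.
rho, D = combine(vars_B={"y"},
                 rho_t={"x": 1}, D_t={"x": {"y"}},
                 rho_f={"x": 1}, D_f={"x": {"z"}})
assert D["x"] == {"y"}   # D_true is a subset of D_both, so it is kept
```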
We prove the soundness requirement for the static analysis presented in Figure 6.
**Theorem 4 (Static Analysis Soundness).** The static analysis \(\Downarrow^\sharp\) of Figure 6 is sound according to Definition 6.
The proof of this theorem is part of the Coq development [27].
Consider the non-interfering program 4 from Table I: when \(A\) is true, \(x = 1\), and otherwise \(x = 1\) as well because \(y = 1\). The knowledge about the value of \(C\) is contained in \(z\); however, it does not influence the value of \(x\) because there is no execution where \(z\) would be assigned to \(x\). To explain the static analysis, let us consider the case when \(A\) is true. The static analysis starts from the branch \(\texttt{if } (y = 1) \texttt{ then } x := y \texttt{ else } x := z\), and since the test \(y = 1\) can be evaluated, the rule [AIfComb] is applied. The resulting states from the branches \(x := y\) and \(x := z\) are combined according to the state combination of the static analysis, where the auxiliary sets are \(D_{true}(x) = \{y\}\) and \(D_{both}(x) = \{y, z\}\). Here, the value of \(z\) does not influence the new dependency set because \(D_{true}(x) \subset D_{both}(x)\). Hence, \(D'(x) = D_{true}(x) = \{y\}\).
Then, the hybrid monitor combines the results of the static analysis and of the executed branch in the [IfThen] rule. In this rule, \(D(x) = \{y\}\). Since our monitor does not track the security context, the knowledge in \(x\) after the execution of the branch \(\texttt{skip}\) is \(K'(x) = tt\), and \(y\) does not contain any knowledge: \(K(y) = tt\). Therefore, \(K''(x) = (\delta(B)_{\rho}^K \lor K(y)) \land (\overline{\delta(B)_{\rho}^K} \lor K'(x)) = tt\). The formula \(tt\) corresponds to no knowledge, and the leakage of this program is 0 bits.
This example clearly shows that our hybrid monitor recognises the non-interference of this program, whereas other dynamic and hybrid information flow techniques would mark \(x\) with a “high” security label, since it has been assigned under the security context of a secret condition \(A\).
**VII. A HIERARCHY OF HYBRID MONITORS**
Next, we examine three variants of the monitor from the previous section, obtained by modifying the constant propagation and dependency analyses. These modifications are defined by replacing the rules for assignment and conditionals in the definition of the static analyses (Figure 9). We shall name each monitor HM\((X{+}Y)\), where \(X\) is the name of the rule used for assignment and \(Y\) the rule used for conditionals. The systematic way in which these monitors are derived makes it easy to organise them into a hierarchy of relative precision, depicted in Figure 8. All the precision theorems in this section are a direct consequence of Theorem 2, which states that a more precise static analysis induces a more precise hybrid monitor. The proofs of the theorems have been left out for lack of space; see [27].
Table I presents examples of two programs that leak some information about the secrets and two non-interfering programs. To simplify the examples, the secrets \(A\), \(B\) and \(C\) denote the tests on the browser features that were represented by \(f \bowtie n\) in the original syntax. Notice that program 1 represents the original example of fingerprinting code from Figure 8. We only substitute the test (name = "FireFox") with \(A\) and the test (fonts = fontsSet1) with \(B\).
These programs illustrate the difference in precision of hybrid monitors. For every monitor and program, we specify a formula that represents the knowledge in x at the end of the execution (when A, B, and C are true) and the corresponding amount of leakage in bits computed from the obtained formula.
To estimate the leakage, we assume the following probabilities for A, B, and C to be true: \(P(A) = 0.21\) (test on the “FireFox” browser name), \(P(B) = 0.01\) (test on a concrete list of fonts), \(P(C) = 0.14\) (test on a time zone). We also assume that the browser features represented by A, B, and C are independent. We then compute the probabilities of events in the usual way, for example
\[
P(A \land B) = P(A) \cdot P(B) = 0.0021
\]
\[
P(A \lor B \lor C) = 1 - P(\neg A)P(\neg B)P(\neg C) = 0.327
\]
The leakage is then computed as self-information (the negative logarithm of the probability of an event); for example, the leakage of the formula \(A \land B\) is \(-\log_2 P(A \land B) = -\log_2 0.0021 = 8.89\) bits, while that of \(A \lor B \lor C\) is \(-\log_2 P(A \lor B \lor C) = 1.61\) bits.
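The arithmetic can be checked directly with the probabilities assumed above (Python sketch; the constants 8.89 and 1.61 are the rounded values quoted in the text):

```python
from math import log2

P_A, P_B, P_C = 0.21, 0.01, 0.14   # assumed independent feature probabilities

def self_information(p):
    """Leakage (in bits) of observing an event of probability p."""
    return -log2(p)

p_conj = P_A * P_B                                # P(A and B)
p_disj = 1 - (1 - P_A) * (1 - P_B) * (1 - P_C)    # P(A or B or C)

assert round(p_conj, 4) == 0.0021
assert round(p_disj, 3) == 0.327
assert abs(self_information(p_conj) - 8.89) < 0.01
assert abs(self_information(p_disj) - 1.61) < 0.01
```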
Notice that Table I also presents two non-interfering programs, 3 and 4. These programs illustrate the difference in precision of hybrid monitors. When a monitor computes that the output variable x contains 0 bits of information, the monitor recognises the program as non-interfering, and is hence more precise than monitors that compute a non-zero leakage.
A. The HM(Val+Sim) monitor
The precise treatment of conditionals in the static analysis from Figure 6 attempts to determine the actual value of the Boolean test by the constant propagation analysis. A simpler analysis would abandon this idea and just assume that both branches might be executed. Instead of the [AIfTop], [AIfComb] and [AIfElse] rules, this analysis uses one simple rule for if-statements:
\[
\text{[AIfSim]}\quad \frac{(S_1, s) \Downarrow^\sharp s_1 \quad (S_2, s) \Downarrow^\sharp s_2}{(\texttt{if } B \texttt{ then } S_1 \texttt{ else } S_2, s) \Downarrow^\sharp s_1 \sqcup s_2}
\]
To illustrate the difference in precision, consider program 4 from Table I. This program is non-interfering, and our HM(Val+Comb) monitor correctly computes 0 bits of leakage (see Section VI for more details).
We consider the case when \(A\) and \(C\) are true. Then, \(z\) is updated to 1 and contains the knowledge \(K(z) = C\). The static analysis of the HM(Val+Sim) monitor ignores the Boolean test \((y = 1)\), and hence computes the set of variables \(D(x) = \{y, z\}\). The [IfThen] rule of the hybrid monitor computes
\[
K''(x) = (\delta(A)_\rho^K \lor (K(y) \land K(z))) \land (\overline{\delta(A)_\rho^K} \lor K'(x)) = (A \lor (tt \land C)) \land (\neg A \lor tt) = A \lor C
\]
The amount of leakage in this case is 0.83 bits, which clearly shows that the HM(Val+Sim) monitor is less precise than the HM(Val+Comb) monitor, which computes 0 bits of leakage for this program execution.
**B. The HM(Top+Sim) monitor**
Le Guernic et al. [20] proposed a hybrid information flow monitor that uses static analysis for non-executed branches. The idea of the analysis is to compute a set \(\mathit{modified}\) of variables that might be assigned to some value in the non-executed branch. Then, all the variables in \(\mathit{modified}\) are tagged with a “high” label in case the test of the if-statement contains some “high” (secret) variables. In a later work, Russo and Sabelfeld [28] defined a generic framework of hybrid monitors where such syntactic checks are proposed as well.
To compare our monitors with the monitor of Le Guernic et al. [20], we propose a static analysis that sets the abstract value of a variable to \(\top\) as soon as it gets assigned. By doing so, \(\rho^\sharp(x) = \top\) means that \(x \in \mathit{modified}\). Concretely, we take the static analysis from the HM(Val+Sim) monitor and substitute its assignment rule with the following [AAssignTop] rule:
\[
\text{[AAssignTop]} \quad \frac{\rho' = \rho[x \mapsto \top] \qquad D' = D[x \mapsto \kappa^D(E)]}{(x := E, (\rho, D)) \Downarrow^\sharp (\rho', D')}
\]
With this static analysis in mind, the idea of syntactic checks is already covered by our generic hybrid monitor. Whenever \(x\) is in \(\mathit{modified}\), its abstract value will be \(\top\), and hence, according to the state combination procedure in Figure 7, the knowledge of the test will be added to the knowledge of \(x\).
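The modified set is a purely syntactic computation; over a toy statement AST it can be sketched as follows (hypothetical encoding, not the analysis of [20] itself):

```python
# Toy AST: ("skip",), ("assign", x, e), ("seq", s1, s2),
#          ("if", b, s1, s2), ("while", b, s)
def modified(stmt):
    """Variables possibly assigned in a statement."""
    tag = stmt[0]
    if tag == "skip":
        return set()
    if tag == "assign":
        return {stmt[1]}
    if tag == "seq":
        return modified(stmt[1]) | modified(stmt[2])
    if tag == "if":
        return modified(stmt[2]) | modified(stmt[3])
    if tag == "while":
        return modified(stmt[2])
    raise ValueError(tag)

branch = ("if", "B", ("assign", "x", 1),
          ("seq", ("assign", "y", 2), ("skip",)))
assert modified(branch) == {"x", "y"}
```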
**Theorem 6.** The HM(Val+Sim) monitor is more precise than the HM(Top+Sim) monitor.
All the programs from Table I illustrate that the HM(Val+Sim) monitor evaluates the leakage more precisely than the HM(Top+Sim) monitor.
**C. The HM(Top+Comb) monitor**
In a later work, Le Guernic [12] proposed a more general framework of hybrid monitors that use static analysis. One of the novelties of this work is that the static analysis may ignore the branch that will not be executed (according to the current environment) if the test before the branch does not contain any secret variables. The formalization of this principle in our approach is a static analysis that uses the [AAssignTop] rule for assignments and the [AIfComb] rule for if-statements.
**Theorem 7.** The HM(Top+Comb) monitor is more precise than the HM(Top+Sim) monitor.
In his PhD thesis [12], Le Guernic has proven that if the monitor on which we base the HM(Top+Sim) monitor concludes that a variable \(x\) does not contain secret information, then the monitor similar to HM(Top+Comb) also concludes that \(x\) does not contain any secret information. Our framework generalises this proof, since our notion of precision is based on the amount of knowledge in a variable. Programs 1 and 3 in Table I illustrate this difference in precision.
**Theorem 8.** The HM(Val+Comb) monitor is more precise than the HM(Top+Comb) monitor.
To illustrate this precision result, consider program 4 from Table I. The static analysis used by the HM(Top+Comb) monitor marks all assigned variables with \(\top\) because of the syntactic nature of the [AAssignTop] rule. Hence, \(\rho^\sharp(x) = \top\), and so \(x\) will contain some knowledge about \(A\).
Notice that programs 1 and 2 from Table I illustrate that the HM(Top+Comb) and HM(Val+Sim) monitors are incomparable in the sense of relative precision.
**VIII. Discussion and Related Work**
**A. Hybrid Information Flow Monitoring**
Hybrid monitors for information flow control that combine static and dynamic techniques have recently become popular [14], [26], [20], [28]. One of the first techniques was proposed by Le Guernic et al. [20], where the static analysis only performs syntactic checks on non-executed branches. This approach fits into our framework as the HM(Top+Sim) monitor and is proven to be less precise than the other monitors we propose. Russo and Sabelfeld [28] introduced a generic framework of hybrid monitors, where non-executed branches are also analysed only syntactically. In a follow-up work, Le Guernic [14] presented a more permissive static analysis that ignores branches depending only on public variables. Inspired by this approach, we introduced the HM(Top+Comb) monitor, which is proven to be less precise than the HM(Val+Comb) monitor.
Devriese and Piessens [10] proposed the secure multi-execution (SME) technique, which falls outside the static, dynamic, and hybrid classification. The basic principle is to execute the program once for every security level while filtering inputs and outputs, thus enforcing non-interference. The approach was shown to be efficient in a web browser environment [1], [3] when the security lattice consists of two levels: secret and public.
In our setting, each secret variable (browser feature) has a different security level (different knowledge), and a combination of variables yields a new security level. In this case the security lattice grows exponentially (with the size of a Boolean formula), and the SME approach would not be an efficient solution.
B. Protection against Excessive Leakage
Our hybrid monitor computes an over-approximation of the knowledge extracted from the observation of the program output. To protect herself against excessive leakage, a user can set a threshold on the maximum number of bits she agrees to leak. The hybrid monitor then estimates the leakage and either halts the program execution or performs other actions so that the leakage does not exceed the threshold.
One such action is suppressing the program output; another possibility is editing the output. Such enforcement actions can be modelled by edit automata, which are designed to enforce security policies without halting the program.
In our current model, a program only outputs a single final value and we assume that non-termination is not observable. In this setting, the hybrid monitor can turn a potentially excessive leakage into an absence of leakage by halting the program just before the output statement. As a consequence, our hybrid monitor can enforce termination-insensitive non-interference by suppressing any output whose leakage is non-zero.
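A minimal sketch of this enforcement (hypothetical helper names; in the single-output model, suppressing the output reduces the observed leakage to zero):

```python
def guarded_output(value, leakage_bits, threshold_bits):
    """Halt just before the output statement when the estimated
    leakage exceeds the user's threshold; None means 'suppressed'."""
    if leakage_bits > threshold_bits:
        return None
    return value

assert guarded_output(42, leakage_bits=8.9, threshold_bits=1.0) is None
assert guarded_output(42, leakage_bits=0.0, threshold_bits=1.0) == 42
```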
If termination is observable, halting the program might itself leak information. To cope with this issue, Mardziel et al. perform a worst-case static analysis that computes an over-approximation of the leakage of all executions. If the over-approximation is above the leakage threshold, the program is not executed. In our fingerprinting context, this approach would likely be very pessimistic, as the program would not be executed as soon as there exists a single user for which the program could learn too much information. Future work will investigate how our hybrid monitor can estimate the information leakage due to halting the execution, and how to decrease it below the threshold (e.g., by lying about the browser configuration).
Another important property to consider is correction soundness, introduced by Le Guernic. A monitor is correction sound if, on two executions that agree on public inputs, the low outputs get the same security level at the end of the execution. If a monitor enforcing non-interference is not correction sound, it introduces new information leaks due to different enforcement reactions to different secret inputs. Our hybrid monitor does not obey this property: even when the program is non-interfering, there might be some secret inputs for which the output would, according to our monitor, contain some information. The question of correction soundness is worth a deeper investigation for monitors that track the knowledge flow of the program.
C. Quantitative Information Flow Analysis
There are several approaches to quantifying the information learned by a public observer about the secret program inputs. Existing work based on static analysis aims at quantifying the information flow of a program and therefore relies on metrics that summarise the information flow over all executions. Our hybrid monitoring technique aims at estimating the leakage of a single execution. As the leakage can be very different from one execution to another, an advantage of our hybrid monitoring technique is that it can potentially take more informed counter-measures based on the estimated amount of leaked information.
Clarkson, Myers, and Schneider define the belief of the observer about secret inputs as a probability distribution, and show how to refine this belief by observing concrete executions of the program. A strength of this model is that it accommodates inaccurate beliefs. Our threat model is simpler and assumes that the initial belief of the attacker, i.e., the probability distribution of browser properties, is accurate. Clarkson, Myers and Schneider also show that self-information is the adequate notion to quantify the amount of information leaked by the observation of the output of a single execution. Based on this belief tracking approach, Mardziel et al. propose an enforcement mechanism for knowledge-based policies. The knowledge of the observer is a probability distribution over secret variables, and the static analysis of the program makes a decision to run or reject the program. In case there exists a value of some secret variable that may increase the knowledge of the observer above some predefined threshold, the program is rejected. The approach is static and could reject a program even though a specific concrete execution would actually leak very little information. Also, Mardziel et al. keep a history of the knowledge gained by the observer; this knowledge is updated whenever the observer sees more program outputs. Modelling a sequence of outputs is currently out of reach of our model. We shall consider such an extension and the incorporation of history in future work.
Backes et al. compute the number of equivalence classes and their sizes by statically analysing the program, and evaluate the leakage using entropy-based measurements. Like us, they use a symbolic representation of equivalence classes, and leakage computation also requires model enumeration. Our hybrid monitor restricts the expressiveness of the logic used to represent equivalence classes symbolically. This is done at the cost of precision: for instance, using arithmetic reasoning, it is straightforward to deduce that the expression $x - x$ does not leak any knowledge, whereas our hybrid monitor considers that $\kappa(x - x) = \kappa(x)$. However, the advantages are twofold: the hybrid monitor is fast, which is mandatory for online monitoring, and it is language agnostic, which allows it to deal with arbitrary language operators.
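The coarse abstraction \(\kappa\) can be sketched as follows (hypothetical encoding where knowledge is a set of secrets): \(\kappa\) only looks at the variables of an expression, so \(x - x\) is treated like \(x\).

```python
def vars_of(expr):
    """Free variables of a tiny expression AST: strings are variables,
    ints are literals, ('-', e1, e2) is subtraction."""
    if isinstance(expr, str):
        return {expr}
    if isinstance(expr, int):
        return set()
    _, e1, e2 = expr
    return vars_of(e1) | vars_of(e2)

def kappa(expr, K):
    """Knowledge of an expression: the union of the knowledge of its
    variables, with no arithmetic simplification."""
    out = set()
    for v in vars_of(expr):
        out |= K[v]
    return out

K = {"x": {"A"}}                  # x carries knowledge about secret A
assert kappa(("-", "x", "x"), K) == {"A"}   # x - x still counts as leaking A
assert kappa(7, K) == set()
```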
Köpf and Rybalchenko [15] bound the leakage of a program by combining over- and under-approximations of the leakage of randomised concrete executions. Lower bounds of leakage are obtained by instantiating a relational static analysis with concrete values; upper bounds are obtained by a symbolic backward analysis of the concrete execution path. Our hybrid monitor uses a tighter combination of static and dynamic analysis. In terms of precision, the techniques are not comparable. A symbolic backward analysis of a concrete execution path would have the precision of a purely dynamic monitor that does not abstract the knowledge of expressions. Our hybrid monitor might be less precise because it abstracts the knowledge of expressions. However, it gains precision over a symbolic execution because its static analysis explores (infinitely) many paths and refines the leakage in case it can prove they produce the same output.
To the best of our knowledge, the only dynamic analysis for quantitative information flow was proposed by McCamant and Ernst [23]. It uses a channel-capacity metric for information leakage. Channel capacity maximises over all input distributions and hence puts an upper bound on the amount of leaked information. This approach is not precise enough in our setting, where the probability distributions are known a priori.
IX. Conclusions
Fingerprinting of browsers is a technique for tracking users on the web without storing data in their browser. By running fingerprinting scripts, web trackers can learn about specific features of the user’s browser configuration and thereby effectively identify the user (more precisely, her browser). However, the effectiveness of a script highly depends on the web browser configuration.
We propose to evaluate the amount of information a web tracker learns by observing the output of a fingerprinting script for a particular browser configuration. To quantify the leakage precisely for a specific user, i.e., for a specific browser configuration, we propose a hybrid analysis technique computing a symbolic representation of the knowledge that a script obtains about the browser configuration.
We have developed a generic framework for modeling hybrid monitors that are parametrized by static analyses. Most notably, the framework proposes a generic soundness requirement on the static analysis that is sufficient to prove the soundness of the derived hybrid monitor. This generic framework can also be used to prove the relative precision of hybrid information flow monitors.
We have instantiated the generic monitor with a combined static constant propagation and dependency analysis. This analysis provides more precise results for non-executed branches than in previous works. Moreover, our symbolic representation of knowledge allows us to benefit from the constant propagation analysis and to model the tracker’s knowledge about a browser configuration more precisely. Concretely, our approach gains precision in those cases where a tracker will observe the same value for a given variable after the execution of either of the branches. We have proved that our monitor is more precise than the other hybrid information flow monitors found in the literature.
The entire theory has been modelled and verified [27] using the Coq proof assistant. Using Coq has been very productive for exploring this rather new area on the frontier between security monitors and static analysis in a semantically correct way.

For further work, Section VIII already discussed extensions towards correction soundness, threshold-based enforcement and the security guarantee that it can provide. In addition, our hybrid analyses are defined for a simple programming language with a focus on the principles behind the mechanism. We would have to scale to languages such as JavaScript for real deployment. Hedin and Sabelfeld [13] have shown that it is possible to analyse JavaScript using a purely dynamic information flow technique. Their system seems an ideal candidate to instrument with our monitor in order to track and quantify the information a tracker can deduce about possible configurations by observing the program outputs.
Acknowledgements: The authors are grateful to Boris Köpf, Alan Schmitt and the anonymous reviewers for valuable comments on earlier versions of this paper. We also thank Michael Hicks for fruitful discussions about this work.
An enterprise integration system is coupled to a number of legacy data sources. The data sources each use different data formats and different access methods. The integration system includes a back-end interface configured to convert input data source information to input XML documents and to convert output XML document to output data source information. A front-end interface converts the output XML documents to output HTML forms and the input HTML forms to the XML documents. A middle tier includes a rules engine and a rules database. Design tools are used to define the conversion and the XML documents. A network couples the back-end interface, the front-end interface, the middle tier, the design tools, and the data sources. Mobile agents are configured to communicate the XML documents over the network and to process the XML documents according to the rules.
21 Claims, 10 Drawing Sheets
U.S. PATENT DOCUMENTS
6,345,259 B1 * 2/2002 Sandoval 705/7
6,401,132 B1 * 6/2002 Bellwood et al. 709/246
6,424,979 B1 * 7/2002 Livingston et al. 715/511
6,446,110 B1 * 9/2002 Lection et al. 709/203
6,519,653 B1 * 2/2003 Glass 719/317
6,678,715 B1 * 1/2004 Ando 718/105
OTHER PUBLICATIONS
* cited by examiner
```java
import org.w3c.dom.Document;

public interface DataAccessService {
    /**
     * Get a document from a data source.
     * @param id The id of the document. The id should at least contain the
     *           document class and unique document id. The id may also contain
     *           information specific to the back-end data source, such as further
     *           processing instructions or identification information.
     * @return A DOM Document object containing the XML data.
     */
    public Document get(String id);

    /**
     * Update an existing document in the data source.
     * @param id The id of the document. The id should at least contain the
     *           document class and unique document id. The id may also contain
     *           information specific to the back-end data source, such as further
     *           processing instructions or identification information.
     * @param update The new document to commit to the data source.
     */
    public void put(String id, Document update);

    /**
     * Add a new document to the data source.
     * @param id A partial id for the document. The id should contain the document
     *           class. A unique document id will be generated for the document and
     *           returned by the method. The id may also contain information specific
     *           to the back-end data source, such as further processing instructions
     *           or identification information.
     * @param doc The document to add.
     * @return The generated id of the newly added document.
     */
    public String add(String id, Document doc);

    /**
     * Delete a document.
     * @param id The id of the document. The id should at least contain the
     *           document class and unique document id. The id may also contain
     *           information specific to the back-end data source, such as further
     *           processing instructions or identification information.
     */
    public void delete(String id);
}
```
FIG. 5
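A minimal in-memory implementation can illustrate the contract of this interface. Everything below is a sketch: the patent supplies no implementation, and the class name, id scheme, and helper method are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

import org.w3c.dom.Document;

// Hypothetical in-memory data source, sketching the DataAccessService contract.
public class InMemoryDataAccessService {
    private final Map<String, Document> store = new HashMap<>();

    public Document get(String id) {
        return store.get(id);
    }

    public void put(String id, Document update) {
        store.put(id, update);
    }

    public String add(String partialId, Document doc) {
        // Generate a unique document id under the given document class.
        String id = partialId + "/" + UUID.randomUUID();
        store.put(id, doc);
        return id;
    }

    public void delete(String id) {
        store.remove(id);
    }

    // Helper to build an empty DOM document with a single root element.
    public static Document newDocument(String rootName) {
        try {
            Document d = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            d.appendChild(d.createElement(rootName));
            return d;
        } catch (ParserConfigurationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A real service bridge would translate these calls to the legacy system's API instead of a map, but the add/get/put/delete round trip is the same.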
FIG. 7 shows the flow of a get request:

1. Get request by agent
2. Determine identity of caller
3. Identify document type
4. Retrieve group-specific cache for document type
5. Is requested document in cache?
   - Yes: return cached document
   - No: continue
6. Locate SQL-XML mapping for document type
7. Construct SELECT statement
8. Retrieve database connection associated with agent's group
9. Execute statement
10. Walk result set
11. Extract fields
12. Build XML document
13. Add document to group-specific cache
14. Return document
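The get-request flow above can be sketched in simplified, hypothetical form. The group-specific cache and the legacy data source are replaced by plain maps, and the XML document is built as a string; class and method names are illustrative, not from the patent.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the get-request flow: consult the group-specific cache
// first, otherwise "query" a (here: map-based) data source, build an XML
// document from the resulting fields, and cache the result.
public class GetRequestFlow {
    private final Map<String, String> cache = new HashMap<>();               // group-specific document cache
    private final Map<String, Map<String, String>> database = new HashMap<>(); // stand-in for the legacy source

    public void insertRecord(String id, Map<String, String> fields) {
        database.put(id, fields);
    }

    public String get(String id) {
        // Consult the cache for this document.
        String cached = cache.get(id);
        if (cached != null) {
            return cached;
        }
        // "Execute" the query and walk the result set.
        Map<String, String> fields = database.get(id);
        if (fields == null) {
            return null;
        }
        // Extract fields and build the XML document.
        StringBuilder xml = new StringBuilder("<customer>");
        for (Map.Entry<String, String> e : fields.entrySet()) {
            xml.append('<').append(e.getKey()).append('>')
               .append(e.getValue())
               .append("</").append(e.getKey()).append('>');
        }
        xml.append("</customer>");
        String doc = xml.toString();
        // Add the document to the group-specific cache.
        cache.put(id, doc);
        return doc;
    }
}
```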
FIG. 8 shows the flow of an update request:

1. Update request by agent
2. Determine identity of caller
3. Identify document type
4. Locate update mapping for document type
5. Construct UPDATE statement
6. Retrieve database connection associated with agent's group
7. Execute statement
8. Did the update succeed?
   - Yes: add document to group-specific cache and return
   - No: return error
ENTERPRISE INTEGRATION SYSTEM
FIELD OF THE INVENTION
This invention relates generally to computerized applications, databases, and interfaces, and more particularly to integrating applications, databases, and interfaces having different formats, contexts, and designs.
BACKGROUND OF THE INVENTION
Computer and computer-related technology have enabled the use of computers in numerous enterprise functions. Almost every facet of a modern enterprise is supported by computer systems in some manner. Computerization is a necessity to allow an enterprise to remain functional and competitive in a constantly changing environment.
Computer systems are used to automate processes, to manage large quantities of information, and to provide fast and flexible communications. Many enterprises, from sole proprietorships, small stores, professional offices and partnerships, to large corporations have computerized their functions to some extent. Computers are pervasive, not only in business environments, but also in non-profit organizations, governments, and educational institutions.
Computerized enterprise functions can include billing, order-taking, scheduling, inventory control, record keeping, and the like. Such computerization can be accomplished by using computer systems that run software packages. There are many application software packages available to handle a wide range of enterprise functions, including those discussed above.
One such package is the SAP R/2™ System available from SAP America, Inc., 625 North Governor Printz Blvd., Essington, Pa. 19029. The SAP R/2 System is a software package designed to run on IBM or compatible mainframes in a CICS (Customer Interface Control System) or IMS (Information Management System) environment. For example, SAP may use CICS to interface with user terminals, printers, databases, or external communication facilities such as IBM’s Virtual Telecommunications Access Method (VTAM).
SAP is a modularized, table driven application software package that executes transactions to perform specified enterprise functions. These functions may include order processing, inventory control, and invoice validation; financial accounting, planning, and related managerial control; production planning and control; and project accounting, planning, and control. The modules that perform these functions are all fully integrated with one another.
Another enterprise area that has been computerized is manufacturing. Numerous manufacturing functions are now controlled by computer systems. Such functions can include real-time process control of discrete component manufacturing (such as in the automobile industry), and process manufacturing (such as chemical manufacturing through the use of real-time process control systems). Directives communicated from the computer systems to the manufacturing operations are commonly known as work orders. Work orders can include production orders, shipping orders, receiving orders, and the like.
However, the computerization of different functions within a single enterprise has usually followed separate evolutionary paths. This results in incompatibility between the different systems. For example, transactions from a system for one function may have a context and a format that are totally incompatible with the context and format of another function. Furthermore, as enterprises grow through mergers and acquisitions, the likelihood of inheriting incompatible systems increases. Consequently, the legacy systems cannot provide all the information necessary for effective top level management and control.
As an additional complexity, enterprise systems need user interfaces for front-end operations. For example, in the healthcare industry, administrative staff and health care providers need reliable access to patient records. If the healthcare enterprise has evolved by a series of mergers, a reception desk populated with half a dozen different terminals, each accessing a different patient database and a different accounting system, is a near certainty, and service and profitability suffer.
Generic computerized solutions that offer an efficient, automated way to integrate an enterprise’s various computerized systems are difficult to implement. Another conventional solution is to implement a custom, computerized interface between the various systems. However, these custom solutions are usually tailored to a specific enterprise environment. As a result, the tailored solutions are not portable into other situations without major modifications. Additionally, these solutions are costly to maintain over time because of inherent difficulties in accommodating change.
Conventional solutions that meet all of the needs for collecting, retrieving, and reporting data in a complex enterprise do not exist. For example, the DASS™ system, available from SAP AG of Walldorf, Germany, is intended to automate manufacturing functions. DASS receives information from the SAP R/2 package described above. However, DASS does not appear to provide a generic solution to connect a computerized business system to a computerized manufacturing system.
FIG. 1a shows an example legacy enterprise system 10. The legacy system includes as subsystems a SAP system 11, an Oracle™ database 12, one or more legacy applications 13, Lotus Notes™ 14, a Web server 15, and user interfaces 20. The system 10 might also permit access to some functions by a mobile computer (laptop) 30 via a dial-up communications link 40.
More than likely, the legacy system 10 will exhibit one or more of the following problems. All sub-systems cannot communicate with every other sub-system because each sub-system has its own application programming interfaces (APIs). Real-time data interchange among all of the sub-systems may be impossible or extremely difficult because each sub-system stores and views data in a different way and uses different communication protocols. Modifying enterprise functions or adding automation for new functions is expensive. Each sub-system is developed with its own peculiar programming language. Users cannot always access all the data all of the time, particularly when the user is mobile. It is difficult to provide top level management with an abstraction of all system information.
What is needed is a system that can integrate various computer systems in an enterprise. The system needs to be able to convey transactional data between any number of databases regardless of their format, context, and access methodology. User interfaces to the databases need to be uniform. In addition, as enterprise functions change, new procedures and transactions must be accommodated in a minimal amount of time without having to redesign and reimplement any of the functional systems. The ideal enterprise integration system should be capable of adapting to any number of computerized functions in a modern complex enterprise.
SUMMARY OF THE INVENTION
The present invention is directed to a system and method for integrating computer systems found in many types of enterprises.
An enterprise integration system is coupled to a number of legacy data sources. The data sources each use different data formats and different access methods. The integration system includes a back-end interface configured for converting input data source information to input XML documents and for converting output XML documents to output data source information.
A front-end interface converts the output XML documents to output HTML forms and the input HTML forms to the XML documents. A middle tier includes a rules engine and a rules database. Design tools are used to define the conversion and the XML documents.
A network couples the back-end interface, the front-end interface, the middle tier, the design tools, and the data sources. Mobile agents are configured to communicate the XML documents over the network and to process the XML documents according to the rules.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a is a block diagram of a legacy enterprise system;
FIG. 1b is a block diagram of an integrated enterprise system according to the invention;
FIG. 2 is a block diagram of design tools used by the system of FIG. 1b;
FIG. 3 is a block diagram of XML data accesses according to the invention;
FIG. 4 is a block diagram of a back-end interface of the system of FIG. 1b;
FIG. 5 is a diagrammatic of a public interface of the back-end interface of FIG. 4;
FIG. 6 is a block diagram of pooled connections;
FIG. 7 is a flow diagram of a get request;
FIG. 8 is a flow diagram of an update request; and
FIG. 9 is a block diagram of an object of service bridge objects.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Introduction
Our invention provides a robust and scalable environment for integrating legacy enterprise computer systems. The invention integrates databases, transactions, and user interfaces having different formats, contexts, and designs, such as the sub-systems shown in FIG. 1a. We also provide for automated rules based processing.
At the core of our integration system, we utilize XML as a universal data encoding and interchange format. XML (Extensible Markup Language) is a flexible way for us to create common information formats and share both the format and the data on the Internet, the World Wide Web (WWW), intranets, and private local area networks. XML, developed by the World Wide Web Consortium (W3C), is “extensible” because, unlike HyperText Markup Language (HTML), the markup symbols of XML are unlimited and self-defining. XML is actually a simpler and easier-to-use subset of the Standard Generalized Markup Language (SGML), the standard for how to create a document structure. XML enables us to create customized “tags” that provide functionality not available with HTML. For example, XML supports links that point to multiple documents, as opposed to HTML links, which can reference just one destination each. These basic interfaces allow our integration system to view, modify and interact with linked legacy applications or legacy data sources.
System Architecture
As shown in FIG. 1b, our enterprise integration system includes the following main components: a back-end interface, a front-end interface, a middle tier, and design tools. The components are connected by a network and mobile agents carrying XML documents. The mobile agents are described in greater detail in U.S. patent application Ser. No. 08/965,716, filed by Walsh on Nov. 7, 1997, incorporated herein in its entirety by reference. As a feature, the agents can travel according to itineraries, and agents can “meet” with each other at meeting points to interchange information.
With our back-end interface, we enable read/write/modify access to existing (legacy) applications and data sources. The back-end interface maps (or translates) data from legacy formats into the XML format used by our enterprise integration system.
The front-end interface enables us to present information to users using standard presentation methodologies. The front-end interface also allows the user to modify information and to generate transactions to initiate enterprise processes or workflow. The front-end interface can be modified to meet changing requirements of the enterprise.
The middle tier uses our mobile agents to provide an infrastructure for highly flexible, robust and scalable distributed applications. The middle tier combines server technology with a customizable business rules engine and an application framework. The middle tier also provides for the deployment of disconnected applications for mobile users. That is, the middle tier allows the mobile user to perform tasks while disconnected from the system.
The design tools support the definition of XML document formats. The design tools also allow us to define mappings of the XML document formats and the legacy data formats, and to provide for the automated generation of forms for user presentation via the front-end interface. These components are now described in greater detail.
Back-End Interface
The back-end interface is composed of one or more service bridges. The service bridges provide highly efficient access to various legacy systems. Hereinafter, we will frequently call the legacy systems “data sources.” We do not care how the legacy systems are programmed, or how their applications are structured. That is, the back-end interface of our integration system provides a generic and uniform access interface to the highly diverse legacy systems without requiring special knowledge of internal, legacy interfaces of the linked systems.
Semantically, we model the back-end interface as an XML document publishing and management system. We see the data source as “publishing” or “serving” XML documents containing enterprise information. The back-end allows users to add, update, delete, browse, and search for documents in the data source. We chose this semantic model of interaction because it provides a generic interface through which many disparate legacy systems can be accessed.
A particular data source can manage multiple types of documents, such as customer accounts, purchase orders, work items, work lists, and the like. Any document in any data source can be uniquely identified and retrieved by a document identification (ID). In our implementation, and keeping within the spirit of XML, we use a document identification that is conceptually similar to a Web page Uniform Resource Locator (URL), although different in detail. As shown, the service bridges include a bridge framework (BF) 113 and a data source-specific runtime access component (RAC) 114. The service bridge is described in greater detail below with reference to FIGS. 4-9.
Bridge Framework
The bridge framework 113 provides generic high level access services for the back-end interface. The framework is relatively independent from the specifics of the linked legacy systems and is implemented with reusable code. The bridge framework performs user authentication, and identifies the user making a request of the data source. The bridge framework also identifies agents 101 making requests, and provides a means to map a generic user identity to specific "logon" information required by any of the legacy data sources, e.g., a username and a password. The bridge framework operates securely such that any sensitive data-source logon information, such as a clear-text password, is encrypted.
Connection Pooling and Document Management
The framework also manages objects involved in establishing and maintaining a connection to the data source, and provides for connection sharing and pooling. Connection pooling and sharing is used when the establishment of a connection or session with the data source is too expensive to perform on a per user basis. The connection pooling and sharing mechanism is based on "user groups." All members of a user group access a particular data source via a shared connection pool. The connections in this pool are established within the user context of a "pseudo-user account."
A pseudo-user account is a special data source account that represents a group of users instead of an individual user. Thus, if we have two user names, "johan@accounting" and "jim@accounting," the two accounting users both access the data source within the context of the accounting pseudo user account. Connection pooling may not be necessary for all back-end data sources, but certainly is required for relational database access.
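The group-based pooling scheme can be sketched as follows. The `Connection` type here is a placeholder (not `java.sql.Connection`), and the class and method names are invented for illustration; the point is that the user name's group part selects both the pseudo-user account and the shared pool.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of group-based connection pooling: users map to a pseudo-user group,
// and all members of a group share one pool of connections opened under the
// group's pseudo-user account.
public class GroupConnectionPool {
    public static class Connection {
        private final String account;
        Connection(String account) { this.account = account; }
        public String account() { return account; }
    }

    private final Map<String, Deque<Connection>> pools = new HashMap<>();

    // Map a user name like "jim@accounting" to its group's pseudo-user account.
    static String groupOf(String user) {
        return user.substring(user.indexOf('@') + 1);
    }

    public Connection acquire(String user) {
        String group = groupOf(user);
        Deque<Connection> pool = pools.computeIfAbsent(group, g -> new ArrayDeque<>());
        Connection c = pool.poll();
        // Open a new connection under the pseudo-user account if the pool is empty.
        return (c != null) ? c : new Connection(group);
    }

    public void release(String user, Connection c) {
        pools.get(groupOf(user)).push(c);
    }
}
```

With this scheme, "johan@accounting" and "jim@accounting" both draw connections from the same "accounting" pool.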
Document Caching
The bridge framework also provides a tunable caching facility to increase system performance. As stated above, a primary function of the back-end interface is to access legacy data and convert that data into the XML format. The bridge framework maintains XML documents in a cache 115 so that a subsequent request to retrieve the same data can bypass any data access or conversion work overhead by accessing the cached XML document.
The caching in our system is tunable. For a given type of document, a system administrator can specify caching parameters 116 such as whether caching should be enabled, a maximum lifetime before cache entries become stale, a maximum cache size, and whether the cache 115 should be persisted to disk and re-used at the next server startup. For document types that contain highly volatile data, caching can be disabled or cache entries can be set to expire quickly. For documents containing data that changes rarely, the caching parameters can be set aggressively to retain the documents in the cache.
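The two main tuning parameters, maximum entry lifetime and maximum cache size, can be sketched like this. The class is hypothetical (the patent does not show the cache code), and the current time is passed in explicitly to keep the sketch deterministic.

```java
import java.util.LinkedHashMap;

// Sketch of a tunable XML document cache: entries expire after a maximum
// lifetime, and the oldest entry is evicted once the cache exceeds its
// maximum size.
public class TunableDocumentCache {
    private static final class Entry {
        final String document;
        final long insertedAt;
        Entry(String document, long insertedAt) {
            this.document = document;
            this.insertedAt = insertedAt;
        }
    }

    private final long maxLifetime; // entries older than this are stale
    private final int maxSize;      // oldest entry evicted beyond this size
    private final LinkedHashMap<String, Entry> entries = new LinkedHashMap<>();

    public TunableDocumentCache(long maxLifetime, int maxSize) {
        this.maxLifetime = maxLifetime;
        this.maxSize = maxSize;
    }

    public void put(String id, String document, long now) {
        entries.put(id, new Entry(document, now));
        if (entries.size() > maxSize) {
            // Evict the oldest entry (insertion order of the LinkedHashMap).
            entries.remove(entries.keySet().iterator().next());
        }
    }

    public String get(String id, long now) {
        Entry e = entries.get(id);
        if (e == null) {
            return null;
        }
        if (now - e.insertedAt > maxLifetime) {
            entries.remove(id); // stale entry behaves as a cache miss
            return null;
        }
        return e.document;
    }
}
```

Setting a short lifetime models volatile document types; a large lifetime and size model aggressive caching of rarely changing data.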
Runtime Access Component
The runtime access component (RAC) 114 is specific for a particular data source 111. The RAC uses application programming interfaces (APIs) and structures of the legacy data source to access the data and to map the data into the XML format. The exact semantics of how the data are mapped to the XML format vary. For example, the mapping can be for widely used legacy databases, such as JDBC, JDBT, SAP, or SQL. An example JDBC implementation is described below with reference to FIG. 4. The RAC supports the following database access operations:
Query
The "query" operation retrieves a document from the data source. The caller supplies the id 104 of the document to fetch. The bridge service returns the specified information in the form of an XML document according to one of the standard programming models supported by the W3C, for example, a DOM document object or a SAX document object.
DOM (Document Object Model), is a programming interface specification that specifies a tree which applications may then explore or modify. SAX is an event-based tool, more or less 'reading' the document to the application using a set of named methods to indicate document parts. SAX is typically used where efficiency and low overhead are paramount, while the DOM is used in cases where applications need random access to a stable tree of elements. The interface allows us to generate and modify XML documents as full-fledged objects. Such documents are able to have their contents and data "hidden" within the object, helping us to ensure control over who can manipulate the document. Document objects can carry object-oriented procedures called methods.
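For instance, an agent can load such a document into a DOM tree and navigate it with the standard JAXP API (the class name below is illustrative; the API calls are standard Java SE):

```java
import java.io.StringReader;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Parse a small customer document into a DOM tree and read one element,
// illustrating random access to a stable tree of elements.
public class DomExample {
    public static String firstName(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            return doc.getElementsByTagName("firstname").item(0).getTextContent();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A SAX-based consumer would instead register callbacks and process the elements as they are read, trading random access for lower overhead.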
In the case of a relational database, the query operation maps to a SQL SELECT statement with a WHERE clause specifying which record or records from the database are contained in the document.
Update
The "update" operation modifies existing data in the legacy data source. The caller supplies the id of the document and an XML document containing only the fields to be modified. In the case of the relational database, the update operation maps to a SQL UPDATE statement.
Delete
The "delete" operation removes a document from the data source. The caller supplies the id of the document to delete. In the case of the relational database, the delete operation maps to a SQL DELETE statement.
Add
The "add" operation inserts a new document into the data source. The caller supplies the document in the form of a DOM Document object. The bridge service returns the id of the newly added document. In the case of a relational database, the add operation maps to a SQL INSERT INTO statement.
Browse
The browse operation, also known as "buffering," browses all of the documents in the data source of a certain type. The caller supplies the type of document to browse. The bridge service returns a browse object similar to a JDBC result set. The browse object allows the caller to traverse the results in either direction, jumping to the first or last document, and to re-initiate the browse operation. In the case of a relational database, the browse operation maps to a SQL SELECT statement that returns multiple records.
Search
The search operation browses the data source for all documents of a certain type that meet predefined search criteria. The search criteria can be a list of fields and values which the caller wants to match against records in the database. For example, the caller might request all customer records that contain a "state" field matching the string "MA." The caller supplies the type of document to browse as well as a document containing the fields to be matched. The bridge service returns a browse object as above. In the case of a relational database, the search operation maps to a SQL SELECT statement in which the WHERE clause contains the LIKE operator.
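How a relational RAC might translate document operations to SQL can be sketched as below. The "class/key" id scheme and the table and column names are assumptions for illustration; real code would use prepared statements rather than string concatenation.

```java
// Sketch of a relational RAC's mapping from document operations to SQL.
// String concatenation is used only for clarity; production code must use
// parameterized statements to avoid SQL injection.
public class SqlMapping {
    // query: fetch the record behind a document id of the form "class/key"
    public static String selectFor(String id) {
        int slash = id.indexOf('/');
        return "SELECT * FROM " + id.substring(0, slash)
             + " WHERE id = " + id.substring(slash + 1);
    }

    // search: match documents whose field is LIKE a pattern
    public static String searchFor(String table, String field, String pattern) {
        return "SELECT * FROM " + table + " WHERE " + field + " LIKE '" + pattern + "'";
    }

    // delete: remove the record behind a document id
    public static String deleteFor(String id) {
        int slash = id.indexOf('/');
        return "DELETE FROM " + id.substring(0, slash)
             + " WHERE id = " + id.substring(slash + 1);
    }
}
```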
Front-End Interface
The front-end interface 120 is responsible for user presentation and interaction. The front-end interface uses “forms” to allow users to view and modify information. As an advantage, the front-end interface provides a “thin” user interface, with simple interactivity that can easily be customized as the environment in the enterprise changes. The front-end forms use HTML 121, HTTP 122, Javascript, Java servlets 123, Java applets, and plug-ins as necessary. Being Web based, the user 103 can use any standard browser 124 to interact with the system from anywhere there is an Internet access point.
HTTP Communications
The HTTP is used as the communication mechanism between agents and users. The user 103 browses and modifies information, and initiates processes via the web browser 124. User requests are routed to agents 101 via HTTP and through the Java servlet. The servlet 123 in turn communicates with a front-end service bridge 125 that serves as an interface for the agents 101.
The servlet/service bridge combination 123/125 supports the establishment of user sessions that are the channel for two-way communication between the user and the agents. Within the context of a session, the user can send HTTP GET or POST requests to the agents, and the agents process such requests, and send back an HTTP response. Sessions allow the user to wait for an agent to arrive and allow an agent to wait for a user to connect.
HTML Form Style Sheets
We accomplish the display of information to users with HTML, web pages, and web forms. As stated above, the information that agents retrieve from data sources is in the form of the XML documents 102. To format the XML documents into a form suitable for users, the front-end servlet 123 converts the XML document to a HTML page using a style sheet 126, e.g. XSL, JSP or some other data replacement technique as described below. The result of this conversion is the HTML page containing the information in a user-friendly format. By applying the style sheet, the servlet recognizes and replaces certain data from the XML document and converts the data to HTML form.
For example, a particular XML document 102 includes the following information:
```
<customer>
<firstname>John</firstname>
<lastname>Smith</lastname>
</customer>
```
The HTML style sheet 126 for this document is as follows:
```
<html>
<h1>'customer.firstname'</h1>
<h2>'customer.lastname'</h2>
</html>
```
After applying the style sheet to the XML document, the resultant HTML form 121 would appear as:
```
<html>
<h1>John</h1>
<h2>Smith</h2>
</html>
```
The style sheet supports accessing all of the elements and attributes in the XML documents, and iteration over groups of repeating elements.
For example, an XML document contains:
```
<customers>
<customer type="preferred">
<firstname>John</firstname>
<lastname>Smith</lastname>
</customer>
<!-- a second customer, implied by the references to "Jones" and "standard" below -->
<customer type="standard">
<lastname>Jones</lastname>
</customer>
</customers>
```
The “type” attribute of the customer is accessed by using a syntax such as the following:
```
'customer.type["type"]'
```
The “lastname” element of the second customer is accessed using a syntax such as ‘customer[1].lastname’ which yields the value “Jones.” To iterate over all of the customers and access their “type” attributes, an expression such as:
```
'iterate(i=customers.customer) { i.type }'
```
can be used to produce first the string “preferred,” and then “standard.”
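As a sketch of the data replacement described above, the following Python fragment resolves `'element.path'` expressions against an XML document and substitutes them into an HTML template. This is illustrative only: the patent's servlet applies XSL/JSP-style sheets, and the simplified path syntax here (dotted elements, `[n]` indexing, `["name"]` attribute access) is a stand-in for whatever the style-sheet language actually provides.

```python
import re
import xml.etree.ElementTree as ET

def lookup(root, path):
    """Resolve a simplified path like customers.customer[1].lastname
    or customers.customer[0]["type"] against an ElementTree root."""
    attr = None
    m = re.search(r'\["(\w+)"\]$', path)
    if m:                                   # trailing ["name"] selects an attribute
        attr = m.group(1)
        path = path[:m.start()]
    node = root
    for part in path.split(".")[1:]:        # first part names the root element
        idx = 0
        im = re.match(r"(\w+)\[(\d+)\]$", part)
        if im:                              # optional [n] index, defaults to 0
            part, idx = im.group(1), int(im.group(2))
        node = node.findall(part)[idx]
    return node.get(attr) if attr else node.text

def apply_stylesheet(template, root):
    """Replace every 'path' expression in the template with its value."""
    return re.sub(r"'([^']+)'", lambda m: lookup(root, m.group(1)), template)

doc = ET.fromstring(
    '<customers>'
    '<customer type="preferred"><firstname>John</firstname>'
    '<lastname>Smith</lastname></customer>'
    '<customer type="standard"><firstname>Ann</firstname>'
    '<lastname>Jones</lastname></customer>'
    '</customers>')

html = apply_stylesheet(
    "<html><h1>'customers.customer[0].firstname'</h1>"
    "<h2>'customers.customer[1].lastname'</h2></html>", doc)
```

The firstname "Ann" for the second customer is invented for the example; only "Jones" and the "standard" type appear in the text above.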
Validation
The front-end interface also supports the validation of user entered information. Field validation information supplies some immediate feedback and interactivity to the user. Field validation also increases application efficiency by detecting common errors within the web browser process before any other network traffic is incurred or application logic is executed. Client side validation can be broken down into two related levels.
Field-Level
Field-level validation performs simple checks on user entered data to validate that the information is of the correct format or data type. For example, field-level validation can validate that a user enters numeric values in a particular field, or uses a proper date format. We implement field-level validations with Javascript. A library of common validations is supplied as a script file on a web server. The library has a "js" file extension. This script file can be included into
HTML forms as desired using the `<script>` HTML tag. Validation is enabled for a field by naming an appropriate validation routine within an event handler of the field, e.g. `onChange`. The event handler is triggered when an INPUT field changes. Setting up validation for a field requires HTML coding as follows:
```html
<input type="text" name="birthdate" onChange="validateDate(birthdate)"/>
```
The validation library provides routines for common data types such as dates, times, currency, etc. The validation library can also provide a pattern matching ability allowing user input to be matched against arbitrary patterns, e.g. a pattern `###` to match a monetary amount.
Cross-Field Validation
Cross-field validation allows for more complex validations, in which the valid contents of one field depend on the contents of another field. For example, cross-field validation can require that a telephone number be entered when the value of another field calls for one. Such validation usually requires a more detailed knowledge of the requirements of the application.
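The two validation levels described above might follow logic like this Python sketch. The patent implements these checks as Javascript in the browser; the routine names and the form fields (`contact`, `telephone`) are hypothetical.

```python
import re
from datetime import datetime

def validate_date(value, fmt="%m/%d/%y"):
    """Field-level check: does the value parse as a date in the given format?"""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

def validate_pattern(value, pattern):
    """Field-level pattern match: '#' in the pattern matches one digit,
    as in the monetary-amount example above."""
    regex = "".join(r"\d" if c == "#" else re.escape(c) for c in pattern)
    return re.fullmatch(regex, value) is not None

def cross_validate(form):
    """Cross-field check: a telephone number is required only when the
    (hypothetical) contact-method field says so."""
    if form.get("contact") == "phone" and not form.get("telephone"):
        return False
    return True
```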
Middle Tier
The middle tier provides the “glue” that links the back-end and the front-end interfaces. The middle tier utilizes the mobile agents to communicate with the interfaces. The middle tier also provides support for disconnection applications and users. In addition, the middle tier customizes the system to the needs of specific enterprise functions without actually having to reprogram the legacy systems.
The middle tier supports the automation of complex workflow and complex validations of data that may require access to multiple data sources. As a feature, the middle tier uses a rules engine (RE) operating on rules stored in a database. The rules are defined in a rules language, and can be retrieved by the agents as needed.
In a typical scenario, the user launches an agent due to interaction with the browser. The agent carries an XML document, e.g., a purchase order, to the rules database. The agent retrieves the appropriate rule for processing the order, such as a purchase order workflow. The agent then interprets the rule to appropriately route the document to the locations in the network specified by the rule. The rule can include a travel itinerary, as well as instructions on how to interact with the data sources.
As an advantage, the operation of our system is always current. As rules change so does the operation of the system. The agents always execute according to the current state of the rules database.
Design Tools
As shown in FIG. 2, the primary purpose of the design tools is to generate XML document type definitions (DTD), to specify 143 data mappings, i.e., RACs, to encode 144 rules, and to design 145 user interfaces.
Document Type Definitions
This step identifies the different types of document information that need to be shared by the various data sources of the back-end and the browser. This information is specified in the DTDs. For example, to share purchase order information between systems, the type of information needed in a purchase order needs to be identified, and then that information needs to be encoded in a corresponding DTD. In one embodiment, the design tools use the service bridge to extract schemas from the data sources.
Data Mapping
After a data source independent data format has been generated, the mappings between the XML format and legacy formats for a particular database need to be specified as shown in FIG. 3. A query operation to a relational database involves extracting the schema of the database, generating a SQL runtime access component (RAC) which makes the JDBC calls to the database, converting the resulting data into the XML format, and handing the XML document to an agent. The access components can be implemented as Java code. The agent delivers the XML to the front-end for conversion to the HTML form so that the data can be viewed by the user using a standard browser.
Conversely, the update operation converts the HTML form to the corresponding XML document. The XML document is converted to a legacy format and the RAC modifies the data source using its schema. For other legacy data sources that are not specified by a schema or some other metadata, the mapping may need to be done by means that access the APIs directly.
Rule Encoding
After the data format definition is generated, and the RAC has been specified to access the appropriate data source, the next step is to encode what agents are going to do with the information. In a simple data replication system, an agent may retrieve modified records from a master database, travel to the location of a backup database, and then update the backup database with a copy of the modified record. This process involves the encoding of a specific rule.
Designing the User Interface
As shown in FIG. 2, generating the user interface requires three steps: authoring document type definitions (DTD), importing DTDs, and generating DTDs from database schemas.
Authoring DTD
The design tools allow the system designer to define, design, and manipulate XML and HTML DTDs. A DTD defines the names of the document elements, the content model of each element, how often and in which order elements can appear, whether start or end tags can be omitted, the possible presence of attributes and their default values, and the names of the entities.
Because the DTDs represent many different types of documents in the system, this step essentially defines the data types of the enterprise’s computerized applications. As an advantage, the resulting DTDs do not directly tie the system to any specific legacy data source, nor do the definitions preclude the integration of other legacy systems in the future.
DTD Import
The tools also allow one to import already existing DTD definitions. Such functionality can be used in environments where DTDs have already been defined for standard document types. These DTDs may have been defined by standards bodies or a designer of the legacy system.
DTD generation from Database Schema
This part of the tools automatically generates DTDs from existing database schemas.
Data Mappings
Query Mapping
A query mapping enables an agent to retrieve information from a legacy data source. In the case of a relational database, this mapping specifies the contents of the SELECT statement, including any information relevant for a table join. A query mapping for a purchase order may involve accessing a purchase order table, a customer table, and a product catalog table.
Update Mapping
An update mapping allows an agent to modify information in the data source. This involves specifying the contents of an UPDATE statement. An update mapping for a purchase order involves updating the purchase order table, but not modifying the customer table or the product catalog table.
Delete Mapping
A delete mapping allows an agent to delete information in the data source. This involves specifying the contents of a DELETE statement. A delete mapping for a purchase order involves deleting a record or records from the purchase order table, but not modifying the customer table or the product catalog table.
Add/Create Mapping
An add/create mapping allows an agent to add information to the data source. This involves specifying the contents of an INSERT statement. An insert mapping for a purchase order involves adding a record or records to the purchase order table, but not modifying the customer table or the product catalog table.
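The four mapping types above can be sketched as one mapping specification driving four statement builders. This is an illustrative Python stand-in for the design tools' output: the table and column names are invented, and production code would use parameterized statements rather than string interpolation.

```python
# Hypothetical mapping specification for a purchase-order document type.
MAPPING = {
    "table": "PurchaseOrders",
    "key": "OrderID",
    "columns": ["OrderID", "CustomerID", "Total"],
}

def query_sql(m, key):
    # Query mapping: SELECT the mapped columns for one document instance.
    return "SELECT %s FROM %s WHERE %s='%s'" % (
        ", ".join(m["columns"]), m["table"], m["key"], key)

def update_sql(m, key, values):
    # Update mapping: modify the mapped table only, never related tables.
    sets = ", ".join("%s='%s'" % kv for kv in values.items())
    return "UPDATE %s SET %s WHERE %s='%s'" % (m["table"], sets, m["key"], key)

def delete_sql(m, key):
    # Delete mapping: remove the matching record(s).
    return "DELETE FROM %s WHERE %s='%s'" % (m["table"], m["key"], key)

def insert_sql(m, values):
    # Add/create mapping: insert a new record.
    cols = ", ".join(values)
    vals = ", ".join("'%s'" % v for v in values.values())
    return "INSERT INTO %s (%s) VALUES (%s)" % (m["table"], cols, vals)
```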
Schema Extraction and Caching
In order to allow for mapping between a legacy database schema and XML DTD formats, the mapping design tool extracts the schema from legacy databases. Because schema extraction is an expensive and time-consuming task, the tools allow one to save extracted schemas on a disk for subsequent use.
Form Generation
The tools will also allow one to automatically generate a form from a DTD. Such a form may require minor modifications to enhance the physical appearance of the form. For example, color or font size of text can be adjusted to enhance usability.
Embedding Binary Data in XML Documents
Some enterprise applications may need to retrieve arbitrary binary data from the data source. For example, a legacy database contains employee information. Included with that information is a picture of the employee in standard JPEG format. The employee information is stored as a single table named "employees," which has a schema as Table 1, where the field <image> represents the picture:
| ID | Name       | HireDate | Photo     |
|----|------------|----------|-----------|
| 1  | John Smith | 1/1/96   | `<image>` |
The XML document that retrieves the above table appears as follows:
```xml
<employees>
<employee>
<ID>1</ID>
<name>John Smith</name>
<hiredate>1996-01-01</hiredate>
<photo href="http://server/directory/john.jpg" /> ...
</employee>
...
</employees>
```
However, there are a number of problems with this type of approach. First, it is the responsibility of the user to issue the proper additional commands to retrieve the linked document before it can be displayed, e.g., the user must click on the URL of the picture. Second, the DTD for the XML document must specify the URL. For most legacy databases, it is unlikely that the records storing the binary data are accessible via an HTTP URL. Furthermore, the binary data is transported through the system by a follow on transport, such as HTTP. For reliability, security, consistency, and other reasons we prefer to carry all data, including binary data with the agents.
To allow the servlet to generate an agent that can access the binary data, we define a new type of URL. The new URL incorporates the location of the binary data, as well as a unique "name" that can be used to retrieve the binary data. The URL contains the hostname of the data source, a service name, an action name that can be used to perform the retrieval of the binary data, and a document identification referring to the binary data. This still results in a fairly complex URL.
Using multiple requests to retrieve the binary data is inconsistent with our agent model. Agents try to use the network effectively by batching data into fairly large self-contained packets. This is very different than the hypertext model used on the web in which a single page display can lead to multiple network requests.
Compound Documents
In an alternative solution, we define a compound document. In a compound document, the binary data is embedded in the same document as the textual XML data. This approach is consistent with our agent driven system that attempts to transport data as larger batches. Compound documents can be built in two ways.
Embed Binary Data into XML Text Element
The binary data is embedded directly into an XML text element. This can be done as long as the binary data is encoded in such a way that the data only contain XML characters. Such an encoding could be based on the Base64 encoding. Special characters, such as "<" and ">", are replaced with equivalent entities (i.e., `<` and `>`). We also can use a character data (CDATA) section to work around the problem of illegal characters within the Base64-encoded data. We may want to prefix the embedded binary data with standard mime headers that specify content type, encoding, and name. Such a format for the photo element appears as follows:
```xml
<employee>
<ID>1</ID>
<name>John Smith</name>
<photo><![CDATA[
Content-Type: image/jpeg
Content-Encoding: base64
Content-Name: john.jpg

...Base64-encoded image data...
]]></photo>
</employee>
```
It should be noted that this alternative increases the size of the binary data by 33% as well as increasing the overhead to encode and decode the data.
This alternative requires that a SQL RAC extracts the binary data and encodes the data into Base64, and then adds the encoded data to the XML document with the proper mime headers.
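The encoding step the RAC performs could be sketched as follows. This is illustrative Python, not the patent's Java RAC; the element layout assumes the CDATA-plus-mime-header format described above, and the file name is taken from the example.

```python
import base64

def embed_photo(jpeg_bytes, name="john.jpg"):
    """Base64-encode binary data and wrap it, with mime-style headers,
    in a CDATA section inside a <photo> element."""
    encoded = base64.b64encode(jpeg_bytes).decode("ascii")
    return (
        "<photo><![CDATA["
        "Content-Type: image/jpeg\n"
        "Content-Encoding: base64\n"
        "Content-Name: %s\n\n%s"
        "]]></photo>" % (name, encoded)
    )

# First bytes of a JPEG stream (FF D8 FF E0); Base64 of any JPEG
# starts with the characteristic "/9j/" prefix.
element = embed_photo(b"\xff\xd8\xff\xe0")
```

Note the roughly 33% size increase the text mentions: Base64 emits four characters for every three input bytes.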
Compound Document Encoded as Mime Document
Another alternative embeds both the XML document and the binary data into separate parts of a multipart mime document. Each part of the overall document has a Content-ID which is referenced from a standard XML link. In part, such a format appears as follows:
```
Content-Type: multipart/related; boundary="XXXX"

--XXXX
Content-Type: text/xml
Content-ID: doc

<?xml version="1.0" encoding="ISO-8859-1"?>
<Photo href="cid:photo"/>

--XXXX
Content-Type: image/jpeg
Content-Encoding: base64
Content-Name: john.jpg
Content-ID: photo

/9j/4AAQSkZJRgABAQEASABIAAD...

--XXXX--
```
With this alternative, the binary data may not need to be encoded. However, this requires that agents also retrieve MIME documents via the RAC.
JDBC Service Bridge
FIG. 4 shows details of a preferred embodiment of a service bridge 400 of the back-end interface 110 for accessing a data source. In this embodiment, JDBC is used to access a SQL type of database. The bridge 400 includes a public interface 410, JDBC run-time access component (RAC) 420, XML-SQL data mapping 430, and a document cache 440 as its main components.
Public Interface
As stated above, the public interface 410 provides the means by which agents access the data sources 111. The public interface allows data retrieval, modification, and addition. As an advantage, the public interface 410 makes no assumptions about how data in the legacy database 111 is sourced or maintained. Instead, we make the public interface resemble the GET/PUT model of HTTP.
JDBC Run-Time Access Component
The JDBC access component 420 is responsible for establishing and managing JDBC connections, building and executing SQL statements, and traversing result sets. This component works entirely within the context of JDBC and SQL.
XML-SQL Data Mapping
The XML-SQL data mapping 430 uses the mapping information generated by the design tools 140 to map data between XML and SQL.
Document Cache
The document cache 440 operates entirely with XML documents. XML documents that have been retrieved from the data source can be cached for fast future retrieval. The caching services are configurable so that maximum cache sizes and cache item expiration times can be specified. Caching can be disabled for certain classes of documents which contain highly volatile information.
FIG. 5 shows the public interface 410 in greater detail. The interface supports four basic types of accesses, namely get 510, put 520, add 530, and delete 540.
At the heart of the interface is the document id 104. The document id is a string which uniquely identifies every document instance within the data source. The document id can be thought of as corresponding to the URL of a World Wide Web document, or to the primary key of a record in a database. Although the id has a different format than a URL, it does serve as a document locator.
In order to interact with information in the legacy data source, an agent needs to provide the id for the document containing the information. The id contains multiple sections of information and is structured as follows.
The first character of the id string specifies a separator character (S) 501 that is used to separate the different sections that make up the document id, e.g., a colon (:). This character is used in conjunction with a Java StringTokenizer to parse the document id. The subsequent information in the id consists of name=value pairs (N, V) 502. One such pair 502 specifies the document type, e.g., ".type=cat_list:"
In most common cases, the id 104 also contains a key specifying the exact document instance in order to uniquely identify an individual document in a data source. For example, in a document containing customer information, this key contains a data source specific customer number or a customer id. Within the service bridge, this key is mapped to a WHERE clause of a SQL statement. For example, an agent can request customer information for a particular customer by specifying an id string as follows:
".type=customer/key=SMITH:"
This request results in a SQL query to the database that appears as follows:
```
SELECT * FROM Customers WHERE Customers.ID='SMITH'
```
The exact semantics of how the key is mapped into the resultant SQL statement is specified by the design tools 140.
The key portion of the id can be composed of multiple pieces of information separated by, for example, commas. Such a key is used in cases in which the WHERE clause of the corresponding SQL query needs multiple pieces of information to be specified by the agent. An example of this is a document containing a list of customers, where the customers' names are within a certain alphabetic range, for example, "all customers whose last names begin with the letters A or B." Such a document has an id as follows:
".type=cat_list_by_name/key=ABBzzz:"
In this case, the request would map into a SQL statement resembling the following:
```
SELECT * FROM Customers WHERE Customers.LastName BETWEEN 'A' AND 'Bzzz'
```
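The document-id convention can be sketched as follows. This is illustrative Python: the patent's example ids mix delimiters slightly differently, the sketch applies the stated rule (first character declares the separator, remaining sections are name=value pairs) with ':' throughout, and the type-to-table mapping is a stand-in for the modeler's design-tool-specified WHERE-clause mapping.

```python
def parse_doc_id(doc_id):
    """First character declares the separator; the rest are name=value pairs."""
    sep = doc_id[0]
    pairs = {}
    for section in doc_id[1:].split(sep):
        if "=" in section:
            name, value = section.split("=", 1)
            pairs[name] = value
    return pairs

def to_select(pairs, key_column="ID"):
    """Map the id's key into a SQL WHERE clause (hypothetical type->table map)."""
    table = {"customer": "Customers"}.get(pairs["type"], pairs["type"])
    return "SELECT * FROM %s WHERE %s.%s='%s'" % (
        table, table, key_column, pairs["key"])

pairs = parse_doc_id(":type=customer:key=SMITH:")
sql = to_select(pairs)
```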
Implementation Details of the Service Bridge
Database Access
The service bridge is responsible for performing any authentication necessary in order to establish a database
connection. This may involve supplying a database specific username and password or other login information. When a database access (get, put, add, delete) is made by an agent, the bridge examines the agent’s runtime context to determine the user identity associated with the agent.
After the agent’s identity has been ascertained, the service bridge maps the identity into a database-specific user identification using a mapping table generated by the design tools. For example, the mapping maps the user identity “steve@accounting” into an Oracle username “steve.”
In order to establish a connection to a database on behalf of a user, the service bridge retrieves both the username and clear-text password for the corresponding database user account. In such cases, the clear-text password is stored in the identity-mapping table. For security reasons, the table is encrypted on disk using a public/private key pair.
Connection Management
To enhance performance and scalability, the service bridge supports database connection pools. This means that multiple users share a common pool of JDBC connections. Establishing a database connection can be a slow and relatively expensive operation. The use of shared connection pools decreases this expense.
The basis for this connection sharing are “users groups.” When an agent attempts an operation which requires a connection to a database, the service bridge performs that operation using a connection established in the context of a special “pseudo-user” account. The pseudo-user is a database system account that represents not an individual user, but instead a particular group of users. A pool of such pseudo-user connections is available for use by all of the agents of the group. The service bridge generates and maintains a connection pool for each distinct group of users who access the bridge.
Fig. 6 shows agents 101 for three users tom, joe and david 601–603 accessing the data source 111. Two of the users, tom@users and joe@users, are members of a users group. The third user, david@managers, is a member of a “managers” group. When these agents attempt to access the database, the two members of the users group share a connection pool 610 that was established with the credentials of the “users” pseudo-user. The third agent will communicate with the database using a separate connection pool 620 established with the credentials of the “managers” pseudo-user.
A connection pool for a particular group is generated when a member of the group makes the first access request. Connections within the pool are constructed as needed. The service bridge does not pre-allocate connections. After a configurable, and perhaps long period of inactivity, the connection pool is closed to free database resources. If a connection pool for a particular group has been closed due to inactivity, then any subsequent request by a member of that group results in the generation of a new pool. When a request is completed, the connection allocated for that request is returned to the pool. A maximum number of connections in a pool can be specified. If no connections are available when a request is made, then the request is blocked until a connection becomes available.
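A minimal sketch of this group-based pooling follows. This is illustrative Python, not the bridge's Java implementation: `make_connection` stands in for establishing a JDBC connection with the pseudo-user's credentials, and inactivity-based pool teardown is omitted.

```python
import queue

class GroupPools:
    """One bounded connection pool per user group, created lazily."""

    def __init__(self, make_connection, max_per_group=2):
        self.make = make_connection
        self.max = max_per_group
        self.pools = {}  # group name -> [queue of idle connections, count]

    def _pool(self, group):
        if group not in self.pools:      # pool created on first access request
            self.pools[group] = [queue.Queue(), 0]
        return self.pools[group]

    def acquire(self, group):
        pool = self._pool(group)
        idle, count = pool
        if idle.empty() and count < self.max:
            pool[1] += 1                 # connections constructed as needed
            return self.make(group)
        return idle.get()                # at the limit: block until one is free

    def release(self, group, conn):
        self._pool(group)[0].put(conn)   # return the connection to the pool

pools = GroupPools(lambda group: ("connection for", group))
conn = pools.acquire("users")
pools.release("users", conn)
```

A subsequent `acquire("users")` reuses the released connection, while `acquire("managers")` builds a separate pool, mirroring Fig. 6.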
Statement Construction and Execution
The actual generation and execution of SQL statements is performed by a separate “modeler” object. The modeler object is generated by the design tools 140. For each type of document used in the system, there is a distinct modeler object. Each modeler knows how to construct exactly one type of document. During the design process, one specifies what information is to be retrieved from the database, and how to map the information into an XML document. The design tools serialize and save the modeler objects in a “ser” file. At runtime, the service bridge loads and de-serializes the modeler objects from the “ser” file. The resultant modeler objects are able to perform all of the data access and mapping functions required to retrieve information from the data sources. As stated above, SQL to XML data mapping is performed by the modeler object designed for a particular document type.
Data Caching
To improve the performance of document retrieval, the data service caches database information as converted XML documents. When a first request is made to retrieve a document, the service performs the SQL access and SQL to XML data mapping as described above. The resultant XML document is added to the cache of documents 440 maintained by the service bridge. Any subsequent request to retrieve the document will be satisfied by retrieving the document from the cache, bypassing the need for an additional expensive database access and mapping.
When an update or addition is made to a data source, the cache is updated to reflect the new information. The update to the cache is made only after the SQL statement performing the update of the end database has been completed successfully. This prevents the cache from storing information that has not been committed to the database due to errors or to security restrictions.
The XML document cache is configurable to specify a maximum size of the cache, the maximum amount of time a single document can be retained in the cache before it becomes stale, and whether the cache should be persisted to disk, in which case the cache can be re-used after a server restart. One can also customize how different classes of documents are cached. If a document represents highly volatile information, then caching can be disabled for that class of document. If a document class is completely (or virtually) static, then documents of that class can be cached for a very long time.
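The configurable cache behavior can be sketched as follows. This is an illustrative Python stand-in: the eviction policy here is simple insertion order, and disk persistence is omitted.

```python
import time

class DocumentCache:
    """Bounded XML document cache with per-entry expiration and
    per-document-class disabling (e.g. for volatile documents)."""

    def __init__(self, max_size=100, max_age=60.0, disabled_classes=()):
        self.max_size = max_size
        self.max_age = max_age                 # seconds before an entry is stale
        self.disabled = set(disabled_classes)
        self.entries = {}                      # doc id -> (document, time stored)

    def put(self, doc_id, doc_class, document, now=None):
        if doc_class in self.disabled:
            return                             # caching disabled for this class
        if len(self.entries) >= self.max_size:
            self.entries.pop(next(iter(self.entries)))  # evict oldest insertion
        self.entries[doc_id] = (document, now if now is not None else time.time())

    def get(self, doc_id, now=None):
        entry = self.entries.get(doc_id)
        if entry is None:
            return None
        doc, stored = entry
        if (now if now is not None else time.time()) - stored > self.max_age:
            del self.entries[doc_id]           # stale: drop and report a miss
            return None
        return doc
```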
Execution Flow
The following section describes the execution flow for basic database access requests. Fig. 7 shows the steps 700 of a “get” or retrieval access in greater detail. After the request is received from the agent 710, the caller and document identity are determined 720, 730. The group specific cache is identified 740, and the cache is checked 750. If the cache stores the document, the document is returned in step 755. Otherwise, locate the XML-SQL mapping 760, construct the SQL SELECT statement 770, retrieve the connection 775, and execute the statement in step 780. Next, the result set is “walked” 785, fields are extracted 790 to build the XML document 793, and the document is cached 796 and returned to the agent in step 798. Fig. 8 shows the steps 800 for the addition (add) and modification (put), which are similar to the get steps. The delete request simply deletes data from the database as shown at 540 in Fig. 5.
Run-time Object Hierarchy
Fig. 9 shows the run-time hierarchy 900 of objects of the service bridge 110. The objects can be classified as data source independent 901 and data source dependent 902. The data source independent objects 901 include the data source factory object 910 indexed by group name, group specific data source objects 920, document factory objects 930 (one per document), document cache objects 940, document builder objects 950, connection pool objects 960, mapping table objects 970, document manager objects 980, and the data source manager objects 990. The data source dependent objects 902 include the source connection 991, string authentication 992, document map 993, and specific driver objects 994.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
We claim:
1. An enterprise integration system, comprising:
a back-end interface, coupled to a plurality of data sources, configured to convert input data source information to input XML documents and to convert output XML documents to output data source information, wherein the plurality of data sources use different data formats and different access methods;
a front-end interface including means for converting the input XML documents to input HTML forms and for converting output HTML forms to the output XML documents;
a middle tier including a rules engine and a rules database;
design tools for defining the conversion and the XML documents;
a network coupling the back-end interface, the front-end interface, the middle tier, the design tools, and the data sources;
a plurality of mobile agents configured to communicate the XML documents over the network and to process the XML documents according to the rules.
2. The system of claim 1 wherein each XML document is identified by a document identification.
3. The system of claim 2 wherein the document identification is a character string.
4. The system of claim 3 wherein the character string includes a plurality of sections, and a first character of the string is a section separator.
5. The system of claim 4 wherein one of the sections stores a document type.
6. The system of claim 3 wherein one of the sections stores a key to an instance of the XML document in one of the data sources.
7. The system of claim 1 wherein the back-end interface further comprises:
a public interface;
a document cache; and
a run-time access component.
8. The system of claim 7 wherein the run-time access component generates access requests for the plurality of data sources.
9. The system of claim 8 wherein the access requests include query, update, delete, add, browse, and search.
10. The system of claim 7 wherein the public interface forwards the input XML document to the plurality of the mobile agents for distribution, and the public interface receives the output XML documents for storing in the plurality of data sources.
11. The system of claim 7 wherein the document cache includes caching parameters.
12. The system of claim 7 wherein the caching parameters include a maximum lifetime for each cache entries, a maximum cache size, and a persistency indicator.
13. The system of claim 1 wherein the XML documents include binary data.
14. The system of claim 13 wherein the binary data is embedded as a compound document.
15. The system of claim 14 wherein the compound document embeds the binary data as an encoding in a character set.
16. The system of claim 14 wherein the compound document embeds the binary as a MIME document.
17. The system of claim 13 wherein the binary data is referenced by a Universal Resource Locator.
18. The system of claim 1 wherein the input documents are presented to a browser.
19. The system of claim 1 wherein the back-end interface performs user authentication.
20. The system of claim 1 wherein the back-end interface supports database connection pools.
21. A method for integrating a plurality of data sources, comprising:
converting input data source information to input XML documents and converting output XML documents to output data source information, wherein the plurality of data sources use different data formats and different access methods;
converting the input XML documents to input HTML forms and converting output HTML forms to the output XML documents;
providing a rules engine and a rules database;
defining the converting and the XML documents;
communicating the XML documents over a network using mobile agents; and
processing the XML documents by the mobile agents according to the rules database.
* * * * *
Composing Features by Managing Inconsistent Requirements
Robin Laney, Thein T. Tun, Michael Jackson, and Bashar Nuseibeh
Centre for Research in Computing
The Open University
Walton Hall, Milton Keynes MK7 6AA, UK
{r.c.laney, t.t.tun, m.jackson, b.nuseibeh}@open.ac.uk
Abstract. One approach to system development is to decompose the requirements into features and specify the individual features before composing them. A major limitation of deferring feature composition is that inconsistency between the solutions to individual features may not be uncovered early in the development, leading to unwanted feature interactions. Syntactic inconsistencies arising from the way software artefacts are described can be addressed by the use of explicit, shared, domain knowledge. However, behavioural inconsistencies are more challenging: they may occur within the requirements associated with two or more features as well as at the level of individual features. Whilst approaches exist that address behavioural inconsistencies at design time, these are over-restrictive in ruling out all possible conflicts and may weaken the requirements further than is desirable. In this paper, we present a lightweight approach to dealing with behavioural inconsistencies at run-time. Requirement Composition operators are introduced that specify a run-time prioritisation to be used on occurrence of a feature interaction. This prioritisation can be static or dynamic. Dynamic prioritisation favours one requirement according to a run-time criterion, for example, the extent to which it is already generating behaviour.
Key words: Feature Interaction, Pervasive Software, Event Calculus, Problem Frames
1 Introduction
Given a good description of requirements for a feature-rich system, there are advantages, including scalability and traceability [3,14,27,28,19], in solving the feature sub-problems in isolation before composing the partial solutions to give a complete system. Deferring the composition problem supports a better separation of concerns between requirements analysis and the design phase, and is in line with an iterative approach to development [22,12].
The composition problem also raises a number of questions: Are the requirements to be composed consistent with each other? Do the specifications to be composed share assumptions about their environment? Do they embody consistent models? How do we deal with interference between the effects of features on the system's environment? We focus on the first and last of these questions, but in doing so address the others to varying degrees.
The contribution of this paper is an approach to resolve, at runtime, undesirable feature interactions arising from inconsistent requirements. Runtime resolution techniques have many advantages over compile time techniques, including minimal weakening of the requirements, and allowing features developed by disparate developers to plug and play [17,4].
Our approach synthesises two complementary techniques: (i) a form of temporal logic called the Event Calculus [18,26], and (ii) a way to compose problems and solutions called Composition Frames [19]. We use a version of the Event Calculus [18,26] to express requirements and domain properties, and systematically derive feature specifications in a way that makes inconsistencies more explicit. We add a Prohibit(...) predicate to the Event Calculus, and use it in feature specifications to prohibit events over specific periods of time, facilitating non-intrusive composition of features. Composition Frames, introduced in [19], are used to mediate between the features at runtime, and provide an argument showing that they satisfy a family of weakened conjunction requirements.
The paper is organised as follows. In Section 2, we present a motivating example whilst giving a brief introduction to the Problem Frames approach and also the Event Calculus. In Section 3, we begin by showing how to express requirements and domain properties in the Event Calculus before deriving machine specifications. We then consider the semantics of requirements composition and discuss Composition Frames as a way of reasoning about the relationship between composed requirements and composed specifications in Section 4. In Section 5, we compare our work with other approaches. In Section 6, we discuss some lessons about the composition of requirements, of solutions, and their relationship. We conclude in Section 7 and present future work.
2 Background
In this section we introduce the problem frames notation and philosophy, and present an example system that will be used in Sections 3 and 4 to illustrate our technique. We then give an introduction to the Event Calculus and motivate its choice as a tool for addressing some composition concerns.
2.1 Introductory Example
Throughout this paper we will use an example that involves developing the specification for a simple "smart home" application [17]. In order to facilitate convenient living, household appliances, such as air conditioners, security alarms and windows, are increasingly connected to home digital networks. The functioning of these appliances is controlled by complex software systems known as smart home applications. For example, a security feature may switch the lights of the home on and off when the homeowners are away, to give an impression that the house is occupied. The specific example discussed in this paper has two features, and is mainly concerned with the control of a motorized awning window, illustrated below.
**Requirements for features.** The requirement for one feature is concerned with the house security (SR), whilst the requirement for the other feature is concerned with the climate control and energy efficiency of the house (TR). Informal descriptions of these requirements are given below.
**SR:** “Keep the awning window shut at night.”
**TR:** “If it is hot indoors (i.e. hotter than the required temperature) and cold outside (i.e. colder than the temperature indoors), open the awning window.”
Analyzing a requirement, such as SR or TR, using the Problem Frames approach involves identifying the problem context and matching it to one of several well-known diagram forms. Starting with the SR requirement, Fig. 1 shows the problem diagram for the security feature. A problem diagram such as this shows the relationship between descriptions of (i) a *machine domain* denoted by a rectangle with two vertical stripes, (ii) problem world domains, denoted by plain rectangles and (iii) a requirement denoted by a dotted oval. The machine domain implements a solution in order to satisfy the SR requirement. In our discussions, we may refer to a machine as a feature specification or just specification. The problem domains are entities in the world that the machine must interact with, such as Time Panel and Window in Fig. 1, in satisfying the *requirement*, in this case, SR. The thick lines are called phenomena (a and b) representing shared states and events between the domains involved. Dotted lines are requirement phenomena (a and c). Broadly speaking, SR in Fig. 1 says that if the time panel indicates night time, we expect the window to be shut.
The problem diagram for the climate control and energy efficiency feature in Fig. 2 is similar. Again, broadly speaking, the requirement is that if the desired temperature and the indoors and outdoors temperatures are in a certain relationship, we expect the window to be opened.
Having informally described the requirements, we now examine the properties of the problem and machine domains.
**Problem Domains.** In Fig. 1, when the time falls between NBegin and NEnd of the Time Panel (TiP) domain, it is night. The prefix TiP! specifies that values of NBegin and NEnd are controlled by Time Panel. The awning window (W), in both Fig. 1 and Fig. 2, has the following properties. When the window sash has a zero degree angle on the window frame, the window is fully shut (WindowShut is true). When the window sash has a twenty degree angle on the window frame, the window is fully open (WindowOpen is true). When the event tiltOut is fired, the window sash starts to tilt out until either the window is fully open, or tiltIn is fired. Similarly, when the event tiltIn is fired, the window sash starts to tilt in until either the window is fully shut, or tiltOut is fired. OutTemp is the temperature outdoors and InTemp is the temperature indoors. NiceTemp of the Temperature Panel (TeP) domain indicates the temperature level desired by the house owner.
**Machine Domains.** When describing the machines individually, it is necessary to ensure that the specification for each feature’s machine, along with the descriptions of the appropriate domains, is sufficient to establish that each requirement is satisfied. The obligation to demonstrate this is known as the frame concern, and the case that it holds must be made either formally or informally depending on context. In Section 3.2, we discuss a way to do this based on deriving the feature specifications from formal descriptions of the requirements and the window domain.
Each of these individual features in isolation can satisfy its own requirement. However, they will conflict whenever the TR machine needs to open the window at night time to adjust the indoors temperature by admitting cooler air, and the SR machine needs to keep the window closed. This conflict is dynamic, in the sense that it will only occur in certain circumstances. Our refinement of requirements into specifications in Section 3.2 highlights this conflict by identifying the events whose occurrence at certain times may lead to a failure to satisfy some requirement. Therefore, a significant strength of this approach is that it identifies the ways in which a feature could interact with other feature(s) in terms of event occurrences, without necessarily knowing what those other features are. Having derived the specification for each feature, we must compose the specifications in a way that resolves this conflict at run time. We propose such a technique in Section 4.

Table 1. Some Event Calculus Predicates

| Formula | Meaning |
|---------|---------|
| Initiates(α, β, τ) | Fluent β starts to hold after action α at time τ |
| Terminates(α, β, τ) | Fluent β ceases to hold after action α at time τ |
| Initially(β) | Fluent β holds from time 0 |
| τ₁ < τ₂ | Time point τ₁ is before time point τ₂ |
| Happens(α, τ) | Action α occurs at time τ |
| HoldsAt(β, τ) | Fluent β holds at time τ |
| Clipped(τ₁, β, τ₂) | Fluent β is terminated between times τ₁ and τ₂ |
| Trajectory(β₁, τ, β₂, δ) | If fluent β₁ is initiated at time τ then fluent β₂ becomes true at time τ + δ |
2.2 The Event Calculus
The Event Calculus [26], first introduced in [18], is a logic system grounded in the predicate calculus. The calculus relates events and event sequences to ‘fluents’, which denote states of a system. It has been used as a way of permitting inconsistency in reasoning about requirements [25]. In our approach to this example problem we use event sequences to describe feature machine behaviours; fluents to describe problem domain states; and we use the rules by which events cause state changes to describe the given properties of the problem domains. Requirements are described as combinations of fluents capturing the required states of the problem world.
We will work with a version of the calculus based on Shanahan [26] that is intended to be simple whilst fully supporting the contribution of Section 3. Since the machines for individual features are executed sequentially, the Event Calculus does not have to deal with concurrent events. Concurrency that arises due to the composition of multiple features is handled by the composition controller introduced in Section 4. Table 1, also based on Shanahan [26], gives the meanings of the elementary predicates of the calculus.
The EC rules in Fig. 3, taken from Shanahan [26], are a way of stating that the fluent β holds if: it held initially and nothing has happened since to stop it holding (EC1); the event α has happened to make the fluent hold and nothing has happened since to stop it holding (EC2); or, an event α happened that caused some fluent β₁ to hold, which in turn, after a period of time δ, caused this fluent β to hold, and again nothing has happened since to stop the second fluent holding (EC3). Finally, the rule DEF1 says that the fluent β is clipped between τ₁ and τ₂ if and only if there is an event α that happens between τ₁ and τ₂ and terminates the fluent β. Following Shanahan, we assume that all variables are universally quantified except where otherwise shown.
\[ HoldsAt(\beta, \tau_1) \leftarrow Initially(\beta) \land \neg Clipped(0, \beta, \tau_1) \]
(\text{EC1})
\[ HoldsAt(\beta, \tau_2) \leftarrow Happens(\alpha, \tau_1) \land Initiates(\alpha, \beta, \tau_1) \land \tau_1 < \tau_2 \land \neg Clipped(\tau_1, \beta, \tau_2) \]
(\text{EC2})
\[ HoldsAt(\beta, \tau_3) \leftarrow Happens(\alpha, \tau_1) \land Initiates(\alpha, \beta_1, \tau_1) \land Trajectory(\beta_1, \tau_1, \beta, \delta) \land \tau_2 = \tau_1 + \delta \land \tau_1 < \tau_2 \leq \tau_3 \land \neg Clipped(\tau_1, \beta_1, \tau_2) \land \neg Clipped(\tau_2, \beta, \tau_3) \]
(\text{EC3})
\[ Clipped(\tau_1, \beta, \tau_2) \leftrightarrow \exists \alpha, \tau \left[ Happens(\alpha, \tau) \land \tau_1 < \tau < \tau_2 \land Terminates(\alpha, \beta, \tau) \right] \]
(\text{DEF1})
**Fig. 3.** Event Calculus Meta-rules
We again follow Shanahan in adopting the common sense law of inertia, meaning that fluents do not change value unless something happens to cause this. That is, fluents change only in accordance with the meta-rules EC1, EC2 and EC3.
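To make the meta-rules concrete, the following minimal Python evaluator (our illustration, not part of the paper) implements EC1, EC2 and DEF1 over a finite event trace; EC3's delayed effects are omitted for brevity, and the lamp domain is a hypothetical example:

```python
# Minimal propositional Event Calculus evaluator (illustrative sketch).
# Fluents and actions are strings; time points are integers.
# initiates/terminates are sets of (action, fluent) pairs.

def clipped(t1, fluent, t2, happens, terminates):
    """DEF1: fluent is terminated strictly between t1 and t2."""
    return any(t1 < t < t2 and (a, fluent) in terminates
               for (a, t) in happens)

def holds_at(fluent, t, initially, happens, initiates, terminates):
    """EC1 and EC2 (EC3's trajectories are omitted for brevity)."""
    # EC1: held at time 0 and not clipped since
    if fluent in initially and not clipped(0, fluent, t, happens, terminates):
        return True
    # EC2: initiated by an earlier event and not clipped since
    return any((a, fluent) in initiates and t1 < t
               and not clipped(t1, fluent, t, happens, terminates)
               for (a, t1) in happens)

# Hypothetical lamp domain: switchOn initiates LampLit, switchOff terminates it.
initiates = {("switchOn", "LampLit")}
terminates = {("switchOff", "LampLit")}
happens = [("switchOn", 3), ("switchOff", 7)]

print(holds_at("LampLit", 5, set(), happens, initiates, terminates))  # True (EC2)
print(holds_at("LampLit", 9, set(), happens, initiates, terminates))  # False (clipped at 7)
```

By the common sense law of inertia, LampLit persists from time 3 until the switchOff event clips it at time 7.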
3 Formalising Feature Specifications
We now address the derivation of feature specifications to meet the requirements in Fig. 1 and Fig. 2. In Section 3.1, we formalize our requirements and the description of the window domain by translating them into the language of the Event Calculus described in the previous section. We then derive feature specifications in Section 3.2 by refining our requirements using the window domain semantics. In this way, we are establishing the argument for the frame concern.
3.1 Formalizing Requirements and Domains
The natural language specifications of SR and TR, described in Section 2.1, can be formalized as follows:
\[ HoldsAt(\text{IsIn}(t, \text{NBegin}, \text{NEnd}), t) \rightarrow HoldsAt(\text{WindowShut}, t) \]
(SR)
\[ HoldsAt(\text{InTemp} > \text{NiceTemp} + 1, t) \land HoldsAt(\text{InTemp} > \text{OutTemp} + 1, t) \rightarrow HoldsAt(\text{WindowOpen}, t) \]
(TR)
The definition of SR says that if the current time is in the range of NBegin and NEnd, the machine should make sure that the window is shut. The definition of TR says that if the required temperature is lower than the temperature indoors by more than one unit, and the outside temperature is lower than the temperature indoors by more than one unit, the machine should make the window fully open.
\[ \text{Initiates}(\text{tiltOut}, \text{tiltingOut}, \tau) \quad (D1) \]
\[ \text{Trajectory}(\text{tiltingOut}, \tau, \text{WindowOpen}, \text{suffopentime}) \quad (D2) \]
\[ \text{Initiates}(\text{tiltIn}, \text{tiltingIn}, \tau) \quad (D3) \]
\[ \text{Trajectory}(\text{tiltingIn}, \tau, \text{WindowShut}, \text{suffshuttime}) \quad (D4) \]
\[ \text{Terminates}(\text{tiltOut}, \text{tiltingIn}, \tau) \quad (D5) \]
\[ \text{Terminates}(\text{tiltOut}, \text{WindowShut}, \tau) \quad (D6) \]
\[ \text{Terminates}(\text{tiltIn}, \text{tiltingOut}, \tau) \quad (D7) \]
\[ \text{Terminates}(\text{tiltIn}, \text{WindowOpen}, \tau) \quad (D8) \]
**Fig. 4.** Domain Descriptions in EC
The natural language specification of the window, described in Section 2.1, can be formalized as shown in Fig. 4. In other words, if the window is tilted out, it starts tilting out (D1) until the window is fully open (D2) or it is tilted in (D7). Similarly, if the window is tilted in, it starts tilting in (D3) until the window is fully shut (D4) or it is tilted out (D5). When the window is tilted out, it is no longer shut (D6), and when it is tilted in, it is no longer open (D8).
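The window behaviour described by D1–D8 can be replayed directly. In the sketch below (ours; the state names and the numeric constants standing in for the trajectory delays of D2 and D4 are hypothetical), a later tilt event overrides an earlier one, and a trajectory completes once enough time has elapsed:

```python
# Direct simulation of the window domain D1-D8 (illustrative sketch).
SUFF_OPEN_TIME = 5   # stand-in for the D2 trajectory delay
SUFF_SHUT_TIME = 5   # stand-in for the D4 trajectory delay

def window_state(events, t):
    """Replay tiltIn/tiltOut events up to time t; the window starts shut.
    Returns 'shut', 'open', 'tilting_in' or 'tilting_out'."""
    state, since = "shut", 0
    for e, te in sorted((ev for ev in events if ev[1] <= t),
                        key=lambda ev: ev[1]):
        if e == "tiltOut":       # D1 (and D5/D6: cancels tilting in, unshuts)
            state, since = "tilting_out", te
        elif e == "tiltIn":      # D3 (and D7/D8: cancels tilting out, unopens)
            state, since = "tilting_in", te
    # D2/D4: a trajectory completes once enough time has elapsed
    if state == "tilting_out" and t - since >= SUFF_OPEN_TIME:
        state = "open"
    if state == "tilting_in" and t - since >= SUFF_SHUT_TIME:
        state = "shut"
    return state

events = [("tiltOut", 2), ("tiltIn", 4)]   # tilt-out interrupted by tilt-in
print(window_state(events, 3))    # tilting_out
print(window_state(events, 10))   # shut
```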
### 3.2 Deriving Feature Specifications
The Event Calculus provides three options for dealing with a fluent expressed using HoldsAt, namely EC1, EC2 and EC3. Since no window event shuts or opens the window instantaneously, a feature specification based on EC2 does not apply. We therefore focus on EC1 and EC3 only.
We begin with a refinement based on EC1 which deals with the case where the window was initially shut and nothing has changed. In our refinement, ‘initially’ or time point 0 means the time at which the system containing all composed features is turned on.
(State the requirement)
\[ \text{HoldsAt}(\text{IsIn}(t, \text{NBegin}, \text{NEnd}), t) \rightarrow \text{HoldsAt}(\text{WindowShut}, t) \]
(Refine the conclusion by applying EC1)
\[ \text{Initially}(\text{WindowShut}) \land \neg\text{Clipped}(0, \text{WindowShut}, t) \]
(Apply DEF1 to the second sub-clause)
\[ \text{Initially}(\text{WindowShut}) \land \neg\exists a_1, t_1 \cdot \text{Happens}(a_1, t_1) \land \text{Terminates}(a_1, \text{WindowShut}, t_1) \land 0 < t_1 < t \]
(Unify the Terminate sub-clause with D6)
\[
\text{Initially}(\text{WindowShut}) \land \neg \exists t_1 \cdot \text{Happens}(\text{tiltOut}, t_1) \land \\
\text{Terminates}(\text{tiltOut}, \text{WindowShut}, t_1) \land 0 < t_1 < t
\]
(Remove the Terminate sub-clause because it is an axiom)
\[
\text{Initially}(\text{WindowShut}) \land \neg \exists t_1 \cdot \text{Happens}(\text{tiltOut}, t_1) \land 0 < t_1 < t
\]
At this stage, we have a sub-clause whose role is to prevent a certain event happening over a given time period. In order to simplify our feature specifications, we introduce into our Event Calculus the new predicate, \text{Prohibit}(\alpha, \tau_1, \tau_2), with the meaning that the event \(\alpha\) should not occur between times \(\tau_1\) and \(\tau_2\).
More formally,
\[
\text{Prohibit}(\alpha, \tau_1, \tau_2) \equiv \neg \exists \tau \cdot \text{Happens}(\alpha, \tau) \land \tau_1 < \tau < \tau_2
\]
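Operationally, Prohibit is straightforward to check against a concrete trace. A small sketch (ours, with a hypothetical trace):

```python
# Prohibit(a, t1, t2): no occurrence of action a strictly between t1 and t2.
def prohibit_holds(action, t1, t2, happens):
    return not any(a == action and t1 < t < t2 for (a, t) in happens)

happens = [("tiltIn", 1), ("tiltOut", 6)]        # hypothetical trace
print(prohibit_holds("tiltOut", 0, 5, happens))  # True: no tiltOut in (0, 5)
print(prohibit_holds("tiltOut", 0, 8, happens))  # False: tiltOut at time 6
```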
The refinement can then be completed to give the following partial specification for \(SR\).
\[
\begin{align*}
\text{HoldsAt}(\text{IsIn}(t, \text{NBegin}, \text{NEnd}), t) & \rightarrow \\
\text{Initially}(\text{WindowShut}) \land \text{Prohibit}(\text{tiltOut}, 0, t) \quad (\text{SFa})
\end{align*}
\]
This partial specification (SFa) says that if the window is shut initially (time 0), the system should prohibit the \text{tiltOut} event from time 0 until time \(t\) in order to keep the window shut at time \(t\).
The second refinement based on \text{EC3} deals with the significant case where the machine needs to tilt in the window sufficiently before the night falls (SFb). For space reasons, we only show the refinement results.
\[
\begin{align*}
\text{HoldsAt}(\text{IsIn}(t, \text{NBegin}, \text{NEnd}), t) & \rightarrow \\
\text{Happens}(\text{tiltIn}, t_1) \land t_2 = t_1 + \text{suffshuttime} \land \\
t_1 < t_2 \leq t \land \text{Prohibit}(\text{tiltOut}, t_1, t) \quad (\text{SFb})
\end{align*}
\]
The specification ensures that the window is shut when the night falls and remains shut during the night. Since the window is robust in its response to, for instance, the \text{tiltIn} event when it is already shut (it remains shut), or when it is already tilting in (it keeps tilting in), these cases are covered by SFb. Therefore, we obtain the full specification for the security feature from a disjunction of the conclusions in SFa and SFb as shown below:
\[
\begin{align*}
\text{HoldsAt}(\text{IsIn}(t, \text{NBegin}, \text{NEnd}), t) & \rightarrow \\
((\text{Initially}(\text{WindowShut}) \land \text{Prohibit}(\text{tiltOut}, 0, t)) \\
\lor (\text{Happens}(\text{tiltIn}, t_1) \land t_2 = t_1 + \text{suffshuttime} \land \\
t_1 < t_2 \leq t \land \text{Prohibit}(\text{tiltOut}, t_1, t))) \quad (\text{SF})
\end{align*}
\]
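As a sanity check, the disjunction of SFa and SFb can be tested against finite traces. The following sketch is our illustration; the numeric stand-in for suffshuttime and the traces are hypothetical:

```python
# Check the security specification SFa ∨ SFb on a finite trace (sketch).
SUFF_SHUT_TIME = 5  # hypothetical stand-in for suffshuttime

def sf_satisfied(t, initially_shut, happens):
    def no_tilt_out(lo, hi):
        # Prohibit(tiltOut, lo, hi): no tiltOut strictly between lo and hi
        return not any(a == "tiltOut" and lo < tt < hi for (a, tt) in happens)
    if initially_shut and no_tilt_out(0, t):            # branch SFa
        return True
    return any(a == "tiltIn" and t1 + SUFF_SHUT_TIME <= t
               and no_tilt_out(t1, t)
               for (a, t1) in happens)                   # branch SFb

print(sf_satisfied(12, False, [("tiltIn", 4)]))                  # True via SFb
print(sf_satisfied(12, False, [("tiltIn", 4), ("tiltOut", 8)]))  # False
```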
Applying the same refinement technique, two partial specifications for \(TR\) are derived. The first partial specification deals with the case where the window was initially open and nothing has changed, whilst the second deals with the significant case where the machine needs to tilt out the window sufficiently before the temperature difference becomes large.
Again from these two partial specifications, we obtain the following full specification for the climate control and energy efficiency feature.
\[
\text{HoldsAt}(\text{InTemp} > \text{NiceTemp} + 1, t) \land \\
\text{HoldsAt}(\text{InTemp} > \text{OutTemp} + 1, t) \rightarrow \\
((\text{Initially}(\text{WindowOpen}) \land \text{Prohibit}(\text{tiltIn}, 0, t)) \lor \\
(\text{Happens}(\text{tiltOut}, t_1) \land t_2 = t_1 + \text{suffopentime} \land \\
t_1 < t_2 \leq t \land \text{Prohibit}(\text{tiltIn}, t_1, t))) \quad (\text{CCF})
\]
4 Composing Features
Having derived the specifications for individual features, we now turn to the question of how to compose requirements and feature specifications, using Composition Frames. Since, as Section 2.1 argued, the requirements of the features are not fully consistent, it is not possible to meet the conjunction of \textit{SR} and \textit{TR} requirements completely. We will see that the use of Event Calculus in deriving feature specifications in Section 3 and the introduction of the \textit{Prohibit}(\alpha, \tau1, \tau2) predicate in particular, now give us a more succinct approach to reasoning about the composition controller semantics that we require. Using a family of weakened conjunction operators adapted from [19], we formulate the following ways of combining two general requirements \textit{R1} and \textit{R2}, expressed in terms of control on domains. For the window example, \textit{R1} and \textit{R2} can be regarded as \textit{SR} and \textit{TR} respectively.
\begin{itemize}
\item \textbf{Option 1: No Control.} Let $R1 \land_{\{any\}} R2$ be the requirement that $R1$ and $R2$ should each be met at times when they are not in conflict; but there is no requirement that any conflicts should be resolved, and if there are times when conflicts occur, any emergent behaviour is acceptable. For example, the window might sometimes oscillate in a partly open position.
\item \textbf{Option 2: Exclusion.} Let $R1 \land_{\{control\}} R2$ be the requirement that both $R1$ and $R2$ should hold at all times, except that while the system is actively attempting to satisfy $R1$, $R2$ need not be satisfied, and vice versa. The exclusion here is symmetrical. For example, $SR$ might not be satisfied while $TR$ is keeping the window open, and $TR$ might not be satisfied while $SR$ is keeping the window shut.
\item \textbf{Option 3: Exclusion with Priority.} Let $R1 \land_{\{R1\}} R2$ be the requirement that both $R1$ and $R2$ should hold at all times, except that while the system is attempting to satisfy $R1$, $R2$ need not be satisfied. The exclusion here is asymmetrical in favour of $R1$.
\item \textbf{Option 4: Exclusion \& Fine Grain Priority.} Let $R1 \land_{\{important,R1\}} R2$ be the requirement that $R1 \land_{\{R1\}} R2$ holds, except that any sub-requirement associated with the phenomenon \textit{important} should be given top priority.
\end{itemize}
Fig. 5 shows how \textit{SR} and \textit{TR} may be recomposed with the Composition Frame. This diagram is a product of a simple syntactic transformation involving two steps. First, we introduced a new machine, the Composition Controller, between the machine domain Security Feature (SF) and the world domains (Time Panel and Window) in Fig. 1. The original machine domain (SF) became a world domain in the new diagram, and the phenomena a and b were split by insertion of the new machine. Now, Time Panel, for example, reports to the new machine (phenomena a prefixed by the Time Panel domain TiP) and the new machine may pass it on to the SF domain (phenomena a' prefixed by the composition controller CC). The same transformation was also applied to the problem diagram in Fig. 2. Second, the resulting two diagrams were merged to give the diagram in Fig. 5.
We also added the Prohibit(α, τ1, τ2) events to the phenomena b' and b". These prohibit events will be generated on the basis of the Prohibit(α, τ1, τ2) predicates in our feature specification. The composition controller will interpret them, possibly acting on them and possibly ignoring them, in order to resolve conflicts.
We will now specify four versions of the composition controller in Fig. 5 that meet the composition requirement RC as described by each of the conjunction operators (Options 1-4). To choose a resolution of the requirement conflict between SR and TR is to choose the appropriate composition controller.
**Composition Controller for $SR \land_{\{any\}} TR$.** The semantics of the first type of composition operator is straightforward. We use a simple formalism to describe the semantics of the controller, in which → should be read as stating that the composition controller generates the event on the right when the event on the left happens.
Fig. 6. The semantics of $SR \land_{\{any\}} TR$

Definitions (1 to 4) in Fig. 6 say that the events from Time Panel, Temperature Panel, Out Temp Sensor and In Temp Sensor are passed to the SF and CCF domains respectively, without prohibition. Similarly, in (5 and 6), the events from SF and CCF are propagated to the window without prohibition. That is, all of the prohibit events transmitted in the interfaces $b'$ and $b''$ to the composition controller are ignored. Since the controller applies no prohibition on events generated by the domains, in particular by the SF and CCF domains, any emergent behaviour of the window is possible. For example, if SF has generated tiltIn to shut the window, and as a result the window is closing, and in the meantime the CCF domain generates the tiltOut event to open the window, the composition controller will allow CCF to open the window.
In order to address the other composition operators, it is necessary for the composition controller to remember and act on some of the prohibit events it has received. For this purpose, some additional, but quite minimal, machinery is required. Let $P$ be a set that holds tuples of the form $(e, t_1, t_2, m)$, each representing an assertion that event $e$ is prohibited by the specification of machine $m$ between times $t_1$ and $t_2$. We now allow the $\rightarrow$ to be guarded by an optional predicate (enclosed in square brackets following the first operand). In the following specifications of the composition controller, we assume that no machine can prohibit another machine from issuing a prohibit event.
**Composition Controller for $SR \land_{\{control\}} TR$.** The controller semantics for dealing with events generated by world domains (1 to 4) applies to this controller. Definitions (5.a to 5.d) and (6.a to 6.d) replace (5) and (6) respectively. Note that $t$ in the expression $t_1 \leq t \leq t_2$ in Fig. 7 denotes the current time.
Definitions 1 to 4 and the following:
\[
\begin{align*}
\text{b'}:\text{prohibit}(e, t_1, t_2) & \rightarrow \text{insert}((e, t_1, t_2, \text{'SF'}), P) \quad (5.a)\\
\text{b'}:e\; & [\neg\exists t_1, t_2, m \cdot t_1 \leq t \leq t_2 \land m \neq \text{'SF'} \land (e, t_1, t_2, m) \in P] \rightarrow \text{b}:e \quad (5.b)\\
\text{b'}:e\; & [\exists t_1, t_2, m \cdot t_1 \leq t \leq t_2 \land m \neq \text{'SF'} \land (e, t_1, t_2, m) \in P] \rightarrow \text{ignore} \quad (5.c)\\
\text{b'}:e\; & [\exists t_1, t_2 \cdot t_1 \leq t \leq t_2 \land (e, t_1, t_2, \text{'SF'}) \in P] \rightarrow \text{error} \quad (5.d)\\
\text{b''}:\text{prohibit}(e, t_1, t_2) & \rightarrow \text{insert}((e, t_1, t_2, \text{'CCF'}), P) \quad (6.a)\\
\text{b''}:e\; & [\neg\exists t_1, t_2, m \cdot t_1 \leq t \leq t_2 \land m \neq \text{'CCF'} \land (e, t_1, t_2, m) \in P] \rightarrow \text{b}:e \quad (6.b)\\
\text{b''}:e\; & [\exists t_1, t_2, m \cdot t_1 \leq t \leq t_2 \land m \neq \text{'CCF'} \land (e, t_1, t_2, m) \in P] \rightarrow \text{ignore} \quad (6.c)\\
\text{b''}:e\; & [\exists t_1, t_2 \cdot t_1 \leq t \leq t_2 \land (e, t_1, t_2, \text{'CCF'}) \in P] \rightarrow \text{error} \quad (6.d)
\end{align*}
\]
Fig. 7. The semantics of $SR \land_{\{control\}} TR$
Controller semantics (5.a) says that when the domain SF issues a prohibition on the event $e$ between $t_1$ and $t_2$, the composition controller records the assertion by adding a tuple to $P$. When SF issues any other event, the controller passes the event on to the window domain only if the event has not been prohibited by another machine for that time (5.b); otherwise the event is ignored (5.c). If a self-prohibition happens, an error is generated (5.d). Definitions (6.a to 6.d) describe the controller dealing with events from CCF in a similar fashion. In effect, this controller gives the SF and CCF domains mutually exclusive control of the window domain over a period of time.
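This bookkeeping can be sketched in a few lines of Python (our illustration; the names, the precedence between the error and ignore cases, and the event encoding are assumptions, not a transcription of the paper):

```python
# Composition controller for SR ∧{control} TR (illustrative sketch).
# P records prohibitions as (event, t1, t2, machine) tuples.

P = set()

def on_prohibit(event, t1, t2, machine):
    """(5.a)/(6.a): record a prohibition issued via b' or b''."""
    P.add((event, t1, t2, machine))

def on_event(event, machine, now):
    """(5.b)-(5.d)/(6.b)-(6.d): forward, ignore, or flag an error."""
    if any(e == event and t1 <= now <= t2 and m == machine
           for (e, t1, t2, m) in P):
        return "error"                      # self-prohibition
    if any(e == event and t1 <= now <= t2 and m != machine
           for (e, t1, t2, m) in P):
        return "ignore"                     # prohibited by the other machine
    return ("forward", event)               # pass on to the window domain

on_prohibit("tiltOut", 0, 20, "SF")         # SF keeps the window shut until time 20
print(on_event("tiltOut", "CCF", 10))       # ignore
print(on_event("tiltIn", "CCF", 10))        # ('forward', 'tiltIn')
print(on_event("tiltOut", "SF", 10))        # error
```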
**Composition Controller for SR \( \land_{\{SR\}} \) TR.** The semantics of this controller differs from the previous one in one respect: since events from the prioritized machine SF should not be prohibited, (5.b to 5.d) are not necessary. (5.a) is needed in order that SF can prohibit events and (5) is added in order that SF events are passed on to the window domain unprohibited, thus giving SF events precedence over events from CCF. CCF events are handled in the same way as before (6.a to 6.d).
**Composition Controller for SR \( \land_{\{emgOpenWindow,SR\}} \) TR.** Assume that SF and CCF can open the window in emergency situations (for example, if a fire is detected in the house) by firing the emgOpenWindow event. Again, the semantics of this controller differs from the previous in one respect: since the prioritized event, emgOpenWindow, from the CCF machine should not be prohibited, (6.e) is added. (5) already allows the emgOpenWindow event from the SF machine to pass unprohibited.
\[
\text{b''}:\text{emgOpenWindow} \rightarrow \text{b}:\text{emgOpenWindow} \quad (6.e)
\]
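Rule (6.e) amounts to a bypass for the prioritised event: it reaches the window regardless of the prohibition set. A minimal sketch (ours; the names and structure are hypothetical):

```python
# Fine-grain priority: the prioritised event bypasses all prohibitions.
P = {("tiltOut", 0, 20, "SF")}   # SF prohibits opening during the night

def on_ccf_event(event, now, prioritised=("emgOpenWindow",)):
    if event in prioritised:
        return ("forward", event)           # (6.e): never prohibited
    if any(e == event and t1 <= now <= t2 and m != "CCF"
           for (e, t1, t2, m) in P):
        return "ignore"                     # (6.c)
    return ("forward", event)               # (6.b)

print(on_ccf_event("tiltOut", 10))          # ignore
print(on_ccf_event("emgOpenWindow", 10))    # ('forward', 'emgOpenWindow')
```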
It is easy to see that there is nothing in the above composition controller semantics that refers directly to the machine specifications or requirements of the sub-problems. If we treat Fig. 5 as a composition pattern, then the controller we have specified is actually generic, and can be applied to any requirements R1 and R2 that can be specified using the Event Calculus of Section 2.2.
### 5 Related Work
Our work is related, first and foremost, to the feature interaction problem, common in the field of telecommunications [16,27], as well as in other domains such as email [13]. In particular it is found in application domains where feature interactions are manifest in the environment rather than inside the software [17]. While less ambitious about the extent to which requirements can be composed, our work is also less domain-specific. In [28], work is presented on the conjunction of specifications as composition in a way that addresses multiple specification languages, but the emphasis is less on the relationship between requirements and specifications. Nakamura et al. [21] propose an object-oriented approach to detecting feature interactions in services of home appliances. However, their approach uses a design-time, rather than run-time, technique.
The whole area of inconsistency management offers a variety of contributions to dealing with inconsistencies in specifications [9,10,11]. Robinson [24], in particular, reviews a variety of techniques for requirements interaction management and Nuseibeh et al [23] discuss a range of ways of acting in the presence of inconsistency. None of these approaches address the decomposition and recomposition of requirements to facilitate problem solving.
A number of formal approaches exist where emergent behaviours due to composition can be identified and controlled [1,7]. Our approach differs from these in that we identify how requirements interact and remove non-deterministic behaviour by imposing priorities over the requirements set.
In [8], a run-time technique for monitoring requirements satisfaction is presented. This approach is taken further in [6], where requirements are monitored for violations and system behaviour dynamically adapted, whilst making acceptable changes to the requirements to meet higher-level goals. This requires that alternative system designs be represented at run-time. One view of our approach is that it involves the monitoring of when a requirement leads to a machine taking control (including event prohibition) and the taking of appropriate action. Our approach differs further, in that it is more lightweight: we do not need to maintain alternative system designs at run-time.
In [15] we sketched some options in composing a sluice gate control machine with a safety machine in order to address safety concerns. That was in the context of a more philosophical discussion of composition and decomposition. The work presented in this paper differs in that we embody the composition as a separate extra machine. This gives us the potential to deal with a wider range of compositions.
The Event Calculus has previously been used in software development for reasoning about evolving specifications [5,25] and distributed systems policy specifications [2]. Our work should be seen as complementary to such approaches in that it will allow inconsistencies to be resolved at run-time.
Finally, our approach is strongly related to the mutual exclusion problem of concurrent resource usage, but with an explicit emphasis on requirements satisfaction.
6 Discussion
In solution-space terms, composition controllers correspond to the notion of an architectural connector [1]. This allows us to move backwards and forwards between architectural and requirements perspectives using the Composition Frame as a reasoning tool.
We now consider how our work can be generalized, alternative composition semantics, and the significance of the work.
It is well understood that in producing a machine to solve a real-world problem there is often a need to implement an analogic model [14] of at least part of
the problem domain. Arriving at a conceptual model that can subsequently be implemented is often difficult in itself. In the case of the SF and CCF machines, the models are very simple. This is partly because of the domain assumption that the window is robust. If the window is less robust, it is necessary to explicitly model the position of the window. Composing machines containing such models can be complex because the model in one machine may become inconsistent with the world, due to the world being changed by another machine.
It is not difficult to see how the Composition Frame can be generalized to any two machines with a common domain under their control. In the specification we used the notion of a particular machine being in control of the window, including passive partial control specified using the Prohibit(α, τ₁, τ₂) predicate. The same technique should be usable with any two machines.
Although our Composition Frame in this example deals with two problems fitting a type of problems called the Required Behaviour Frame, it is easy to see that it would generalize to composing two problems fitting other basic Problem Frames [14] in a similar fashion. For example, in [20] we demonstrate how to compose two problems fitting the Required Behaviour and Commanded Behaviour frames.
Whilst much work has been done on protocols for controlling mutual access to resources in program code, less attention seems to have been paid to the problem of systematically gaining control over domains in the real world [14]. Working explicitly with the notion of a machine being in control at certain times, and the use of a temporal semantics, allows us to express the concerns at the requirements stage. In particular, our requirements composition operators make the issue of control explicit.
7 Conclusions and Future Work
We have shown how by expressing requirements and domain properties in a temporal logic we can formally derive feature specifications. In itself this refinement style approach is not new. However, we have placed it in the context of a development process based on Problem Frames. The value of this is that in making the properties of the application domain explicit, we increase our confidence that the specified machine will meet the system requirements. Furthermore, by adding the Prohibit(α, τ₁, τ₂) predicate to the Event Calculus and making use of it in machine specifications we have obtained an important new element in our toolbox for composing solutions to feature subproblems. The composition controller needs only to be parameterized and the composition is done non-intrusively in the sense that we have made no changes to the specifications of the machines being composed. We have illustrated this through the application of our approach to an awning window control system in a smart home application.
We have also shown how to combine two inconsistent requirements in terms of the operators given in Section 4. The Composition Frame allowed us to reason about the relationship between sub-solutions and sub-requirements. We were
able to specify composition at a requirements level rather than solely in design or implementation terms.
We believe that our approach is scalable, as composition controllers have a simple semantics. Although the specification is in terms of set operations, it would be simple to bound the size of these sets in practice and to implement them efficiently.
Future work is planned to formalize the relationship between our requirements composition operators, the Problem Frames for sub-problems, and the composition requirements. We also need to address a wider range of compositions, both in terms of the options in Section 4 and across a larger set of basic Problem Frames. In a large Problem Frames development, sub-parts of domains and amalgamations of domains can appear in different frames. Related to this is the need to apply the approach to more significant case studies. It might be possible to develop patterns for particular domain areas. Given the use of formal derivations of machine specifications, we are developing a reasoning tool to automate our approach in order to support its use in larger systems.
8 Acknowledgements
We are grateful for the support of our colleagues at The Open University, in particular, Arosha Bandara, Leonor Barroca, Charles Haley, Jon Hall, Lucia Rapanotti and Michel Wermelinger, and Alexandra Russo of Imperial College. We also acknowledge the financial support of EPSRC for this research.
References
(8) DATES
SAS has numerous informats for reading dates and formats for displaying dates. Dates can be read with numeric, character, or date informats. If you read a date as a character variable, you will not be able to use it in any calculations, e.g. to calculate the number of days that separate two given dates. If you read a date with an ordinary numeric informat, SAS will store the date as a number, but not as a value that can be used to calculate time differences. However, if you use one of the SAS date informats, the date is stored as a numeric variable that can be used in calculations.
No matter how a date might appear in a raw data file, there probably is a SAS informat that will allow it to be read and stored as a numeric variable. The following shows a series of SAS informats that can be used to read a specified date (March 15, 1994) stored in a number of different forms...
<table>
<thead>
<tr>
<th>RAW DATA</th>
<th>INFORMAT</th>
</tr>
</thead>
<tbody>
<tr>
<td>031594</td>
<td>mmddyy6.</td>
</tr>
<tr>
<td>940315</td>
<td>yymmdd6.</td>
</tr>
<tr>
<td>03/15/94</td>
<td>mmddyy8.</td>
</tr>
<tr>
<td>94/03/15</td>
<td>yymmdd8.</td>
</tr>
<tr>
<td>03 15 94</td>
<td>mmddyy8.</td>
</tr>
<tr>
<td>94 03 15</td>
<td>yymmdd8.</td>
</tr>
<tr>
<td>03151994</td>
<td>mmddyy8.</td>
</tr>
<tr>
<td>15mar94</td>
<td>date7.</td>
</tr>
<tr>
<td>03/15/1994</td>
<td>mmddyy10.</td>
</tr>
<tr>
<td>15mar1994</td>
<td>date9.</td>
</tr>
</tbody>
</table>
Once a date is read with a date informat, SAS stores the date as a number, i.e. the number of days between the date and January 1, 1960 (3/15/1994 is stored as 12492). This permits the use of dates in calculations. For example, imagine you were given a data file that contained a date of admission to a hospital and a date of discharge. You might be interested in computing the length of stay.
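Since a SAS date is just this day count, the stored value can be cross-checked outside SAS. As an illustration (Python, not part of the SAS examples), the standard datetime module reproduces the number quoted above:

```python
from datetime import date

# SAS stores a date as the number of days since January 1, 1960
sas_epoch = date(1960, 1, 1)

def sas_date_value(d):
    """Day count a SAS date informat would store for calendar date d."""
    return (d - sas_epoch).days

print(sas_date_value(date(1994, 3, 15)))  # 12492, as stated above
```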
...Example 8.1...
```sas
data patients;
informat admit disch mmddyy8.; 1
format admit disch date9.; 2
input admit disch; 3
los = disch - admit; 4
label
los = 'LENGTH OF STAY'
admit = 'ADMIT DATE'
disch = 'DISCHARGE DATE'
;
datalines;
03/21/90 04/01/90
05/15/90 05/16/90
;
run;
proc print data=patients label; 5
run;
```
Obs ADMIT DATE DISCHARGE DATE LENGTH OF STAY
1 21MAR1990 01APR1990 11
2 15MAY1990 16MAY1990 1
1 An INFORMAT statement tells SAS to read the variables ADMIT and DISCH with a DATE informat.
2 A FORMAT statement tells SAS to display the two variables as dates, not simply as numeric data.
3 LIST input is used to read the two dates.
4 A new variable is created, LOS, that is the difference in days between DISCH and ADMIT.
5 The output from PROC PRINT shows that the variables DISCH and ADMIT were treated as dates and that a difference in days was calculated correctly.
In addition to using an INFORMAT to tell SAS to treat the variables ADMIT and DISCH as dates, a FORMAT is used to control how the dates will be displayed. Without a format, the dates would be displayed as numbers...
```sas
proc print data=patients label;
* temporarily remove formats from variables;
format _all_; run;
```
<table>
<thead>
<tr>
<th>Obs</th>
<th>ADMIT DATE</th>
<th>DISCHARGE DATE</th>
<th>LENGTH OF STAY</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>11037</td>
<td>11048</td>
<td>11</td>
</tr>
<tr>
<td>2</td>
<td>11092</td>
<td>11093</td>
<td>1</td>
</tr>
</tbody>
</table>
The numbers shown under ADMIT and DISCHARGE are the number of days each date is from 1/1/1960. The calculation of LOS is still correct.
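Those unformatted values are easy to verify, since they are plain day counts. A quick Python sketch (for illustration only) confirms both numbers and the length of stay:

```python
from datetime import date

sas_epoch = date(1960, 1, 1)  # SAS day-count origin
admit = date(1990, 3, 21)
disch = date(1990, 4, 1)

print((admit - sas_epoch).days)  # 11037
print((disch - sas_epoch).days)  # 11048
print((disch - admit).days)      # 11, the length of stay
```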
Just as SAS has many INFORMATs for reading dates, it also has a wide selection of FORMATs for displaying dates, e.g. March 15, 1994 can be displayed as follows...
<table>
<thead>
<tr>
<th>FORMAT</th>
<th>DISPLAY</th>
</tr>
</thead>
<tbody>
<tr>
<td>mmddyy.</td>
<td>03/15/94</td>
</tr>
<tr>
<td>mmddyy6.</td>
<td>031594</td>
</tr>
<tr>
<td>mmddyy8.</td>
<td>03/15/94</td>
</tr>
<tr>
<td>yymmdd.</td>
<td>94-03-15</td>
</tr>
<tr>
<td>yymmdd6.</td>
<td>940315</td>
</tr>
<tr>
<td>yymmdd8.</td>
<td>94-03-15</td>
</tr>
<tr>
<td>date7.</td>
<td>15MAR94</td>
</tr>
<tr>
<td>date9.</td>
<td>15MAR1994</td>
</tr>
<tr>
<td>weekdate.</td>
<td>TUESDAY, MARCH 15, 1994</td>
</tr>
<tr>
<td>weekdatx.</td>
<td>TUESDAY, 15 MARCH 1994</td>
</tr>
<tr>
<td>worddate.</td>
<td>MARCH 15, 1994</td>
</tr>
</tbody>
</table>
The last three formats are right-aligned within their default widths, so they can be displayed with leading blanks, e.g. using WORDDATE. results in five leading blanks. Example 8.1 used the DATE9. format to control the display of the variables DISCH and ADMIT.
In addition to the informats and formats that SAS supplies for reading and writing dates, there are also a number of SAS functions that can be used to work with dates.
...Example 8.2...
```sas
proc format;
value dayofwk
1 = "SUNDAY" 2 = "MONDAY" 3 = "TUESDAY"
4 = "WEDNESDAY" 5 = "THURSDAY" 6 = "FRIDAY" 7 = "SATURDAY";
run;
data function;
format dob future now mmddyy10. dob_dow dayofwk.;
dob = '15jul75'd;
dob_dow = weekday(dob);
future = mdy(12,31,2010);
now = today();
agethen = (future - dob) / 365.25;
agenow = (now - dob) / 365.25;
age_then = yrdif(dob,future,'actual');
age_now = yrdif(dob,now,'actual');
label
dob = "BIRTHDATE"
dob_dow = "DAY OF WEEK (BIRTH DATE)"
future = "DAY IN THE FUTURE"
now = "TODAY"
agethen = "AGE IN THE FUTURE (OLD WAY)"
agenow = "AGE NOW (OLD WAY)"
age_then = "AGE IN THE FUTURE (YRDIF)"
age_now = "AGE NOW (YRDIF)"
;
run;
proc print data=function label;
run;
```
<table>
<thead>
<tr>
<th>BIRTHDATE</th>
<th>DAY IN THE FUTURE</th>
<th>TODAY</th>
<th>DAY OF WEEK (BIRTH DATE)</th>
<th>AGE IN THE FUTURE (OLD WAY)</th>
<th>AGE NOW (OLD WAY)</th>
<th>AGE IN THE FUTURE (YRDIF)</th>
<th>AGE NOW (YRDIF)</th>
</tr>
</thead>
<tbody>
<tr>
<td>07/15/1975</td>
<td>12/31/2010</td>
<td>04/02/2001</td>
<td>TUESDAY</td>
<td>35.4634</td>
<td>25.7166</td>
<td>35.4630</td>
<td>25.7151</td>
</tr>
</tbody>
</table>
1 PROC FORMAT is used to create a format that will allow the printing of a literal day of the week given a number in the range 1 to 7.
2 FORMATS are assigned to several variables that are dates.
3 A date variable can be created if the date is expressed as shown (day/month/year, with a 3-character literal month) and the date is in quotes followed by the letter d.
4 The day of the week of any given day can be determined via the WEEKDAY function - the function returns a number in the range of 1 to 7 (Sunday-to-Saturday). The number is a numeric variable.
5 A date variable can also be created via the MDY function if the month, day, and year are supplied.
6 The current date can be assigned to a variable using the TODAY() function.
7 New variables (AGETHEN, AGENOW) are created. These are ages in years, computed by dividing a difference in days by 365.25.
8 A SAS function, YRDIF, is used to create new variables similar to those created in step #7. The YRDIF function requires three arguments: the starting date of an interval; the ending date of an interval; the method to be used to calculate the interval. To get a true difference in years, use the word ACTUAL in quotes as shown in the example.
The output from PROC PRINT shows all the variables in the data set. The variables created with the YRDIF function are not too different from the old method of calculating the age in years.
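The "old way" figures are plain day-count arithmetic and can be reproduced outside SAS. This Python sketch (an illustration, not part of the course examples) recomputes the AGETHEN value shown in the output:

```python
from datetime import date

dob = date(1975, 7, 15)
future = date(2010, 12, 31)

# the "old way": difference in days divided by the average year length
agethen = (future - dob).days / 365.25
print(round(agethen, 4))  # 35.4634, matching the PROC PRINT output
```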
You can use the date-literal notation shown in step #3 of example 8.2 to group observations by date ranges with a user-written format. The next example demonstrates this capability, together with some SAS-supplied formats that can be used to group observations.
Introduction to SAS®
Mike Zdeb (402-6479, msz03@albany.edu)
...Example 8.3...
```sas
proc format; 1
value interval
'01jan1997'd - '30jun1997'd = '1ST HALF 1997'
'01jul1997'd - '31dec1997'd = '2ND HALF 1997'
'01jan1998'd - '30jun1998'd = '1ST HALF 1998'
'01jul1998'd - '31dec1998'd = '2ND HALF 1998'
; run;
data admits; 2
input admit1 : mmddyy10. @@;
admit2 = admit1; admit3 = admit1; admit4 = admit1;
admit5 = admit1; admit6 = admit1; admit7 = admit1;
label
admit1 = 'USER-WRITTEN FORMAT'
admit2 = 'YEAR FORMAT'
admit3 = 'MONTH FORMAT'
admit4 = 'MONYY7 FORMAT'
admit5 = 'QTR (QUARTER) FORMAT'
admit6 = 'MONNAME (MONTH NAME) FORMAT'
admit7 = 'DOWNAME (DAY OF WEEK NAME) FORMAT'
;
datalines;
01181998 02111998 02161998 02171998 02271998 03291998 04181998 05081998
05071998 05101998 06031998 08021998 08131998 07241998 08151998 10011998
01081997 01251997 02041997 02171997 03071997 02181997 03161997 03281997
03301997 03271997 04031997 04271997 05311997 06071997 06131997 05311997
05311997 06181997 06161997 06201997 07141997 08071997 07131997 08071997
09051997
;
run;
proc freq data=admits; 3
table admit1-admit7;
format
admit1 interval.
admit2 year.
admit3 month.
admit4 monyy7.
admit5 qtr.
admit6 monname.
admit7 downame.
;
run;
```
**USER-WRITTEN FORMAT**
<table>
<thead>
<tr>
<th>admit1</th>
<th>Frequency</th>
<th>Percent</th>
<th>Cumulative Frequency</th>
<th>Cumulative Percent</th>
</tr>
</thead>
<tbody>
<tr>
<td>1ST HALF 1997</td>
<td>19</td>
<td>47.50</td>
<td>19</td>
<td>47.50</td>
</tr>
<tr>
<td>2ND HALF 1997</td>
<td>5</td>
<td>12.50</td>
<td>24</td>
<td>60.00</td>
</tr>
<tr>
<td>1ST HALF 1998</td>
<td>11</td>
<td>27.50</td>
<td>35</td>
<td>87.50</td>
</tr>
<tr>
<td>2ND HALF 1998</td>
<td>5</td>
<td>12.50</td>
<td>40</td>
<td>100.00</td>
</tr>
</tbody>
</table>
**YEAR FORMAT**
<table>
<thead>
<tr>
<th>admit2</th>
<th>Frequency</th>
<th>Percent</th>
<th>Cumulative Frequency</th>
<th>Cumulative Percent</th>
</tr>
</thead>
<tbody>
<tr>
<td>1997</td>
<td>24</td>
<td>60.00</td>
<td>24</td>
<td>60.00</td>
</tr>
<tr>
<td>1998</td>
<td>16</td>
<td>40.00</td>
<td>40</td>
<td>100.00</td>
</tr>
</tbody>
</table>
**MONTH FORMAT**
<table>
<thead>
<tr>
<th>admit3</th>
<th>Frequency</th>
<th>Percent</th>
<th>Cumulative Frequency</th>
<th>Cumulative Percent</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>3</td>
<td>7.50</td>
<td>3</td>
<td>7.50</td>
</tr>
<tr>
<td>2</td>
<td>7</td>
<td>17.50</td>
<td>10</td>
<td>25.00</td>
</tr>
<tr>
<td>3</td>
<td>6</td>
<td>15.00</td>
<td>16</td>
<td>40.00</td>
</tr>
<tr>
<td>4</td>
<td>3</td>
<td>7.50</td>
<td>19</td>
<td>47.50</td>
</tr>
<tr>
<td>5</td>
<td>5</td>
<td>12.50</td>
<td>24</td>
<td>60.00</td>
</tr>
<tr>
<td>6</td>
<td>6</td>
<td>15.00</td>
<td>30</td>
<td>75.00</td>
</tr>
<tr>
<td>7</td>
<td>3</td>
<td>7.50</td>
<td>33</td>
<td>82.50</td>
</tr>
<tr>
<td>8</td>
<td>5</td>
<td>12.50</td>
<td>38</td>
<td>95.00</td>
</tr>
<tr>
<td>9</td>
<td>1</td>
<td>2.50</td>
<td>39</td>
<td>97.50</td>
</tr>
<tr>
<td>10</td>
<td>1</td>
<td>2.50</td>
<td>40</td>
<td>100.00</td>
</tr>
</tbody>
</table>
1 A format is created that groups dates into four intervals.
2 The data set ADMITS is created with seven variables, admit1-admit7, all having the same value.
3 Various formats are used to group observations based on the value of a variable that contains a date.
Two functions can be used to work with date intervals, INTCK and INTNX. The INTCK function computes the number of intervals between any two given dates, with the interval being either day, week, month, qtr, or year. The INTNX function allows you to specify the time interval (same choices as with INTCK), a starting date, and the number of intervals you would like to cross; SAS then returns a date. These functions are not discussed here in any detail.
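As a rough illustration of what INTCK computes for the month interval, here is a Python sketch (the helper name intck_month is made up). Note that INTCK counts interval boundaries crossed, so two dates one day apart can still be one "month" apart:

```python
from datetime import date

def intck_month(start, end):
    """Number of month boundaries crossed between two dates,
    mimicking SAS intck('month', start, end)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

print(intck_month(date(1998, 1, 31), date(1998, 2, 1)))   # 1 (one boundary crossed)
print(intck_month(date(1997, 3, 15), date(1998, 3, 15)))  # 12
```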
...YEARCUTOFF
When a year is expressed with only two digits, how does SAS know what century to use when it creates a numeric date value? There is a SAS system option called YEARCUTOFF and its value determines how two-digit years are evaluated. The default value of this option in version 8 is 1920. Any two-digit year is assumed to occur on or after 1920. If that value is changed via an OPTIONS statement, SAS will use the new value.
...Example 8.4...
```sas
options yearcutoff=1900; 1
data test;
format dt weekdate.;
dt = mdy(01,12,10); 2
label dt = 'YEARCUTOFF 1900';
run;
proc print data=test label noobs;
run;
options yearcutoff=1920; 3
data test;
format dt weekdate.;
dt = mdy(01,12,10);
label dt = 'YEARCUTOFF 1920';
run;
proc print data=test label noobs;
run;
```
YEARCUTOFF 1900
Wednesday, January 12, 1910
YEARCUTOFF 1920
Tuesday, January 12, 2010
1 The YEARCUTOFF option for two-digit dates is set to 1900. Any two-digit date is considered as occurring on or after 1900.
2 The MDY function is used to create a date variable.
3 The YEARCUTOFF option is changed to 1920. Any two-digit date is considered as occurring on or after 1920.
You can see the difference in how SAS treats the variable DT depending on the value of YEARCUTOFF.
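SAS is not alone in needing such a pivot. For comparison (an aside, not a SAS feature), Python's strptime applies a fixed POSIX pivot to two-digit years, playing the role that YEARCUTOFF plays in SAS:

```python
from datetime import datetime

# Python's %y pivot is fixed (POSIX convention):
# two-digit years 00-68 map to 2000-2068, 69-99 map to 1969-1999
print(datetime.strptime("01/12/10", "%m/%d/%y").year)  # 2010
print(datetime.strptime("01/12/70", "%m/%d/%y").year)  # 1970
```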
...LONGITUDINAL DATA
There is a feature of a sorted data set that allows you to find the first and/or last observation in a sequence using a data step. You already know that you read a SAS data set with a SET statement. You also know about BY-GROUP processing from working with grouped data. A combination of a SET statement and a BY statement within a data step gives you access to two SAS-created variables.
...Example 8.5...
data manyids;
input id : $2. visit : mmddyy8.;
format visit mmddyy8.;
datalines;
01 01/05/89
01 05/18/90
01 11/11/90
01 02/18/91
02 12/25/91
03 01/01/90
03 02/02/91
04 05/15/91
04 08/20/91
04 03/23/92
04 07/05/92;
run;
proc sort data=manyids; by id visit;
run;
data oneid;
set manyids;
by id;
if first.id then output;
run;
proc print data=oneid;
run;
OBS ID VISIT
1 01 01/05/89
2 02 12/25/91
3 03 01/01/90
4 04 05/15/91
1 A data set is created containing two variables, ID and VISIT (a date variable).
2 The data set is sorted by ID and by VISIT (date) within each ID.
3 A new data set is created. The data are read with a SET/BY combination. Using BY ID; creates a new TEMPORARY variable named FIRST.ID that can be used within the data step.
4 The new data set contains only the observation with the first VISIT within each ID.
A new, SAS-created, temporary variable in example 8.5 is FIRST.ID. When you use a SET statement in combination with a BY statement, SAS will create a new variable for each by-variable. There is only one by-variable (ID), so SAS created FIRST.ID. That is not the entire story of SET/BY since SAS also creates a LAST.ID (a two-for-one deal), i.e. for every variable in the BY statement, SAS will create a FIRST.<by-variable> and a LAST.<by-variable>. The variables only exist for the duration of the data step and do not become part of any SAS dataset.
FIRST. and LAST. variables take on only two values, 1 or zero. The following are the values of FIRST.ID and LAST.ID when the sorted version of the dataset MANYIDS is used in the data step shown in example 8.5...
<table>
<thead>
<tr>
<th>ID</th>
<th>FIRST.ID</th>
<th>LAST.ID</th>
</tr>
</thead>
<tbody>
<tr><td>01</td><td>1</td><td>0</td></tr>
<tr><td>01</td><td>0</td><td>0</td></tr>
<tr><td>01</td><td>0</td><td>0</td></tr>
<tr><td>01</td><td>0</td><td>1</td></tr>
<tr><td>02</td><td>1</td><td>1</td></tr>
<tr><td>03</td><td>1</td><td>0</td></tr>
<tr><td>03</td><td>0</td><td>1</td></tr>
<tr><td>04</td><td>1</td><td>0</td></tr>
<tr><td>04</td><td>0</td><td>0</td></tr>
<tr><td>04</td><td>0</td><td>0</td></tr>
<tr><td>04</td><td>0</td><td>1</td></tr>
</tbody>
</table>
You can see that the first observation in a given sequence results in a FIRST. value of 1, while the last observation results in a LAST. value of 1. If an observation is both the first and last observation in a sequence (i.e. the only one as with ID 02), both the FIRST. and LAST. values are 1. If an observation is not the first or the last in a sequence, both the FIRST. and LAST. values are 0. When you use a statement such as that in example 8.5...
```sas
if first.id then output;
```
you are asking SAS to evaluate whether the value of the FIRST. variable is 1 or zero. If it is 1, then SAS performs the task. You could write the above statement in a number of different ways. All would result in the same data set being created....
```sas
if first.id eq 1 then output;
if first.id;
if not first.id then delete;
```
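The effect of the first.id filter can be imitated outside SAS with a grouped scan. This Python sketch (illustration only, using a few of the MANYIDS rows) keeps the first row of each sorted ID group, like `if first.id then output;`:

```python
from itertools import groupby

# a few of the sorted MANYIDS rows: (id, visit)
rows = [("01", "01/05/89"), ("01", "05/18/90"),
        ("02", "12/25/91"),
        ("03", "01/01/90"), ("03", "02/02/91")]

# keep the first observation in each BY group (rows must be sorted by id)
first_per_id = [next(grp) for _, grp in groupby(rows, key=lambda r: r[0])]
print(first_per_id)  # [('01', '01/05/89'), ('02', '12/25/91'), ('03', '01/01/90')]
```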
How could you change example 8.5 to create a data set with the last VISIT rather than the first?
...Example 8.6...
```sas
proc sort data=manyids;
by id descending visit;
run;
data oneid;
set manyids;
by id;
if first.id then output;
run;
proc print data=oneid;
run;
```
<table>
<thead>
<tr>
<th>OBS</th>
<th>ID</th>
<th>VISIT</th>
</tr>
</thead>
<tbody>
<tr>
<td>001</td>
<td>01</td>
<td>02/18/91</td>
</tr>
<tr>
<td>002</td>
<td>02</td>
<td>12/25/91</td>
</tr>
<tr>
<td>003</td>
<td>03</td>
<td>02/02/91</td>
</tr>
<tr>
<td>004</td>
<td>04</td>
<td>07/05/92</td>
</tr>
</tbody>
</table>
Since the data are sorted in descending date order within each ID, the first observation within each ID group is the last VISIT. You could also modify the data step instead of the sort.
...Example 8.7...
proc sort data=manyids;
by id visit;
run;
data oneid;
set manyids;
by id;
if last.id then output;
run;
proc print data=oneid;
run;
OBS ID VISIT
1 01 02/18/91
2 02 12/25/91
3 03 02/02/91
4 04 07/05/92
What if your task was to determine how long any individual in the dataset MANYIDS had been part of your study, i.e. what is the difference in days between the first and last visits? This can be done in a number different ways. One involves match-merging data sets (that's for later in the semester). The following only requires one sort, one data step, plus the use of RETAIN and DO-END statements.
...Example 8.8...
proc sort data=manyids;
by id visit;
run;
data duration;
retain firstvis;
set manyids;
by id;
if first.id then firstvis=visit;
if last.id then do;
diffdays = visit - firstvis;
output;
end;
run;
proc print data=duration;
var id firstvis visit diffdays;
format firstvis visit mmddyy8.;
run;
OBS ID FIRSTVIS VISIT DIFFDAYS
1 01 01/05/89 02/18/91 774
2 02 12/25/91 12/25/91 0
3 03 01/01/90 02/02/91 397
4 04 05/15/91 07/05/92 417
The IF FIRST.ID THEN... statement tells SAS to store the value of the variable VISIT as variable FIRSTVIS when the first observation within a given ID is encountered. The RETAIN statement tells SAS to hold onto the value assigned to FIRSTVIS rather than set it back to missing each time SAS cycles back to the top of the data step. Remember, the default behavior of SAS within a data step is to set the value of MOST (not all) variables to missing each time SAS reaches the top of the data step. The RETAIN statement can selectively alter that behavior. The DO-END statement allows you to perform multiple actions. Since the DO-END statement is embedded in an IF-THEN statement, SAS will perform multiple actions if the IF-THEN statement is TRUE. When the last observation within an ID group is read, the date of the first visit is subtracted from the date of the last visit and an observation is written to the data set by using an OUTPUT statement.
What if in addition to the VISIT data, you also had another variable that measured some characteristic of an individual at each visit, e.g., cholesterol, that you hoped was changing over time as the result of some study intervention. How could you modify the data step in example 8.8 to determine both the difference in days and cholesterol values between first and last visits?
...Example 8.9...
data manyids;
input id : $2. visit : mmddyy8. chol;
format visit mmddyy8.;
datalines;
01 01/05/89 400
01 05/18/90 350
01 11/11/90 305
01 02/18/91 260
02 12/25/91 200
03 01/01/90 387
03 02/02/91 380
04 05/15/91 380
04 08/20/91 370
04 03/23/92 355
04 07/05/92 261
;
run;
proc sort data=manyids;
by id visit;
run;
data twodiffs;
retain firstvis firstcho;
format firstvis mmddyy8.;
set manyids;
by id;
if first.id then do;
firstvis=visit;
firstcho=chol;
end;
if last.id then do;
diffdays = visit - firstvis;
diffchol = chol - firstcho;
output;
end;
run;
proc print data=twodiffs;
var id firstvis visit diffdays firstcho chol diffchol;
format firstvis mmddyy8.;
run;
In example 8.8, the value of the variable VISIT was stored when the first observation within an ID group was read. In example 8.9, both the date (the value of the variable VISIT) and the initial cholesterol reading are stored. When the last observation in an ID group is read, the difference in days between the first and last visit, and the difference in cholesterol can be calculated.
If you read a data set with a combination of SET and BY and use more than one by variable, you create more than one pair of first.<var> and last.<var> variables. Each variable listed in the BY statement results in a pair of first.<var> and last.<var> variables.
...Example 8.10...
data test;
input x y z;
datalines;
1 1 100
1 1 110
1 1 120
2 1 10
3 1 99
1 2 200
1 2 210
3 2 199
1 3 300
1 3 310;
run;
proc sort data=test;
by x y;
run;
data first_last;
set test;
by x y;
firstx = first.x;
firsty = first.y;
lastx = last.x;
lasty = last.y;
run;
proc print data=first_last;
var x y z firstx lastx firsty lasty;
run;
<table>
<thead>
<tr>
<th>Obs</th>
<th>x</th>
<th>y</th>
<th>z</th>
<th>firstx</th>
<th>lastx</th>
<th>firsty</th>
<th>lasty</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
<td>100</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>1</td>
<td>110</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>1</td>
<td>120</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>2</td>
<td>200</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>1</td>
<td>2</td>
<td>210</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>6</td>
<td>1</td>
<td>3</td>
<td>300</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>7</td>
<td>1</td>
<td>3</td>
<td>310</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>8</td>
<td>2</td>
<td>1</td>
<td>10</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>9</td>
<td>3</td>
<td>1</td>
<td>99</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>10</td>
<td>3</td>
<td>2</td>
<td>199</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
1 A data set is created that will be used to show first.<var> and last.<var> values for two by variables, X and Y.
2 SET with a BY statement requires that data be sorted according to the BY variable(s).
3 A SET statement in combination with a BY statement is used to read the data set. There are two BY variables.
4 Since first.<var> and last.<var> variables are temporary (i.e. they only exist during the execution of the data step), new variables are created to hold their values for subsequent printing.
5 The output from PROC PRINT shows the values of all the first.<var> and last.<var> variables.
If you look at the values of the variable X, you can see that the values of the first.X and last.X are what you have already seen in the previous examples. However, notice that the value of first.Y and last.Y cycle within each value of X.
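If it helps to see that rule spelled out, here is a small Python sketch (not SAS) of how the four flags are computed for data already sorted by X and Y; the function name is invented for illustration. The key point is that the inner variable's flags also turn on whenever the outer variable changes:

```python
def by_flags(rows):
    """Return (first_x, last_x, first_y, last_y) as 0/1 for each (x, y)
    pair in rows, which must already be sorted by x then y --
    mimicking SAS first.<var>/last.<var> for two BY variables."""
    out = []
    n = len(rows)
    for i, (x, y) in enumerate(rows):
        first_x = i == 0 or rows[i - 1][0] != x
        last_x = i == n - 1 or rows[i + 1][0] != x
        # the inner variable's flags reset whenever the outer one changes
        first_y = first_x or rows[i - 1][1] != y
        last_y = last_x or rows[i + 1][1] != y
        out.append((int(first_x), int(last_x), int(first_y), int(last_y)))
    return out
```

Running this on the ten sorted (x, y) pairs from example 8.10 reproduces the cycling you see in the PROC PRINT output.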
There are a large number of SAS functions that allow you to work with data WITHIN a single observation. Remember that SAS PROCs work WITHIN variables/ACROSS observations, while SAS functions work ACROSS variables/WITHIN observations. A number of SAS functions have already been introduced. In the section on DATES, several functions were shown that could be used to work with date (numeric) variables, e.g. MDY, WEEKDAY, TODAY, YRDIF. One feature common to all SAS functions is that they are followed by a set of parentheses that contain zero or more arguments. The arguments (sometimes referred to as parameters) are information needed by the function to produce a result. The TODAY function requires no argument, just the parentheses, to return the value of the current date. The WEEKDAY function requires one argument, a SAS date. The YRDIF function requires three arguments. Just as SAS variables can be classed as either NUMERIC or CHARACTER, SAS functions can be divided into those that are used with numeric variables and those that are used with character variables. Only a few functions can be used with either type of data.
**NUMERIC FUNCTIONS**
SAS numeric functions can be further divided into categories that describe the action of the function. The SAS language manual divides numeric functions into the following categories: arithmetic, date and time (dates are numeric variables), financial, mathematical (or 'more complicated arithmetic'), probability, quantile, random number, simple statistic, special, trigonometric and hyperbolic (no exaggeration), truncation. The following example uses grades for 29 students who took a series of exams.
```sas
...Example 9.1...
data midterm (drop=g1-g29);
input type $7. g1-g29;
sumgr = sum(of g1-g29);
mingr = min(of g1-g29);
maxgr = max(of g1-g29);
meangr = mean(of g1-g29);
medgr = median(of g1-g29);
stdgr = std(of g1-g29);
vargr = var(of g1-g29);
missgr = nmiss(of g1-g29);
nmbgr = n(of g1-g29);
meangr = round(meangr,.1);
stdgr = round(stdgr,.1);
vargr = round(vargr,.1);
label
type = 'TEST'
mingr = 'MINIMUM'
maxgr = 'MAXIMUM'
sumgr = 'SUM'
medgr = 'MEDIAN'
meangr = 'MEAN'
stdgr = 'STANDARD DEVIATION'
vargr = 'VARIANCE'
missgr = '# MISSING'
nmbgr = '# NON-MISSING'
;
datalines;
QUIZ1 6 8 5 10 9 10 8 9 9 7 10 10 10 10 8 10 10 10 10 10 . 10 8 10 10 10 10
QUIZ2 10 10 2 10 2 10 10 10 10 10 10 10 10 10 10 10 10 10 . 10 10 10 10 10 10
MIDTERM 9 2 0 9 10 10 8 9 9 3 9 9 9 10 2 10 9 9 8 10 . 8 10 9 10 8
QUIZ3 23 5 5 20 25 15 25 25 25 18 18 2 18 18 22 20 22 15 18 23 22 20 25 . 20 18 25 25 20 22
;
run;
proc print data=midterm label noobs;
var type nmbgr missgr sumgr meangr stdgr vargr mingr maxgr medgr;
run;
```
INTRODUCTION TO SAS
Mike Zdeb (402-6479, msz03@albany.edu) #95
<table>
<thead>
<tr>
<th>TEST</th>
<th># MISSING</th>
<th># NON-MISSING</th>
<th>SUM</th>
<th>MEAN</th>
<th>STANDARD DEVIATION</th>
<th>VARIANCE</th>
<th>MINIMUM</th>
<th>MAXIMUM</th>
<th>MEDIAN</th>
</tr>
</thead>
<tbody>
<tr>
<td>QUIZ1</td>
<td>28</td>
<td>1</td>
<td>253</td>
<td>9.0</td>
<td>1.5</td>
<td>2.2</td>
<td>5</td>
<td>10</td>
<td>10</td>
</tr>
<tr>
<td>QUIZ2</td>
<td>27</td>
<td>2</td>
<td>254</td>
<td>9.4</td>
<td>2.1</td>
<td>4.6</td>
<td>2</td>
<td>10</td>
<td>10</td>
</tr>
<tr>
<td>MIDTERM</td>
<td>28</td>
<td>1</td>
<td>219</td>
<td>7.8</td>
<td>2.8</td>
<td>7.8</td>
<td>0</td>
<td>10</td>
<td>9</td>
</tr>
<tr>
<td>QUIZ3</td>
<td>25</td>
<td>4</td>
<td>305</td>
<td>12.2</td>
<td>6.1</td>
<td>37.3</td>
<td>2</td>
<td>20</td>
<td>12</td>
</tr>
<tr>
<td>QUIZ4</td>
<td>28</td>
<td>1</td>
<td>538</td>
<td>19.2</td>
<td>6.1</td>
<td>37.6</td>
<td>2</td>
<td>25</td>
<td>20</td>
</tr>
<tr>
<td>FINAL</td>
<td>27</td>
<td>2</td>
<td>568</td>
<td>21.0</td>
<td>3.3</td>
<td>11.2</td>
<td>15</td>
<td>25</td>
<td>22</td>
</tr>
</tbody>
</table>
1. Since only the new values computed in the data step are to be placed in the data set, the values of the individual grades are dropped.
2. Several arithmetic and statistical functions are used to compute the values of new variables.
3. The ROUND function is used to round the value of three variables to one decimal place.
Naming the grade variables G1 through G29 made it very simple to place the values of the grades as arguments in the various functions. It might not be obvious, but each of the functions that used grades 1 through 29 ignored all grades with a missing value, just as would have been done if the data had been rearranged and analyzed with PROC MEANS. Both SAS functions and SAS procedures ignore missing values when computing arithmetic or statistical values. The N (and NMISS) functions allow you to determine how many values were used in the computations. The output from PROC PRINT shows the mean, standard deviation, and variance of the grades for each question rounded to one decimal place. These values were stored in the data set in lieu of keeping values with many decimal places. If the ROUND function had not been used, but the following format placed in the data step or PROC PRINT...
```
format meangr stdgr vargr 6.1;
```
the same values would have been printed, but the stored values of the variables would still have many decimal places. The ROUND function actually changes the stored value, while a FORMAT only affects the appearance of the variable value.
There are several other functions that alter the stored value of a numeric variable. They differ in the values that are returned for positive versus negative numbers.
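The stored-value-versus-display distinction is worth seeing concretely. Here is a rough Python sketch (not SAS) of the difference, where rounding replaces the value but formatting only changes what is printed:

```python
stored = 9.0357142857                 # a mean with many decimal places
rounded = round(stored * 10) / 10     # like:  meangr = round(meangr, .1);
displayed = f"{stored:6.1f}"          # like:  format meangr 6.1;

# 'rounded' is a new value with one decimal place, but 'stored' is
# untouched by the formatting -- it still carries all its decimals.
print(rounded)
print(displayed)
```

If you later compute with the variable, SAS uses the stored value, not the formatted one, which is exactly why the two approaches can give different downstream results.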
```
...Example 9.2...
data alter;
input x;
ceil_x = ceil(x);
floor_x = floor(x);
int_x = int(x);
round_x = round(x,1.);
y = x;
format y 6.;
label
x = 'ORIGINAL VALUE X'
y = 'FORMATTED VALUE OF X'
ceil_x = 'CEILING'
floor_x = 'FLOOR'
int_x = 'INTEGER'
round_x = 'ROUND';
datalines;
7.5
-7.5
8.5
-8.5
;
run;
proc print data=alter noobs label;
var x y int_x round_x ceil_x floor_x;
run;
```
<table>
<thead>
<tr>
<th>ORIGINAL VALUE X</th>
<th>FORMATTED VALUE OF X</th>
<th>INTEGER</th>
<th>ROUND</th>
<th>CEILING</th>
<th>FLOOR</th>
</tr>
</thead>
<tbody>
<tr>
<td>7.5</td>
<td>8</td>
<td>7</td>
<td>8</td>
<td>8</td>
<td>7</td>
</tr>
<tr>
<td>-7.5</td>
<td>-8</td>
<td>-7</td>
<td>-8</td>
<td>-7</td>
<td>-8</td>
</tr>
<tr>
<td>8.5</td>
<td>9</td>
<td>8</td>
<td>9</td>
<td>9</td>
<td>8</td>
</tr>
<tr>
<td>-8.5</td>
<td>-9</td>
<td>-8</td>
<td>-9</td>
<td>-8</td>
<td>-9</td>
</tr>
</tbody>
</table>
The results of the ceiling and floor functions depend on whether the value of a variable is positive or negative. The integer and round functions work the same regardless of sign. The formatted value of the variable Y is the same as the rounded value, but Y is still stored as 7.5, -7.5, etc. since it is stored with the same values as X (remember, Y=X in the data step).
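If you want to check these results outside SAS, the four functions are easy to imitate in Python. One caution: Python's built-in round() rounds halves to the even integer, so the half-away-from-zero rule that SAS ROUND uses has to be written out by hand (the helper names below are invented for this sketch):

```python
import math

def sas_int(x):
    """Like SAS INT: drop the fractional part (truncate toward zero)."""
    return math.trunc(x)

def sas_round(x):
    """Like SAS ROUND(x, 1): round, with halves going away from zero."""
    return int(math.floor(abs(x) + 0.5)) * (1 if x >= 0 else -1)

# one row per test value: (x, INT, ROUND, CEIL, FLOOR)
rows = [(x, sas_int(x), sas_round(x), math.ceil(x), math.floor(x))
        for x in (7.5, -7.5, 8.5, -8.5)]
for row in rows:
    print(row)
```

Notice that INT and ROUND are symmetric in sign, while CEIL always moves toward plus infinity and FLOOR toward minus infinity, matching the table above.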
There are a number of ways to express the arguments required by functions that require the values of a number of numeric variables. In example 9.1, the convention of naming variables <name>1 through <name>N made it easy to place a large number of variable values as a function argument.
...Example 9.3...
```sas
data test;
input gradeone gradetwo gradethr;
mean_one = mean(of gradeone gradetwo gradethr);
mean_two = mean(of gradeone--gradethr);
datalines;
80 80 95
;
run;
proc print data=test noobs;
run;
```
gradeone gradetwo gradethr mean_one mean_two
80 80 95 85 85
The name of each variable can be placed in the argument list. If you know the order of the variables in the data set, the convention of specifying a list with two variable names separated by two dashes (--) can be used. Remember that using <start var>--<end var> depends on the order of variables within the data set. If you are not sure of the order, you can see what it is by using PROC CONTENTS and looking at the first column (which shows the position of each variable in the data set).
As stated earlier, a function will ignore missing values. What if you use a function and the value of each variable in the argument list is missing? The value returned by the function will also be missing. To avoid this, you can specify the following...
sum_one = sum (of gradeone--gradethr, 0);
Adding a zero to the argument list ensures that the value returned by the function will never be missing, even if the value of each variable in the argument list is missing. Whether you want the function to result in missing or zero is up to you.
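The missing-versus-zero behavior is easy to imitate. Here is a Python sketch (not SAS) where None stands in for a SAS missing value and the function name is invented for illustration:

```python
def sas_sum(*args):
    """Mimic the SAS SUM function: missing values (None) are ignored;
    if every argument is missing, the result is itself missing (None)."""
    present = [a for a in args if a is not None]
    return sum(present) if present else None

print(sas_sum(80, None, 95))   # missing values are simply skipped
print(sas_sum(None, None))     # all missing -> missing
print(sas_sum(None, None, 0))  # the extra 0 guarantees a non-missing result
```

The last call shows the trick from the text: one constant argument is enough to keep the list from ever being entirely missing.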
Abstract
We propose new techniques for efficient breadth-first iterative manipulation of ROBDDs. Breadth-first iterative ROBDD manipulation can potentially reduce the total elapsed time by multiple orders of magnitude compared to the conventional depth-first recursive algorithms when the memory requirement exceeds the available physical memory. However, the breadth-first manipulation algorithms proposed so far [5] have had a large enough overhead associated with them to make them impractical. Our techniques are geared towards minimizing the overhead without sacrificing the speed up potential. Experimental results indicate considerable success in that regard.
1 Drawbacks of Conventional DF Recursive ROBDD Manipulation
There is a need today for manipulating ROBDDs with tens to hundreds of millions of nodes which cannot be met by means of conventional depth-first (DF) recursive algorithms. There are two good reasons why DF recursive algorithms have been the algorithms of choice for ROBDD manipulation until now. One is that the recursive formulation for ROBDD manipulation [2] lends itself naturally to a compact depth-first recursive implementation. An outline of such an implementation (from [1]) of the ite(F, G, H) operation is illustrated in Figure 1.^1 In addition, the DF recursive paradigm has been exploited [1] to eliminate the temporary creation of redundant ROBDD nodes by performing the isomorphism check on the nodes on the fly: a new node is created only if a node with the same attributes does not already exist (Line 12 in Figure 1). However, the use of DF algorithms has its downside for very large ROBDDs arising from an extremely disorderly memory access pattern [5].
The depth-first approach is characterized by the fact that a new ite computation request with some top-variable can be issued only after the final results of all the previous ite requests with the same top-variable are known. It is apparent from Figure 1 that successive memory accesses correspond to successive nodes on paths in the ROBDDs F, G and H. Given that a typical node in a large ROBDD generally has a large indegree, it is impossible to ensure that an arbitrary pair of nodes next to each other on some path in an ROBDD are located at contiguous memory addresses or even in the same page.^2 The latency of fetching a page from secondary storage is multiple orders of magnitude greater than that of fetching a word from main memory. With current technology, a page fetch takes on the order of 10 ms. If the process size exceeds the available main memory, the part of the process that is needed immediately can be moved from secondary storage to main memory only at the expense of moving some part of it out from main memory to secondary storage. In the case of ROBDD manipulation, if the ROBDDs that are being traversed are too large to fit in main memory, it is unlikely that the desired node will ever be in main memory. Therefore, each time an ROBDD node is visited, the complete page containing the node must be fetched from secondary storage in the worst case. This would cause hundreds of millions of page faults for ROBDDs of the size we are interested in, making it virtually impossible to manipulate or create them using DF algorithms.
2 BF Iterative ROBDD Manipulation
Ochi et al. [5] have proposed that the disorderly memory access pattern can be corrected by the use of breadth-first (BF) iterative algorithms for ROBDD manipulation. Essentially, instead of the ROBDD operations being executed path-by-path, they are executed level-by-level, where each level is associated with a specific variable index in the ROBDD. A direct side-effect of the BF approach is that the isomorphism check mentioned above can no longer be done on the fly, and it becomes necessary to temporarily generate redundant nodes. But the consequent overhead incurred by the generation of redundant nodes is small compared to the orders of magnitude of savings in run time resulting from the regular memory access pattern. A major and fundamental drawback of their algorithm is that "pad nodes" need to be added to the ROBDD so that successive nodes on any path in the new BDD differ in their index by exactly 1. Since successive nodes along a path in the original ROBDD can differ in their index by an arbitrary amount, a large number of pad nodes may have to be added. We have implemented their algorithm and find that the pad nodes can increase the node count by multiple factors for many circuits. This drawback manifests itself in two ways: (1) it significantly increases the run time, since the pad nodes are treated like the original nodes and must be fetched from memory; (2) it considerably limits the size of ROBDDs that can be built given an address space limit. We find that the pad node approach is an impractical solution for manipulating large ROBDDs.
Our contribution has been to propose a BF algorithm that avoids the need for pad nodes. The algorithm achieves this with a negligible penalty in CPU time and an insignificant perturbation of the regular memory access pattern. Our experiments indicate that for some large industrial circuits with more than 10K gates, for which our algorithm finishes in about 1 hour of total elapsed time, our faithful implementation of the algorithm of Ochi et al. does not complete because it exhausts more than 1 gigabyte of secondary storage. On some circuits for which our algorithm finishes in about 10 minutes, the pad node approach requires many hours. Our algorithm consistently runs faster than the pad node approach by multiple factors on circuits for which both approaches finish. Our approach is machine independent and has been ported without modification to SPARC, SGI and NEC EWS based machines.
---
^1 For basic ROBDD related terminology and the recursive formulation of ROBDD manipulation, please refer to [1].
^2 In UNIX memory management, reads and writes from secondary storage are always in units of one page. While the page size may depend on the environment, a page is usually a 4KB block of memory located at 4KB boundaries in UNIX on current processors.
3 Basic Algorithm for BF ROBDD Manipulation
In this section, we first describe how ROBDD manipulation can be performed in a BF manner, and highlight the requirements (labeled as Problems 1, 2 and 3 in Sections 3.1 and 3.2) that need to be met in order to ensure locality in the memory access pattern. We then present our own solution and compare it to the approach of Ochi et al. [5].
The basic outline of an algorithm for the BF computation of ITE is shown in Figures 2, 3 and 4. The same code with minor modifications can be used to execute in a BF manner any Boolean operation with an arbitrary number of arguments. The first phase of the algorithm, the apply phase, is where the result BDD is created. The essential difference between the DF and BF approaches is that in the BF approach a new ITE computation request with some top-variable is issued before the final results of all previous ITE requests with the same top-variable are known. As a result, isomorphism check cannot be done on the fly, and the result BDD may contain redundant nodes. The second phase of the algorithm, the reduce phase, removes redundant nodes from the BDD and generates the final ROBDD. Let us analyze the memory access patterns generated during apply and reduce.
3.1 Memory Access Pattern During BF Apply
The basic operation during the apply phase (Figure 3) is the top down (from the root variable to the leaves) processing of outstanding requests to compute the ITE of ROBDD triples. In general, two new ITE requests are issued each time an ITE request is processed. The result for a new request is directly available if a terminal case is encountered. Otherwise, a new node is allocated for a new request if an identical request has not already been issued in the past. Processing an ITE request requires that the root node of each of its argument ROBDDs be fetched if the top variable of that ROBDD is the same as the top variable of the argument triple.
The essence of the BF algorithm is that the outstanding ITE requests are processed strictly in increasing order of their top variable indices. This implies that the outstanding ITE requests that have the same top variable index are processed consecutively. In turn, this means that there is temporal locality in the access of the ROBDD nodes corresponding to a given variable index. Now if we can ensure that the ROBDD nodes for each variable index are stored in contiguous locations in memory, the temporal locality translates to spatial locality. This ability to introduce spatial locality in the memory accesses during ROBDD creation is the fundamental reason for using the BF approach.
A leveled request queue enables the processing of outstanding ITE requests in appropriate order. One queue is created per variable index. Each time a new request is generated, it is placed in the appropriate queue corresponding to the top variable of its argument triple. Obviously, a new request can only be placed in a queue with index greater than the current index. The queues themselves are processed in the order of increasing variable index. Two more critical problems must be resolved to ensure the absence of randomness in the access pattern.
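The leveled request queue can be sketched as follows. This is a simplified Python illustration of the scheduling discipline only, not the authors' implementation; the function names and the toy request graph are ours:

```python
from collections import deque

def process_levels(max_index, initial, expand):
    """Process requests strictly in increasing order of variable index.
    initial: list of (index, request) pairs seeding the queues.
    expand(request) yields (next_index, new_request) pairs; a request
    may only spawn requests at a strictly larger index."""
    queues = [deque() for _ in range(max_index + 1)]  # one queue per index
    for idx, req in initial:
        queues[idx].append(req)
    processed = []
    for idx in range(max_index + 1):      # queues handled in index order
        while queues[idx]:
            req = queues[idx].popleft()
            processed.append((idx, req))
            for nxt, new_req in expand(req):
                queues[nxt].append(new_req)  # always a deeper level
    return processed

# toy request graph: processing r0 spawns r1 at level 1 and r2 at level 2;
# processing r1 spawns r3 at level 2
children = {"r0": [(1, "r1"), (2, "r2")], "r1": [(2, "r3")]}
order = process_levels(2, [(0, "r0")], lambda r: children.get(r, []))
```

Because each level's requests are drained consecutively, all accesses to nodes of one variable index are grouped in time, which is precisely the temporal locality the text describes.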
3.1.1 The Problem of Computing Variable Indices
Problem (1) One issue is the computation of the top-variable index. Variable index computation represents a problem because the variable index associated with an ROBDD node is normally stored as an entry in the node structure itself. Consequently, the variable index for a node cannot be computed without fetching the node
bf_ITE(F, G, H)
1.  if (terminal_case(F, G, H)) {
2.      return result;
3.  } else {
4.      bf_ITE_apply(F, G, H);
5.      R = bf_ITE_reduce();
6.      return R;
7.  }
Figure 2: Outline of BF ROBDD Manipulation
    bf_ITE_apply(F, G, H)
    {
    1.  min_index = determine_top_variable_index(F, G, H);
    2.  R = new_bdd_node(min_index);
    3.  create_new_request(F, G, H, R);
    4.  add (F, G, H, R) to queue[min_index];
    5.  for (index = min_index; index <= max_index; index++) {
    6.      x = the variable corresponding to index;
    7.      do {
    8.          (F, G, H, R) = fetch next request from queue[index];
                /* positive cofactor (THEN branch) */
    9.          if (terminal_case(F_x, G_x, H_x)) {
    10.             R->THEN = result;
    11.         } else {
    12.             next_index = determine_top_variable_index(F_x, G_x, H_x);
    13.             if (request corresponding to (F_x, G_x, H_x) already occurs in queue[next_index]) {
    14.                 fetch THEN node corresponding to (F_x, G_x, H_x) from queue[next_index];
    15.                 R->THEN = THEN;
    16.             } else {
    17.                 THEN = new_bdd_node(next_index);
    18.                 R->THEN = THEN;
    19.                 create_new_request(F_x, G_x, H_x, THEN);
    20.                 add (F_x, G_x, H_x, THEN) to queue[next_index];
    21.             }
    22.         }
                /* negative cofactor (ELSE branch) */
    23.         if (terminal_case(F_x', G_x', H_x')) {
    24.             R->ELSE = result;
    25.         } else {
    26.             next_index = determine_top_variable_index(F_x', G_x', H_x');
    27.             if (request corresponding to (F_x', G_x', H_x') already occurs in queue[next_index]) {
    28.                 fetch ELSE node corresponding to (F_x', G_x', H_x') from queue[next_index];
    29.                 R->ELSE = ELSE;
    30.             } else {
    31.                 ELSE = new_bdd_node(next_index);
    32.                 R->ELSE = ELSE;
    33.                 create_new_request(F_x', G_x', H_x', ELSE);
    34.                 add (F_x', G_x', H_x', ELSE) to queue[next_index];
    35.             }
    36.         }
    37.     } while queue[index] is not empty;
    38. }
    }

Here F_x and F_x' denote the positive and negative cofactors of F with respect to x.
Figure 3: Outline of BF Apply
We follow the convention that the variable index increases from the root to leaves.
3.1.2 The Problem of Checking for Duplicate Requests

Problem (2) The second critical issue manifests itself on Lines 13 and 27 in Figure 3. It concerns accessing the queue associated with a newly issued request. Before a new request is issued, it must first be checked whether an identical request has been issued in the past. This check is performed by a lookup in the queue with index next\_index. If a duplicate request exists in that queue, it must be fetched; if not, a new request must be issued and inserted into the queue. There is no restriction on next\_index except that it be greater than the current index index.
In addition, there is no relationship between the top-variable indices for successively issued requests. This lack of relationship creates the potential for randomness in the memory access pattern here. Ideally, we would like the lookups into the queues to be done in the order of increasing index.
Our approach differs from that of Ochi et al. in how it solves these two problems, computing variable indices and checking for duplicate requests, without introducing randomness into the memory accesses. It does so with a significantly lower penalty in additional memory usage, at the expense of a negligible overhead in CPU time.
3.2 Memory Access Pattern During BF Reduce
The reduce phase (Figure 4) removes redundant nodes from the BDD by doing a bottom-up traversal of the BDD nodes. A redundant BDD node is a node with identical THEN and ELSE nodes, or a node such that another node with identical attributes already exists in the unique table. The corresponding checks are performed in Lines 11 and 13 in Figure 4. If a node is found to be redundant, it is forwarded to the node that should take its place. In terms of programming, an easy way (also suggested in [5]) to implement the forwarding is the following: Say that R is the node to be forwarded to R'. Set R -> E to some predefined constant, and set R -> T to R'. To determine if a node R has been forwarded, one first checks R -> E for the predefined value.
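The forwarding convention just described can be sketched in Python as follows. The names (`Node`, `forward`, `resolve`) and the sentinel object are ours; the point is only that the marking uses the ELSE field while the replacement travels in the THEN field:

```python
FORWARDED = object()   # stand-in for the predefined constant

class Node:
    """Minimal stand-in for an ROBDD node with THEN/ELSE fields."""
    def __init__(self, then_=None, else_=None):
        self.then_ = then_
        self.else_ = else_

def forward(r, r2):
    # Redirect r to its replacement r2: sentinel in ELSE, target in THEN.
    r.else_ = FORWARDED
    r.then_ = r2

def is_forwarded(r):
    return r.else_ is FORWARDED

def resolve(r):
    # Follow forwarding pointers until a live node is reached.
    while is_forwarded(r):
        r = r.then_
    return r
```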
As in the case of apply, the nodes to be processed are accessed from the levelized queue, but in the order of decreasing variable index. Therefore, if the nodes belonging to the same level are stored in contiguous memory locations, we have spatial locality of address when fetching these nodes. Even so, there is still potential for randomness in the memory access pattern as described below:
3.2.1 The Problem of Checking for Node Forwarding
Problem (3): The first step in the processing of a node, say R, involves checking if R -> T and R -> E have been forwarded (Lines 5 to 10 in Figure 4). If, say, R -> T has been forwarded, then it must be reassigned to the node to which it has been forwarded. Given the way the forwarding of nodes is implemented (as indicated in the previous paragraph), checking if R -> T has been forwarded requires that R -> T be fetched from memory. Since the index for R -> T can be arbitrarily greater than the index for R, and since there is no relationship between the variable indices of two nodes checked for forwarding one after the other, this check introduces randomness in the memory access pattern. This potential for random access must be removed if we don’t want the performance of the algorithm to degrade rapidly once the BDD sizes reach a certain point. Ideally, all the checks for forwarding of nodes belonging to a given level should be done consecutively. As in the case of apply, our solution to this problem differs from the solution of Ochi et al.
3.3 The Pad Node Solution
The common cause of Problems 1, 2 and 3 in Sections 3.1 and 3.2 is that the index of a child node can be arbitrarily greater than the index of its parent. In their solution, Ochi et al. [5] proposed introducing additional nodes into the ROBDD until the index of each child node is either exactly one plus the index of its parent node, or the child is a terminal node. The solution is simple but naive: it potentially increases the memory requirement by a large factor in order to remove the randomness in memory access. In practice, the increased memory requirement nullifies the advantage of the regular memory access.
4 Our BF Approach
In this section, we demonstrate that orderly page access during BF ROBDD manipulation can be achieved using the basic BF algorithms outlined in Section 3 with a few enhancements, and without the need for pad nodes and their associated overheads. The key ideas that make this possible are (1) a new way of determining the variable index of an ROBDD node, and (2) appropriate sorting of the variables and nodes to be processed at a given level during apply and reduce, respectively. These ideas are described in some detail below.
4.1 Determining the Variable Index of an ROBDD Node
We know from earlier sections that in order to ensure spatial locality in memory accesses, we must ensure that ROBDD nodes with the same top-variable index are stored in contiguous memory locations. To make this possible, the memory manager must be able to allocate memory in the form of appropriately sized blocks with each block being associated with a particular variable index. Memory for a new ROBDD node is allocated from within the block associated with the variable index of the node. An additional block is allocated for a variable index when all previously allocated blocks for that index are filled up.
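A minimal Python model of this per-level block allocation is sketched below, assuming the 256-nodes-per-4-KByte-block figure derived later in this section; the class and method names are hypothetical:

```python
BLOCK_CAPACITY = 256   # 4096-byte block / 16-byte node

class LevelAllocator:
    """Per-variable-index block allocator: same-level nodes stay together."""

    def __init__(self):
        self.blocks = {}   # variable index -> list of per-block fill counts

    def allocate(self, var_index):
        blocks = self.blocks.setdefault(var_index, [0])
        if blocks[-1] == BLOCK_CAPACITY:
            blocks.append(0)          # current block full: open another
        blocks[-1] += 1
        # Identify the slot as (block number within level, offset in block).
        return len(blocks) - 1, blocks[-1] - 1
```

A fresh block is opened for a level only when all of that level's previous blocks are full, exactly as described above.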
A key side effect of such an organization of ROBDD nodes in memory is that given the address of a node, one can easily determine the block of memory to which it belongs and thereby also easily determine its variable index. Note that this way, the variable index of the node is determined directly from the address (pointer) of the node. The node itself does not need to be fetched from memory. This ability to determine the variable index without fetching the ROBDD node from memory enables us to solve Problem 1 described in Section 3.1.1, and hence removes the first bottleneck to ensuring orderly page access during BF manipulation.
Of course, this method of computing the variable index is not completely free of overhead. But we show in this section that the overhead is small enough that it can be neglected for all practical purposes. Since the goal of the BF approach is to maximally utilize each page access, a block size of one page (4 KBytes on most current UNIX systems) is used. Note that since we can determine the variable index directly from the address of a node, we no longer need the corresponding field in the ROBDD node structure. The remaining fields in the node structure are (1) REFERENCE_COUNT, (2) THEN, (3) ELSE, and (4) NEXT. The REFERENCE_COUNT field maintains a count of the fanins to the node, the THEN and ELSE fields store pointers to the THEN and ELSE nodes, respectively, and the NEXT field stores a pointer to another node with the same variable index and is used to maintain the unique table as described in [1]. Each of these fields is 4 bytes wide, making the total size of each ROBDD node 16 bytes. A 4 KByte block therefore accommodates 256 ROBDD nodes, so fetching a page from secondary storage brings 256 ROBDD nodes into main memory.
The variable index is computed in the following manner: Given a 32 bit address space (corresponding to 4 GBytes of maximum per process addressable memory as provided by most microprocessors), a 4 KByte block size implies that there can be at most 1 M blocks at any given time. In other words, the higher 20 bits of the address of a node determine the block to which the node belongs. The correspondence between a block and the variable index to which it corresponds is maintained by means of a table (call it the block-index table) with 1 M entries. This table can be located anywhere with the restriction that the 1 M entries be contiguous. A table with only 1 M entries is small enough that it is relatively easy to find room for it. In addition, given its small size and the large number of times that variable indices need to be computed, the table is almost guaranteed to always remain in main memory and never get swapped out to secondary storage. To compute the variable index, the ROBDD node address is first shifted to the right until the 20 bits identifying the block occupy the appropriate positions. These shifted 20 bits are now used as an offset address to index into the block-index table to fetch the variable index. On a typical CPU architecture, the right shift requires one instruction, and adding an offset to the base address requires another instruction. Therefore, our approach requires two instructions in addition to the actual memory fetch.
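The shift-and-lookup scheme can be sketched as follows, with node addresses modeled as plain integers. `register_block` and `var_index_of` are hypothetical names for the two steps described above:

```python
PAGE_SHIFT = 12                  # 4 KByte blocks
NODE_BYTES = 16                  # REFERENCE_COUNT + THEN + ELSE + NEXT
NODES_PER_BLOCK = (1 << PAGE_SHIFT) // NODE_BYTES   # = 256

# Block-index table: entry b holds the variable index of every node
# stored in block b (1 M entries for a 32-bit address space).
block_index_table = [0] * (1 << 20)

def register_block(block_addr, var_index):
    block_index_table[block_addr >> PAGE_SHIFT] = var_index

def var_index_of(node_addr):
    # One shift plus one (almost certainly resident) table fetch;
    # the node itself is never touched.
    return block_index_table[node_addr >> PAGE_SHIFT]
```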
4.1.1 Overhead of Index Computation
How much more expensive is it to compute the variable index in this manner rather than by a pointer indirection assuming that the appropriate ROBDD node is already in main memory? In the pointer indirection method, if the index field is the first field in the ROBDD node structure, obtaining it would require a memory fetch with no offset computation. In such a case, our method requires two non-memory instructions more than the index determination by indirection. If the index field is not the first field in the node structure, then obtaining it by pointer indirection requires one instruction for adding an offset to the top address of the structure prior to the memory fetch. In this case, our method requires one instruction more than index computation by pointer indirection. Therefore, in the worst case, we need to pay a penalty of only two non-memory instructions to determine the variable index from the block-index table. Given that the latency of a memory fetch is much higher than the latency of a shift or an add instruction, the two additional instructions correspond to a very low real-time penalty in practice. What this means is that in creating small ROBDDs that fit completely in main memory, our BF approach will not be slowed down by our method of computing variable indices compared to the conventional depth-first approach, everything else being equal.
Now consider the penalty of our way of computing the variable index compared to the BF approach using pad nodes. No index computation is required when using the pad node approach. Using the basic BF approach outlined in Figures 3 and 4, index computation is required at most 9 times in each iteration of the control loop in the apply phase. These 9 index computations correspond to 18 additional non-memory instructions and 9 additional memory fetches (from the block-index table), which are practically guaranteed to be served from main memory. These 27 additional instructions are an insignificant fraction of the total number of instructions for the rest of the loop. This leads us to conclude that, even discounting the overhead of using additional nodes for padding, computing the variable indices from the block-index table results in an insignificant run time penalty compared to the pad node method.
4.2 Sorted Processing of Requests During Apply
The next problem to be resolved is the bottleneck associated with checking for duplicate requests during apply (Problem 2 in Section 3.1.2). Given a queue of requests to process at the current level, our goal is to remove the randomness in page access. To achieve this, we process the requests at the current level in the order of the increasing variable indices of the two new requests issued from each of them. This is done in the following manner: In the first pass through the current request queue, requests whose new requests belong to the level immediately below are processed immediately. Other requests are stored in an array of lists, with each list corresponding to a level below the current level. After the first pass is complete, the requests in the lists in the newly created array are processed in order of increasing level. Note that since there is no relationship even between the top-variable indices of the two new requests issued from the same request, a request may appear in two lists at the same time: one for the new request corresponding to the positive cofactor, and the other for the negative cofactor. Processing the requests in this manner ensures that all the lookups into a particular queue are done consecutively, thereby removing the randomness.
The randomness is removed at the cost of doing more than one pass through the current request queue. Even so, the effective number of passes is only between 1 and 2, depending on the number of requests that get processed in the first pass itself. Also, the new array of lists is at most equal in size to the number of levels, and is therefore very small. The creation of a new list per level does not cost any memory, since the requests were already in a list before (the list corresponding to the queue to which they belong). They just need to be removed from the original list and put in a new list. A request may need to be duplicated if the top-variable indices of the two new requests issued from it (corresponding to the positive and negative cofactors) are different. The duplication is required since the request must be placed simultaneously in the two lists corresponding to the two top-variable indices. In spite of the potential for some duplicate requests, this approach is superior to the pad node approach where, effectively, n copies of a request are created if the actual top-variable index of a newly created request is n levels below the current level.
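The two-pass discipline can be sketched as follows in Python. The helper names (`child_level_of`, `handle`) are ours; the essential property is that all deferred work is replayed in increasing level order, and a request may be deferred twice, once per cofactor:

```python
def process_level(requests, current, child_level_of, handle):
    """Two-pass sweep over one level's requests (sketch)."""
    bins = {}                           # target level -> deferred items
    for req in requests:                # pass 1
        for side in ("then", "else"):   # positive / negative cofactor
            lvl = child_level_of(req, side)
            if lvl == current + 1:
                handle(req, side, lvl)  # next level: process immediately
            else:
                bins.setdefault(lvl, []).append((req, side))
    for lvl in sorted(bins):            # pass 2: increasing level order
        for req, side in bins[lvl]:
            handle(req, side, lvl)
```

All lookups targeting one level are thus performed consecutively, which is what removes the randomness in page access.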
4.3 Sorted Processing of Nodes During Reduce
The final problem to be resolved is the potential for random page access during the check for forwarded nodes during reduce (Problem 3 in Section 3.2.1). This problem, and our solution to it, are analogous to the case of checking for duplicate requests during apply. The problem arises because of the lack of any relationship between the indices of the nodes that are successively checked for forwarding. As in the previous section, we use more than one pass and maintain an array of lists (one list for each level below the current level) for nodes to be processed in the second pass. \( R \rightarrow T \) (cf. Section 3.2.1) is processed during the first pass only if the level of \( R \rightarrow T \) is immediately below the current level. Otherwise, \( R \) is placed in the list corresponding to the level of \( R \rightarrow T \); similarly for \( R \rightarrow E \). As in the previous section, \( R \) must effectively be placed in two lists if the levels of \( R \rightarrow T \) and \( R \rightarrow E \) are different. With this approach, all checks for forwarding of nodes belonging to a given level are done consecutively. Therefore, there is no randomness in the page-access pattern.
The costs associated with this solution are similar to the costs associated with avoiding random access when checking for duplicate nodes. Again, the array size is very small and no extra memory is needed for the new lists. The reason no new memory is needed is that the ROBDD node structure already has a NEXT field to be used to maintain the lists in the unique table. Since a node is not placed in the unique table until it has been processed during reduce, the NEXT field can be used to maintain the desired lists.
4.4 Adaptive Garbage Collection
Garbage collection should serve two purposes: (1) free up memory for subsequent use, and (2) prevent fragmentation of the memory used by a single ROBDD. We use an adaptive scheme, a combination of two well-known garbage collection schemes, to realize both goals. Our scheme is described below.

A number of Boolean operations must be performed before the ROBDDs for the primary outputs are created. Dead nodes are created by the freeing of these intermediate ROBDDs as well as by the freeing of redundant nodes. The space occupied by dead nodes can be reclaimed for use by new nodes. An effective scheme for reclaiming this memory is the reference-count garbage collection strategy. In this strategy, one maintains a list of nodes (called a FREE_LIST) that can be reused. When a new node is to be allocated, one first checks the FREE_LIST for available nodes. If a dead node is available, it is removed from the FREE_LIST and reused; a new node is allocated only if no dead node is available. Given the levelized organization of nodes in memory in our algorithm, we maintain a separate FREE_LIST for each level. In order to identify dead nodes, we maintain a REFERENCE_COUNT field in the node data structure, which records the number of nodes that refer to this particular node. A node is considered dead if its REFERENCE_COUNT is zero. When a node is declared dead, the REFERENCE_COUNT fields of its two children must be decremented by one; labeling a node dead therefore has a potentially cascading effect down the ROBDD. The task of the reference-count garbage collector is to traverse the ROBDDs top-down, mark nodes whose REFERENCE_COUNT is zero as dead, and decrement the REFERENCE_COUNT fields of their children. As in the case of apply and reduce, this traversal must be done in a levelized manner to avoid random page access.
The same strategy of using levelized queues as in apply and reduce is used for the purpose. The queue at a level consists of the dead nodes at that level and the nodes whose REFERENCE_COUNT is to be decremented. Also, the same strategy of sorted traversal described in Sections 4.2 and 4.3 is used here to ensure complete removal of randomness in page access. The reference-count garbage collector is called at periodic intervals, e.g. every time the number of nodes doubles. The advantage of reference-count is that it is very fast.
A potential problem with the reference-count scheme is that it can lead to a fragmentation of the memory used by a particular ROBDD. Consider the scenario where reference-count frees up 20% of the nodes at a given level. These 20% of the nodes can potentially be scattered over a number of pages allocated for that level. When a new ROBDD is created, it must first use up these dead nodes scattered all over before it can allocate new nodes for that level. Therefore, spatial locality is partially lost for 20% of the nodes at that level in the new ROBDD. In order to avoid such a possibility, we use the reference-count scheme in combination with another scheme called the stop-and-copy garbage collection scheme. The stop-and-copy scheme copies to a new address space only those nodes that are alive. The space that was being used before the copy is then completely discarded. This scheme serves to remove the fragmentation caused by the reference-count scheme. But, since it must copy all the nodes that are alive, it is potentially much slower than reference-count. Therefore, it is called only when reference-count produces a large number of dead nodes. In our implementation, the stop-and-copy scheme is only called for those levels with a large number of dead nodes, and not for all the levels.
A similar adaptive strategy was also used in [5].
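The adaptive choice between the two collectors can be sketched as a per-level policy. The 20% threshold below is an assumption for illustration (echoing the fragmentation example above); the text only says that stop-and-copy runs for levels with a large number of dead nodes:

```python
STOP_AND_COPY_THRESHOLD = 0.20   # assumed cutoff, not specified above

def choose_gc(dead_per_level, total_per_level):
    """Pick a collector per level: compact heavily fragmented levels only."""
    plan = {}
    for level, total in total_per_level.items():
        dead = dead_per_level.get(level, 0)
        if total and dead / total >= STOP_AND_COPY_THRESHOLD:
            plan[level] = "stop-and-copy"    # slower, removes fragmentation
        else:
            plan[level] = "reference-count"  # fast, reuses via FREE_LIST
    return plan
```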
4.5 Implementation Details
4.5.1 ITE-Request Data Structure
The ITE request structure requires five fields: the arguments \( F, G, \) and \( H \), the result node \( R \), and the NEXT field used for maintaining lists. If we used a separate type for the ITE requests, we would need to allocate 20 additional bytes for every BDD node allocated during apply, and these additional bytes would stay alive until the request is processed. We avoid this penalty by using the BDD node structure for the ITE requests as well, overloading the meanings of the various fields: the REFERENCE_COUNT field maps to \( F \), the THEN field to \( G \), the ELSE field to \( H \), NEXT maps to NEXT, and the BDD node itself serves as the result \( R \). The same 16 bytes are therefore used first for the ITE request and subsequently for the BDD node.
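The field overloading can be sketched in Python by viewing the same four machine words first as a request and later as a node; the `Cell` class and its accessors are our own names for the scheme:

```python
class Cell:
    """One 16-byte record: four 4-byte words, interpreted two ways."""
    __slots__ = ("w0", "w1", "w2", "w3")

    # --- request view: (F, G, H, NEXT); the cell itself is R ---
    def set_request(self, F, G, H, nxt=None):
        self.w0, self.w1, self.w2, self.w3 = F, G, H, nxt
    F = property(lambda s: s.w0)
    G = property(lambda s: s.w1)
    H = property(lambda s: s.w2)

    # --- node view: the same words, reused once the request is done ---
    def set_node(self, refcount, then_, else_):
        self.w0, self.w1, self.w2 = refcount, then_, else_
    REFERENCE_COUNT = property(lambda s: s.w0)
    THEN = property(lambda s: s.w1)
    ELSE = property(lambda s: s.w2)
    NEXT = property(lambda s: s.w3)
```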
4.5.2 Block-Index Table
With an address space of 32 bits and a page size of 4 KB, the block-index table would have at most 1 M entries. This number is small enough that a flat statically allocated table can be constructed easily. That is what we do in our implementation. For future machines with 64 bits of address space, a flat table would be infeasible. One solution would be to make the table hierarchical in nature, allowing the memory used by the table to be increased dynamically. The penalty would be increased overhead since more instructions would be required for index computation.
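Such a hierarchical table could be sketched as follows: the upper bits of the block number select a dynamically allocated second-level table, so table memory grows only with the blocks actually in use. The split (20 bits per second-level table) is an assumption for illustration, and the function names are ours:

```python
PAGE_SHIFT = 12
L2_BITS = 20
L2_SIZE = 1 << L2_BITS

top_level = {}   # sparse first level: high bits -> second-level table

def register_block64(block_addr, var_index):
    blk = block_addr >> PAGE_SHIFT
    hi, lo = blk >> L2_BITS, blk & (L2_SIZE - 1)
    top_level.setdefault(hi, [0] * L2_SIZE)[lo] = var_index

def var_index_of64(node_addr):
    # Two lookups instead of one: the extra indirection is the price
    # of a 64-bit address space.
    blk = node_addr >> PAGE_SHIFT
    return top_level[blk >> L2_BITS][blk & (L2_SIZE - 1)]
```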
5 Experimental Results
We wish to illustrate the following points concerning our BF approach in this section: (1) It is orders of magnitude faster than DF implementations when BDD sizes exceed main memory. (2) Its overhead compared to DF implementations for ROBDDs that fit in main memory is manageable. (3) It is superior to the pad node approach in terms of memory requirement and run time. It should be noted that we are not addressing the variable ordering problem here; in fact, good variable orderings exist for all the circuits for which we report results [6]. The comparisons between the BF approach and the other approaches are for a specific ordering. Most of the circuits used for our experiments are from the IWLS '93 benchmark set. The two circuits Indust1 and Indust2 are industrial circuits.
\footnote{Therefore, the requests already have a NEXT field in their structure which can be used to point to the next request in a list}
**Table 2: Results of our BF Approach for Small ROBDDs**
| Circuit | # I/O/G | # Nodes | ET |
|---------|-----------------|---------|-----|
| C1355   | 6116/125714     | 175925  | 13  |
| C1908   | 33/25/880       | 42929   | 6.1 |
| C412    | 6007160         | 146834  | 13  |
| C499    | 41/32/202       | 55493   | 10  |
| C5315   | 1708127.3290    | 54912   | 8   |
| C980    | 64026957        | 369120  | 1   |
| s13207  | 700790807       | 28594   | 14  |
| a1423   | 917958          | 55572   | 3   |
| s15850  | 611964597       | 136330  | 21  |
| s25392  | 1765208401653   | 28681   | 61  |
| s38384  | 14641738919407  | 108494  | 61  |
**I/O/G:** Primary Inputs/Primary Outputs/Gates; **ET:** Elapsed Time (seconds)
What is the overhead of our BF approach compared to current DF implementations? The elapsed times on a SPARC 10/41 for circuits whose ROBDDs fit in main memory are provided in Table 2. The comparison is with a current generation DF ROBDD package [7], and with our fair implementation of the pad node approach of [5]. Many of the ROBDDs are too small for the comparison to be meaningful, so we only provide comparisons for circuits with more than 10,000 nodes. It can be seen that for these small ROBDDs, our approach is consistently faster than the pad node approach, indicating a lower overhead; for some examples, our overhead is markedly smaller. The numbers also demonstrate that our approach does have an overhead compared to the DF approach for these small ROBDDs, but that the overhead is not inordinately large, unlike the pad node approach. Given that, and the fact that the absolute elapsed times for these small ROBDDs are on the order of tens of seconds, we feel that the overhead of our approach for small ROBDDs is a reasonable penalty to pay for optimizing the times for large ROBDDs.
Elapsed times for large ROBDDs are provided in Table 1. The numbers clearly demonstrate the ability of our BF approach to build and manipulate very large ROBDDs in short run times. For example, the 3.85 million node ROBDDs for s9234 were built on a SPARC 2 with only 32 MB of main memory in about 25 minutes. Building the same ROBDDs using the current generation ROBDD package from CMU [3] requires about 48 hours, which corresponds to a speedup by a factor of 120. Similarly, we can build ROBDDs with about 104 million nodes for the first 7509 gates of s38417 in 26 hours on a SPARC 10/41 with 64 MB. The CMU ROBDD package can only build ROBDDs with 7.8 million nodes for the first 4807 gates in 43 hours on the same machine; the BF approach is faster by a factor of about 40 for the first 4807 gates. Similar speedups are obtained consistently when the ROBDD sizes are much greater than the available main memory. For our experiments, the orderings were generated using the ordering algorithm of [4] implemented in SIS.
Finally, as in the case of small ROBDDs, our approach consistently outperforms the pad node approach for large ROBDDs as well. For example, the pad node approach quickly exhausted the 2 GB of available swap space for Indust1 and Indust2 as a result of the pad node overhead: 51 times more nodes were required to pad the ROBDDs for Indust1 before the swap space was exhausted. Such overheads are very common when using the pad node approach on random-logic circuits. In addition, whenever the pad node approach manages to build the ROBDDs for a circuit, it is much slower than our approach. For example, our approach requires 21 minutes on a 64 MB SPARC 10/41 while the pad node approach requires more than 3.5 hours.
**6 Acknowledgments**
Various discussions with Rick Rudell were helpful.
**References**
Ornaments for Proof Reuse in Coq
Talia Ringer
University of Washington, USA
tringer@cs.washington.edu
Nathaniel Yazdani
University of Washington, USA
nyazdani@cs.washington.edu
John Leo
Halfaya Research, USA
leo@halfaya.org
Dan Grossman
University of Washington, USA
djg@cs.washington.edu
Abstract
Ornaments express relations between inductive types with the same inductive structure. We implement fully automatic proof reuse for a particular class of ornaments in a Coq plugin, and show how such a tool can give programmers the rewards of using indexed inductive types while automating away many of the costs. The plugin works directly on Coq code; it is the first ornamentation tool for a non-embedded dependently typed language. It is also the first tool to automatically identify ornaments: To lift a function or proof, the user must provide only the source type, the destination type, and the source function or proof. In taking advantage of the mathematical properties of ornaments, our approach produces faster functions and smaller terms than a more general approach to proof reuse in Coq.
2012 ACM Subject Classification Software and its engineering → Formal software verification
Keywords and phrases ornaments, proof reuse, proof automation
Digital Object Identifier 10.4230/LIPIcs.ITP.2019.26
Supplement Material The Coq plugin, examples, and case study code for this paper can be found at http://github.com/uwplse/ornamental-search/tree/itp+equiv.
Acknowledgements We thank Jasper Hugunin, James Wilcox, Jason Gross, Pavel Panchekha, and Marisa Kirisame for ideas that helped inform tool design. We thank Thomas Williams, Josh Ko, Matthieu Sozeau, Cyril Cohen, Nicolas Tabareau, and Enrico Tassi for help navigating related work. We thank Emilio J. Gallego Arias, Gaëtan Gilbert, Pierre-Marie Pédrot, and Yves Bertot for help understanding Coq plugin APIs. We thank Shachar Itzhaky and Tej Chajed for ideas for future directions. We thank the UW and UCSD programming languages labs for feedback. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1256082. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
1 Introduction
Indexed inductive types make it possible to internalize data into the type level, eliminating the need for certain functions and proofs. Consider, for example, a theorem from the Coq standard library [17] which states that mapping a function over lists preserves length:
\[
\text{map\_length} \ T_1 \ T_2 \ (f : T_1 \to T_2) : \forall (l : \text{list} \ T_1), \ \text{length} \ (\text{List.map} \ f \ l) = \text{length} \ l.
\]
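As a sanity check, this theorem follows by a short induction on the list. The following is our own proof sketch, not the standard library's proof:

```coq
(* A hedged proof sketch of map_length; the Coq standard library's
   proof may differ in detail. *)
Require Import List.

Lemma map_length_example {T1 T2} (f : T1 -> T2) :
  forall l : list T1, length (map f l) = length l.
Proof.
  induction l as [| x l' IH]; simpl.
  - reflexivity.          (* nil case: both sides are 0 *)
  - rewrite IH. reflexivity.  (* cons case: both sides are S (length l') *)
Qed.
```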
One way to eliminate the need for this theorem is to internalize the length of a list into its type, creating a dependently typed vector (Figure 1). The map function for vectors in Coq’s standard library, for example, carries a proof that it preserves length:
\[
\text{Vector.map}\ {T_1}\ {T_2}\ (f : T_1 \to T_2) : \forall (n : \text{nat})\ (v : \text{vector}\ T_1\ n), \text{vector}\ T_2\ n.
\]
so that a theorem like map_length is no longer necessary.
Unfortunately, for all of the benefits they bring, indexed inductive types are notoriously difficult to use. Dependently typed vectors, for example, impose proof obligations about their lengths on the user; these can quickly spiral out of control. In recent coq-club threads asking for advice on how to use dependently typed vectors, experts called them “not suitable for extended use” [7] and noted that “almost no one should be using [them] for anything” [8].
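To make the difficulty concrete, consider a standard append on vectors. The definition below is our own illustration, not code from the paper; the point is that the index arithmetic lives in the type:

```coq
(* Our illustrative definition: append on vectors. It type-checks only
   because S n1 + n2 reduces to S (n1 + n2) definitionally. *)
Require Coq.Vectors.Vector.

Fixpoint append {T n1 n2} (v1 : Vector.t T n1) (v2 : Vector.t T n2)
  : Vector.t T (n1 + n2) :=
  match v1 in Vector.t _ m return Vector.t T (m + n2) with
  | Vector.nil _ => v2
  | Vector.cons _ h _ t => Vector.cons T h _ (append t v2)
  end.
```

Even simple statements then generate obligations: the equation `append v (Vector.nil T) = v` is not even well-typed as written, since `n + 0` and `n` are not definitionally equal, so one must reason up to transport or heterogeneous equality.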
We show how proof reuse – reusing existing proofs to derive new proofs – can tackle many of the challenges posed by indexed inductive types, allowing the user to move between unindexed and indexed versions of a type (for example, lists and vectors) and reap the benefits of indexed types without many of the costs. We focus in particular on the benefits of this approach in deriving functions and proofs for fully-determined indexed types, when the index is a fold over the unindexed version (such as the length of a list). In our approach, the user writes functions and proofs over the unindexed version, and a tool then automatically lifts those functions and proofs to the indexed version. The user can then switch back to working with the unindexed version by running the tool in the opposite direction. In that way, the user can use lists when lists are convenient, and vectors when vectors are convenient.
Our approach uses ornaments [23], which express relations between types that preserve inductive structure, and which enable lifting of functions and proofs along those relations. Recent work introduced ornaments to a subset of ML and was heavily focused on automatically lifting functions [33]; until now, such an approach was not available in a dependently typed language. Existing implementations of ornaments in dependently typed languages work only in embedded languages, and have little to no automation [20, 23, 11].
Our main contribution is a Coq plugin for automatic function and proof reuse using ornaments. Our plugin DEVOID (Dependent Equivalences Via Ornamenting Inductive Definitions) works directly on Coq code, rather than on an embedded language. DEVOID automates lifting functions and proofs along algebraic ornaments [23], a particular class of ornaments that represent fully-determined indexed types like lists and vectors. DEVOID implements an algorithm to search for ornaments between these types – to the best of our knowledge, the first search algorithm for ornaments – and an algorithm to lift functions and proofs along the ornaments it discovers.
We motivate (Section 2), specify (Section 3), and formalize (Section 4) the search and lifting algorithms that DEVOID implements (Section 5). A comparison to a more general proof reuse approach (Section 6) demonstrates the benefits of using ornaments: DEVOID imposes less of a proof burden on the user, and produces smaller terms and faster functions.
2 Motivating Example: Porting a Library
DEVOID is a plugin for Coq 8.8; it can be found in the repository linked to as Supplement Material under the abstract of this paper. To see how it works, consider an example using the types from Figure 1, the code for which is in Example.v. In this example, we lift two list zip functions and a proof of a theorem relating them from the Haskell CoreSpec library [29]:
```plaintext
zip {T₁ T₂} : list T₁ → list T₂ → list (T₁ * T₂).
zip_with {T₁ T₂ T₃} (f : T₁ → T₂ → T₃) : list T₁ → list T₂ → list T₃.
zip_with_is_zip {T₁ T₂} : ∀ (l₁ : list T₁) (l₂ : list T₂), zip_with pair l₁ l₂ = zip l₁ l₂.
```
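For concreteness, the following are possible definitions with these types; they are our sketch, and the CoreSpec versions may differ in detail. With `zip` defined via `zip_with`, the third theorem even holds definitionally here:

```coq
(* Hedged sketch of the zip functions; not the CoreSpec definitions. *)
Require Import List.
Import ListNotations.

Fixpoint zip_with {T1 T2 T3} (f : T1 -> T2 -> T3)
    (l1 : list T1) (l2 : list T2) : list T3 :=
  match l1, l2 with
  | x :: l1', y :: l2' => f x y :: zip_with f l1' l2'
  | _, _ => []
  end.

Definition zip {T1 T2} : list T1 -> list T2 -> list (T1 * T2) :=
  zip_with pair.

(* With this definition of zip, the relating theorem is definitional. *)
Lemma zip_with_is_zip {T1 T2} :
  forall (l1 : list T1) (l2 : list T2), zip_with pair l1 l2 = zip l1 l2.
Proof. reflexivity. Qed.
```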
DEVOID runs a preprocessing step before lifting, which we describe in Section 5; we assume this step has already run. We use the cyan background color to denote tool-produced terms and the names that refer to them. We run DEVOID to lift functions and proofs from lists to vectors, but it can also lift in the opposite direction.
**Step 1: Search.** We first use DEVOID’s Find ornament command to search for the relation between lists and vectors:
```
Find ornament list vector.
```
This produces functions which together form an equivalence (denoted ≃):
```
list T ≃ Σ(n:nat).vector T n
```
**Step 2: Lift.** We then lift our functions and proofs along that equivalence using DEVOID’s Lift command. For example, to lift zip, we run the command:
```
Lift list vector in zip as zipV_p.
```
This produces a function with this type:
```
zipV_p {T₁ T₂} : Σ n.vector T₁ n → Σ n.vector T₂ n → Σ n.vector (T₁ * T₂) n.
```
that behaves like zip, but whose body no longer refers to lists. We lift our proof similarly:
```
Lift list vector in zip_with_is_zip as zip_with_is_zipV_p.
```
This produces a proof of the analogous result (denoting projections by π₁ and π₂):
```
zip_with_is_zipV_p {T₁ T₂} : ∀ (v₁ : Σ n.vector T₁ n) (v₂ : Σ n.vector T₂ n),
  zip_withV_p pair (∃ (π₁ v₁) (π₂ v₁)) (∃ (π₁ v₂) (π₂ v₂)) =
  zipV_p (∃ (π₁ v₁) (π₂ v₁)) (∃ (π₁ v₂) (π₂ v₂)).
```
that no longer refers to lists, zip, or zip_with in any way.
**Step 3: Unpack.** The lifted terms operate over vectors whose lengths are packed inside of a sigma type. While this lets Lift provide strong theoretical guarantees, it can make it difficult to interface with the lifted code. We can recover unpacked terms using DEVOID’s Unpack command. For example, to unpack zipV_p, we run the command:
```
Unpack zipV_p as zipV.
```
This produces functions and proofs that operate directly over vectors, like zipV:
```
zipV {T₁ T₂} {n₁} (v₁ : vector T₁ n₁) {n₂} (v₂ : vector T₂ n₂) :
  vector (T₁ * T₂) (π₁ (zipV_p (∃ n₁ v₁) (∃ n₂ v₂))).
```
and \texttt{zip\_with\_is\_zipV}:
\[
\text{zip\_with\_is\_zipV}\ \{T_1\ T_2\}\ \{n_1\}\ \{v_1\}\ \{n_2\}\ \{v_2\} : \text{eq\_dep}\ \_\ \_\ \_\ (\text{zip\_withV}\ \text{pair}\ v_1\ v_2)\ \_\ (\text{zipV}\ v_1\ v_2).
\]
**Step 4: Interface.** For any two inputs of the same length, \texttt{zipV} and \texttt{zip\_withV} contain proofs that the output has the same length as the inputs. However, the types obscure this information. Example.v explains how to recover more user-friendly types, like that of \texttt{zipV\_uf}:
\[
\text{zipV\_uf} \{T_1 \ T_2\} \{n\} : \text{vector } T_1 \ n \rightarrow \text{vector } T_2 \ n \rightarrow \text{vector } (T_1 \ast T_2) \ n.
\]
and that of \texttt{zip\_withV\_uf}:
\[
\text{zip\_withV\_uf} \{T_1 \ T_2 \ T_3\} \{f\} \{n\} : \text{vector } T_1 \ n \rightarrow \text{vector } T_2 \ n \rightarrow \text{vector } T_3 \ n.
\]
which both restrict input lengths. We can then use our lifted functions and proofs in client code. For example, we can write a different version of Coq’s \texttt{BVand} function for bitvectors:
\[
\text{BVand\_uf}\ \{n\}\ (v_1\ v_2 : \text{vector bool}\ n) : \text{vector bool}\ n := \text{zip\_withV\_uf}\ \text{andb}\ v_1\ v_2.
\]
By working over lists, we are able to reason about only the interesting pieces, thinking about indices only when relevant; in contrast, when writing proofs over vectors, even simple theorems can generate tricky proof obligations. With DEVOID, the programmer can use the lifted functions and proofs to interface with code that uses vectors, then switch back to lists when vectors are unmanageable. In essence, ornaments form the glue between these types.
3 Specification
This section specifies the two commands that DEVOID implements:
1. \textbf{Find ornament} searches for ornaments (specified in Section 3.1, described in Section 4.1).
2. \textbf{Lift} lifts along those ornaments (specified in Section 3.2, described in Section 4.2).
**Algebraic Ornaments.** DEVOID searches for and lifts along \textit{algebraic ornaments} in particular. An algebraic ornament relates an inductive type \textit{A} to an indexed version of that type \textit{B} with a new index of type \textit{I}, where the new index is fully determined by \textit{a unique fold over A}. For example, \texttt{vector} is exactly \texttt{list} with a new index of type \texttt{nat}, where the new index is fully determined by the \texttt{length} function. Consequently, there are two functions:
\[
\texttt{ltv} : \text{list } T \rightarrow \Sigma(n : \text{nat}).\text{vector } T \ n.
\]
\[
\texttt{vtl} : \Sigma(n : \text{nat}).\text{vector } T \ n \rightarrow \text{list } T.
\]
that are mutual inverses:
\[
\forall \ (l : \text{list } T), \quad \texttt{vtl (ltv l)} = l.
\]
\[
\forall \ (v : \Sigma(n : \text{nat}).\text{vector } T \ n), \quad \texttt{ltv (vtl v)} = v.
\]
and therefore form the type equivalence from Section 2. Moreover, since the new index is fully determined by \texttt{length}, we can relate \texttt{length} to \texttt{ltv}:
\[
\forall \ (l : \text{list } T), \quad \texttt{length l} = \pi_1 \ (\texttt{ltv l}).
\]
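These components can be sketched directly as follows. This is our own illustration: DEVOID's generated terms are built from eliminators and will differ syntactically.

```coq
(* Hedged sketch of promote (ltv), forget (vtl), and coherence for
   lists and vectors; not DEVOID's generated code. *)
Require Import List.
Require Coq.Vectors.Vector.

Fixpoint ltv {T} (l : list T) : { n : nat & Vector.t T n } :=
  match l with
  | nil => existT _ 0 (Vector.nil T)
  | cons h t =>
      let (n, v) := ltv t in existT _ (S n) (Vector.cons T h n v)
  end.

Fixpoint vtl' {T n} (v : Vector.t T n) : list T :=
  match v with
  | Vector.nil _ => nil
  | Vector.cons _ h _ t => cons h (vtl' t)
  end.

Definition vtl {T} (p : { n : nat & Vector.t T n }) : list T :=
  vtl' (projT2 p).

(* The coherence property stated in the text. *)
Lemma coherence_ltv {T} (l : list T) : length l = projT1 (ltv l).
Proof.
  induction l as [| h t IH]; simpl.
  - reflexivity.
  - rewrite IH. destruct (ltv t). reflexivity.
Qed.
```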
In general, we can view an algebraic ornament as a type equivalence:
\[ A \vec{i} \simeq \Sigma(n: I_B \vec{i}).B \text{ (index } n \vec{i}) \]
where \( \vec{i} \) are the indices of \( A \), \( I_B \) is a function over those indices, and the index operation inserts the new index \( n \) at the right offset. Such a type equivalence consists of two functions [32]:
\[
\text{promote} : A \vec{i} \rightarrow \Sigma(n: I_B \vec{i}).B \text{ (index } n \vec{i})
\]
\[
\text{forget} : \Sigma(n: I_B \vec{i}).B \text{ (index } n \vec{i}) \rightarrow A \vec{i}.
\]
that are mutual inverses:\footnote{The adjunction condition follows from section and retraction.}
\[
\text{section} : \forall (a : A \vec{i}), \text{ forget (promote a) } = a.
\]
\[
\text{retraction} : \forall (b : \Sigma(n: I_B \vec{i}).B \text{ (index } n \vec{i})), \text{ promote (forget } b) = b.
\]
An algebraic ornament is additionally equipped with an indexer, which is a unique fold:
\[
\text{indexer} : A \vec{i} \rightarrow I_B \vec{i}.
\]
which projects the promoted index:
\[
\text{coherence} : \forall (a : A \vec{i}), \text{ indexer } a = \pi_1 (\text{promote } a).
\]
Following existing work [20], we call this equivalence the \textit{ornamental promotion isomorphism}; when it holds and the indexer exists, we say that \( B \) is an algebraic ornament of \( A \).
\textbf{Find ornament} searches for algebraic ornaments between types and is, to the best of our knowledge, the first search algorithm for ornaments. \textsc{Lift} then lifts functions and proofs along those ornaments, removing all references to the old type. Both commands make some additional assumptions for simplicity; detailed explanations for these are in Assumptions.v.
\section{Find ornament}
In their original form, ornaments are a programming mechanism: Given a type \( A \), an ornament determines some new type \( B \). We invert this process for algebraic ornaments: Given types \( A \) and \( B \), DEVOID searches for an ornament between them. This is possible for algebraic ornaments precisely because the indexer is extensionally unique. For example, all possible indexers for \texttt{list} and \texttt{vector} must compute the length of a list; if we were to try doubling the length instead, we would not be able to satisfy the equivalence.
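The uniqueness intuition can be made concrete in a small sketch of our own: an indexer for this ornament is a fold over the list, and it is extensionally \texttt{length}:

```coq
(* Hedged sketch: the indexer as a fold, extensionally equal to length. *)
Require Import List.

Definition indexer {T} (l : list T) : nat :=
  fold_right (fun _ n => S n) 0 l.

Lemma indexer_is_length {T} (l : list T) : indexer l = length l.
Proof.
  unfold indexer; induction l as [| h t IH]; simpl.
  - reflexivity.
  - rewrite IH. reflexivity.
Qed.
```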
\textbf{Find ornament} takes two inductive types and searches for the components of the ornamental promotion isomorphism between them:
\begin{itemize}
\item \textbf{Inputs}: Inductive types \( A \) and \( B \), assuming:
\begin{itemize}
\item \( B \) is an algebraic ornament of \( A \),
\item \( B \) has the same number of constructors in the same order as \( A \),
\item \( A \) and \( B \) do not contain recursive references to themselves under products, and
\item for every recursive reference to \( A \) in \( A \), there is exactly one new hypothesis in \( B \), which is exactly the new index of the corresponding recursive reference in \( B \).
\end{itemize}
\item \textbf{Outputs}: Functions \texttt{promote}, \texttt{forget}, and \texttt{indexer}, guaranteeing:
\begin{itemize}
\item the outputs form the ornamental promotion isomorphism between the inputs.
\end{itemize}
\end{itemize}
\textbf{Find ornament} includes an option to generate a proof that the outputs form the ornamental promotion isomorphism; by default, this option is false, since \textsc{Lift} does not need this proof.
3.2 Lift
Lift lifts a term along the ornamental promotion isomorphism between $A$ and $B$. That is, it lifts types to corresponding types and terms of those types to corresponding terms:
\[
\begin{align*}
\text{Lift list vector in list as vector\_p.} & \quad (\ast\ \text{vector\_p}\ T := \Sigma (n : \text{nat}).\text{vector}\ T\ n\ \ast) \\
\text{Lift list vector in (cons 5 nil) as v\_p.} & \quad (\ast\ \text{v\_p} := \exists\ 1\ (\text{consV}\ 0\ 5\ \text{nilV})\ \ast)
\end{align*}
\]
Furthermore, it recursively preserves this equivalence, lifting non-dependent functions like \text{zip} so that they map equivalent inputs to equivalent outputs:
\[
\forall \{T_1\ T_2\}\ l_1\ l_2,\ \text{promote}\ (\text{zip}\ l_1\ l_2) = \text{zipV\_p}\ (\text{promote}\ l_1)\ (\text{promote}\ l_2).
\]
This intuition breaks down with dependent types. With equivalence alone, we can’t state the relationship between \text{zip_with_is_zip} and \text{zip_with_is_zipV_p}, since the unlifted conclusion:
\[
\text{zip\_with}\ \text{pair}\ l_1\ l_2 = \text{zip}\ l_1\ l_2.
\]
does not have the same type as the conclusion of the lifted version applied to promoted arguments; any relation between these terms must be heterogeneous.
In particular, Lift preserves the univalent parametric relation \cite{30}, a heterogeneous parametric relation that strengthens an existing parametric relation for dependent types \cite{2} to make it possible to state preservation of an equivalence: Two terms $t$ and $t'$ are related by the univalent parametric relation $[[\Gamma]]_u \vdash [t]_u : [[T]]_u \; t \; t'$ at type $T$ in environment $\Gamma$ if they are equivalent up to transport. The details of this relation can be found in the cited work.
Lift preserves this relation using the components that Find ornament discovers, and additionally guarantees that the lifted term does not refer to the old type in any way:
- **Inputs:** The inputs to and outputs from Find ornament, along with a term $t$, assuming:
- the assumptions and guarantees from Find ornament hold,
- $I_B$ is not $A$,
- $t$ is well-typed and fully $\eta$-expanded,
- $t$ does not apply promote or forget, and
- $t$ does not reference $B$.
- **Outputs:** A term $t’$, guaranteeing:
- if $t$ is $A \vec{r}$, then $t’$ is $\Sigma(n : I_B \vec{r}) . B \; (\text{index n } \vec{r})$,
- $t’$ does not reference $A$, and
- if in the current environment $\Gamma \vdash t : T$, then $[[\Gamma]]_u \vdash [t]_u : [[T]]_u \; t \; t’$.
Lift does not require a proof that the input components form the ornamental promotion isomorphism, but they must for the guarantees to hold. It can operate in either direction, promoting from $A$ to packed $B$ or forgetting in the opposite direction; the specification for the forgetful direction is similar, with extra restrictions on how $B$ is used within $t$.
4 Algorithms
This section describes the algorithms that implement the specifications from Section 3.
**Presentation.** We present both algorithms relationally, using a set of judgments; to turn these relations into algorithms, prioritize the rules by running the derivations in order, falling back to the original term when no rules match. The default rule for a list of terms is to run the derivation on each element of the list individually.
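This fallback strategy can be sketched as follows. The sketch is our own illustration, not plugin code:

```coq
(* Our illustration of rule prioritization: try each derivation in order,
   falling back to the original term when no rule matches. *)
Definition rule (A : Type) := A -> option A.

Fixpoint apply_first {A} (rules : list (rule A)) (t : A) : A :=
  match rules with
  | nil => t  (* no rule matched: keep the original term *)
  | cons r rs =>
      match r t with
      | Some t' => t'
      | None => apply_first rs t
      end
  end.
```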
**Figure 2** Syntax of the core calculus (CIC with primitive eliminators), with metavariables for judgments and operations:

\[
\begin{align*}
\langle i \rangle \in \mathbb{N}, \quad \langle v \rangle & \in \text{Vars}, \quad \langle s \rangle \in \{\text{Prop}, \text{Set}, \text{Type}\langle i \rangle\} \\
\langle t \rangle ::= {} & \langle v \rangle \mid \langle s \rangle \mid \Pi (\langle v \rangle : \langle t \rangle).\langle t \rangle \mid \lambda (\langle v \rangle : \langle t \rangle).\langle t \rangle \mid \langle t \rangle\ \langle t \rangle \mid {} \\
& \text{Ind}(\langle v \rangle : \langle t \rangle)\{\langle t \rangle, \ldots, \langle t \rangle\} \mid \text{Constr}(\langle i \rangle, \langle t \rangle) \mid \text{Elim}(\langle t \rangle, \langle t \rangle)\{\langle t \rangle, \ldots, \langle t \rangle\}
\end{align*}
\]

**Figure 3** Common definitions for both algorithms:

\[
\begin{align*}
A & := \text{Ind}(T_A : \Pi(\vec{i}_A : \vec{X}_A).s_A)\{C_{A_1}, \ldots, C_{A_n}\} \\
B & := \text{Ind}(T_B : \Pi(\vec{i}_B : \vec{X}_B).s_B)\{C_{B_1}, \ldots, C_{B_n}\} \\
\forall 1 \leq i \leq n, \quad E_{A_i} (p_A : P_A) & := \xi(A, p_A, \text{Constr}(i, A), C_{A_i}) \\
E_{B_i} (p_B : P_B) & := \xi(B, p_B, \text{Constr}(i, B), C_{B_i}) \\
P_A & := \Pi(\vec{i}_A : \vec{X}_A)(a : A\ \vec{i}_A).s_A \\
P_B & := \Pi(\vec{i}_B : \vec{X}_B)(b : B\ \vec{i}_B).s_B \\
\text{index} & := \text{insert (off } A\ B) \\
\text{deindex} & := \text{remove (off } A\ B)
\end{align*}
\]
Notes on Syntax. The language the algorithms operate over is CIC with primitive eliminators; this is a simplified version of the type theory underlying Coq. Figure 2 contains the syntax (which includes variables, sorts, product types, functions, inductive types, constructors, and eliminators), as well as the syntax for some judgments and operations, the rules for which are standard and thus omitted. For simplicity of presentation, we assume variables are names; we assume that all names are fresh. As in Coq, we assume the existence of an inductive type \( \Sigma \) for sigma types with projections \( \pi_1 \) and \( \pi_2 \); for simplicity, we assume projections are primitive. Throughout, we use \( \vec{t} \) and \( \{t_1, \ldots, t_n\} \) to denote lists of terms, and we use \( \vec{t}[j] \) to denote accessing the element of the list \( \vec{t} \) at offset \( j \).
Common Definitions. The algorithms assume list insertion and removal functions \text{insert} and \text{remove}, plus two functions \text{DEVOID} implements: \text{off} computes the offset of the new index of type \( I_B \) in \( B \)'s indices, and \text{new} determines whether a hypothesis in a case of the eliminator type of \( B \) is new. Figure 3 contains other common definitions, the names for which are reserved: The \text{index} and \text{deindex} functions insert an index into and remove an index from a list at the index computed by \text{off}. Input type \( A \) expands to an inductive type with indices of types \( X_A \), sort \( s_A \), and constructors \( \{C_{A_1}, \ldots, C_{A_n}\} \). \( P_A \) denotes the type of the motive of the eliminator of \( A \), and each \( E_{A_i} \) denotes the type of the eliminator of the \( i \)th constructor of \( A \). Analogous names are also reserved for input type \( B \).
4.1 Find ornament
The \text{Find ornament} algorithm implements the specification from Section 3.1. It builds on three intermediate steps: one to generate each of \text{indexer}, \text{promote}, and \text{forget}. Figure 4 shows the algorithm for generating \text{indexer}. The algorithms for generating \text{promote} and \text{forget} are similar; Figure 5 shows only the derivations for generating \text{promote} that are different from those for generating \text{indexer}, and the derivations for generating \text{forget} are omitted.
4.1.1 Searching for the Indexer
Search generates the \text{indexer} by traversing the types of the eliminators for \( A \) and \( B \) in parallel using the algorithm from Figure 4, which consists of three judgments: one to generate the motive, one to generate each case, and one to compose the motive and cases.
**Figure 4** The derivations for generating the indexer: **Index-Motive** generates the motive from \(A\) and \(B\); **Index-Hypothesis**, **Index-Prod**, **Index-Ind**, and **Index-Conclusion** generate each case.
**Generating the Motive.** The motive judgment consists of only the derivation **Index-Motive**, which computes the indexer motive from the types \(A\) and \(B\) (expanded in Figure 3). It does this by constructing a function \(\lambda (\vec{i}_A : \vec{X}_A)(a : A\ \vec{i}_A).\ I_B\ \vec{i}_B\) with \(A\) and its indices as premises, and the type \(I_B\) in the conclusion with the appropriate indices. Consider \(\text{list}\) and \(\text{vector}\):
\[
\text{list}\ T := \text{Ind} (T_Y : \text{Type}) \{ \ldots \}
\]
\[
\text{vector}\ T := \text{Ind} (T_Y : \Pi(n : \text{nat}).\text{Type}) \{ \ldots \}
\]
For these types, **Index-Motive** computes the motive:
\[
\lambda (l : \text{list}\ T).\ \text{nat}
\]
**Generating Each Case.** A second judgment generates each case of the indexer by traversing in parallel the corresponding cases of the eliminator types for \(A\) and \(B\). It consists of four derivations: **Index-Conclusion** handles base cases and conclusions of inductive cases, while **Index-Hypothesis**, **Index-Ind**, and **Index-Prod** recurse into products.
**Index-Hypothesis** handles each new hypothesis that corresponds to a new index in an inductive hypothesis of an inductive case of the eliminator type for \(B\). It adds the new index to the environment, then recurses into the body of only the type for which the index already exists. For example, in the inductive case of \(\text{list}\) and \(\text{vector}\), **new** determines that \(n\) is the new hypothesis. **Index-Hypothesis** then recurses into the body of only the \(\text{vector}\) case:
\[
\Pi (t_l : T)(l : \text{list}\ T)(\text{IH}_l : p_A\ l).\ \ldots \qquad \Pi (t_v : T)(v : \text{vector}\ T\ n)(\text{IH}_v : p_B\ n\ v).\ \ldots
\]
**Index-Prod** is next. It recurses into product types when the hypothesis is neither a new index nor an inductive hypothesis. Here, it runs twice, recursing into the body and substituting names until it hits the inductive hypothesis for both types:
\[
\Pi (\text{IH}_l : p_A\ l).\ p_A\ (\text{cons}\ t_l\ l) \qquad \Pi (\text{IH}_v : p_B\ n\ v).\ p_B\ (S\ n)\ (\text{consV}\ n\ t_v\ v)
\]
4.1.2 Searching for Promote and Forget
Figure 5 shows the interesting derivations for the judgment \((T_A, T_B) \Downarrow_{p} t\) that searches for promote: \textsc{Promote-Motive} identifies the motive as \(B\) with a new index (which it computes using \texttt{indexer}, denoted by metavariable \(\pi\)). When \textsc{Promote-IH} recurses, it substitutes the inductive hypothesis for the term rather than for its index, and it substitutes the new index (which it also computes using \texttt{indexer}) inside of that term. \textsc{Promote-Conclusion} returns the entire term, rather than its index. Finally, \textsc{Promote-Ind} not only recurses into each case, but also packs the result.
The omitted derivations to search for \texttt{forget} are similar, except that the domain and range are switched. Consequently, \texttt{indexer} is never needed; \textsc{Forget-Motive} removes the index rather than inserting it, and \textsc{Forget-IH} no longer substitutes the index. Additionally, \textsc{Forget-Hypothesis} adds the hypothesis for the new index rather than skipping it, and \textsc{Forget-Ind} eliminates over the projection rather than packing the result.
### 4.1.3 Core Search Algorithm
The core search algorithm produces \texttt{indexer}, \texttt{promote}, and \texttt{forget}, then composes them into a tuple. This tuple is how \texttt{DEVOID} represents ornaments internally. \texttt{DEVOID} includes an option to generate a proof that these components form the ornamental promotion isomorphism; by default, this is disabled, since \texttt{Lift} does not need this proof. The implementation of this option gives intuition for correctness of the search algorithm, and is described in Section 5.3.
### 4.2 Lift
The \texttt{Lift} algorithm implements the specification from Section 3.2. We show only one direction of the algorithm, promoting from \texttt{A} to packed \texttt{B}; the forgetful direction is similar. The core algorithm (Figure 9) builds on a set of common definitions (Figure 6) and two intermediate judgments: one to lift eliminators (Figure 7) and one to lift constructors (Figure 8).
#### Common Definitions
The common definitions (Figure 6) define some useful syntax: $\uparrow$ applies \texttt{promote}, $\downarrow$ applies \texttt{forget}, and $\pi_{I_B}$ applies \texttt{indexer}. $\exists_{I_B}$ packs a term of type \texttt{B} into an existential with the index at the appropriate offset. $\uparrow_B$ and $\uparrow_{I_B}$ promote and then project; $\downarrow_A$ packs and forgets, and $\downarrow_{I_B}$ packs, forgets, and then applies \texttt{indexer} to project the index.
#### 4.2.1 Lifting Eliminators
The $\Gamma \vdash t : t'$ judgment (Figure 7) defines rules for lifting the motive and case of an eliminator, changing the domain of induction from \texttt{A} to \texttt{B}. The intuition is that any term of type \texttt{A} is the result of forgetting some term of type packed \texttt{B}. Then, since \texttt{A} and \texttt{B} have the same inductive structure, we can lift the eliminator of \texttt{A} to the eliminator of \texttt{B}, and move that forgetfulness inside of each case. For example, the following terms are propositionally equal:
\[
\begin{align*}
& \text{Elim}(\downarrow_A b,\ p_A)\ \{f_{\text{nil}},\ \lambda(t : T)(l : \text{list}\ T)(\text{IH}_l : p_A\ l).\ f_{\text{cons}}\ t\ l\ \text{IH}_l\} \\
& \text{Elim}(b,\ \lambda(n : \text{nat})(v : \text{vector}\ T\ n).\ p_A\ (\downarrow_A v))\ \{f_{\text{nil}},\ \lambda(n : \text{nat})(t : T)(v : \text{vector}\ T\ n)(\text{IH}_v : p_A\ (\downarrow_A v)).\ f_{\text{cons}}\ t\ (\downarrow_A v)\ \text{IH}_v\}
\end{align*}
\]
The induction rules implement this transformation. \textsc{Case} lifts a case of the eliminator by first recursively lifting the motive, then using the lifted motive to compute the type of the new case, and then using that type to compute the body of the new case. In the example
above, when lifting the inductive case, it first recursively lifts the motive \( p_A \) using \textsc{Motive}, which drops the index, packs and forgets the argument of type \( B \), and then \( \beta \)-reduces the result, eliminating references to \( B \). This produces the new motive:
\[
\lambda (n : \text{nat})(v : \text{vector } T n).p_A (\downarrow A v)
\]
which \textsc{Case} then uses to compute the type of the inductive case of the eliminator for \( B \):
\[
\Pi (n : \text{nat})(t : T)(v : \text{vector}\ T\ n)(\text{IH}_v : p_A\ (\downarrow_A v)).\ p_A\ (\downarrow_A (\text{consV}\ n\ t\ v))
\]
The \( \Gamma \vdash (t, T) \uparrow_{E_a} t' \) judgment then uses that type to compute the lifted function body. It computes this in a similar way to \textsc{Motive}, except that there are as many indices to drop and arguments to pack and forget as there are inductive hypotheses, and these do not occur in predictable places, so more rules are involved. This computes the new function:
\[
\lambda (n : \text{nat})(t : T)(v : \text{vector}\ T\ n)(\text{IH}_v : p_A\ (\downarrow_A v)).\ f_{\text{cons}}\ t\ (\downarrow_A v)\ \text{IH}_v
\]
### 4.2.2 Lifting Constructors
The \( \Gamma \vdash t \uparrow_C t' \) judgment (Figure 8) lifts applications of constructors of \( A \) to applications of constructors of \( B \). This judgment computes one step of the promotion, leaving the recursive lifting of the arguments to the final algorithm. Using the same types, in the base case:
\[
\uparrow \text{nil} \equiv \exists\; 0\; \text{nil}_V
\]
and in the inductive case:
\[
\uparrow (\text{cons } t\; l) \equiv \exists\; (S\, (\pi_l (\uparrow l)))\; (\text{cons}_V\; (\pi_l (\uparrow l))\; t\; (\pi_r (\uparrow l)))
\]
This derivation consists of only one rule: \textsc{Normalize}, which normalizes the promotion of the constructor. This is guaranteed to succeed because the application of the constructor is fully $\eta$-expanded. The core algorithm later internalizes the promotion functions in the result.
### 4.2.3 Core Lifting Algorithm
The core algorithm (Figure 9) builds on these intermediate judgments. The interesting derivations for correctness are the first six: \textsc{Lift-Elim} and \textsc{Lift-Constr} use the judgments for lifting eliminators and constructors of \textit{A}. \textsc{Internalize} internalizes the explicit promote functions from the lifted constructors to recursive applications of the algorithm. \textsc{Retraction} and \textsc{Coherence} use the respective properties of the ornamental promotion isomorphism metatheoretically: the first to drop the explicit forget functions from the lifted eliminators, and the second to lift the indexer to a projection (in the forgetful direction, \textsc{Section} replaces \textsc{Retraction}). Finally, \textsc{Equivalence} lifts \textit{A} along the equivalence to packed \textit{B}. The remaining derivations recurse predictably.
### 5 Implementation
The DEVOID Coq plugin implements the algorithms from Section 4; the link to the code is in the supplementary material. DEVOID cannot produce an ill-typed term, since Coq type checks all terms that plugins produce and rejects ill-typed terms. The implementations of `Find ornament` (search.ml) and `Lift` (lift.ml) are mostly the same as the algorithms, but with changes to address implementation challenges that scale the algorithms to a Coq tool for proof engineers. This section describes a sample of these changes from each of three categories: addressing differences between Coq and the type theory that the algorithms assume (Section 5.1), optimizing for efficiency (Section 5.2), and improving usability (Section 5.3).
5.1 Addressing Language Differences
Fixpoints. Coq implements eliminators in terms of pattern matching and fixpoints. To handle terms that use these features, DEVOID includes a Preprocess command that translates these terms into equivalent eliminator applications. This command can preprocess a definition (like zip from Section 2) or an entire module (like List, as shown in ListToVect.v) for lifting. It currently supports fixpoints that are structurally recursive on only immediate substructures. To translate such a fixpoint, it first extracts a motive, then generates each case by partially reducing the function’s body under a hypothetical context for the constructor arguments. This is enough to preprocess List; Section 8 discusses possible extensions.
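As a rough illustration (not DEVOID's actual output), the translation Preprocess performs turns a structurally recursive fixpoint into an application of the type's eliminator; here for a hypothetical length function over lists:

```coq
(* A structurally recursive fixpoint over lists... *)
Fixpoint len {T : Type} (l : list T) : nat :=
  match l with
  | nil => 0
  | cons _ tl => S (len tl)
  end.

(* ...and an equivalent term phrased as an eliminator application,
   the general shape that Preprocess targets. The extracted motive
   here is the constant function (fun _ => nat). *)
Definition len' {T : Type} (l : list T) : nat :=
  @list_rect T (fun _ => nat) 0 (fun _ _ IH => S IH) l.
```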
Non-Primitive Projections. By default, projections in Coq are non-primitive. That is, this:
\[ \forall\; (T : \text{Type})\; (v : \Sigma (n : \text{nat}).\,\text{vector } T\, n),\; v = \exists\; (\pi_l\, v)\; (\pi_r\, v). \]
cannot be proven by reflexivity alone (see Projections.v). Therefore, DEVOID must pack terms like v into existentials; otherwise, lifting will sometimes fail. This is why the type of zip_with_is_zipV_p in the example from Section 2 packs v₁ and v₂. For the sake of performance and readability of lifted code, DEVOID is strategic about when it packs.
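A minimal Coq sketch of this behavior, using an assumed simple sigma type in place of packed vectors:

```coq
(* With non-primitive projections, eta for sigma types does not hold
   definitionally, so reflexivity alone fails on the packing equation;
   destructing the pair first exposes a constructor and lets it succeed. *)
Goal forall (v : {n : nat & n = n}), v = existT _ (projT1 v) (projT2 v).
Proof.
  intros v.
  (* reflexivity.   <- fails here: v is not a constructor application *)
  destruct v. reflexivity.
Qed.
```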
Constants. Because Coq has constants, the implementation of Normalize refolds [3] after normalizing. That is, it acts like the simpl tactic in Coq, but with special support for sigma types. For example, to lift the cons constructor of a list, after normalizing the promotion of cons t l, DEVOID substitutes the projections of the promotion of l for their normal forms, which determines and saves the following fact:
\[ \forall\; \{T\}\; (t : T)\; (l : \text{list } T),\; \uparrow (\text{cons } t\; l) = \exists\; (S\, (\pi_l (\uparrow l)))\; (\text{cons}_V\; (\pi_l (\uparrow l))\; t\; (\pi_r (\uparrow l))). \]
Refolding helps produce more readable lifted code. It also improves lifting performance, since it occurs just once for each constructor.
5.2 Optimizing for Efficiency
Delayed Reduction. When lifting eliminators, DEVOID computes a list of arguments and delays reduction. It computes this list backwards, storing the new indices that inductive hypotheses refer to as it recurses. This removes the call to new in the premise of Drop-Index.
Lazy η-Expansion. The lifting algorithm assumes that all terms are fully η-expanded. Sometimes, however, η-expansion is not necessary. For efficiency, rather than fully η-expand ahead of time, DEVOID η-expands lazily, only when it is necessary for correctness.
Caching. To prevent extra recursion, DEVOID caches the outputs of search, as well as lifted constants, inductive types, and constructors. Since these are constants, lookup is low-cost.
5.3 Improving Usability
Correctness Proofs. DEVOID has options (used in Example.v) that tell search to generate proofs that its outputs are correct, thereby increasing confidence in and usefulness of those outputs. The proof of coherence is reflexivity. The intuition behind the automation to prove section and retraction (equivalence.ml) is that promote and forget map along corresponding constructors, so inductive cases preserve equalities. Thus, each inductive case of these proofs is generated by a fold that rewrites each recursive reference, with reflexivity as identity.
Unpacking. DEVOID includes an `Unpack` command (used in `Example.v`) that unpacks packed types in functions and proofs. This way, users may access unpacked terms without writing boilerplate code. For simple functions, this command packs arguments and projects results. It splits higher-order functions into two functions. For proofs that use equality, it applies one lemma to convert to dependent equality, and one lemma to deal with non-primitive projections.
User-Friendly Types. `Example.v` describes how the user can recover user-friendly types after unpacking. For example, to recover a function with an output of type `vector T n`, the user lifts a proof that the length of the output of the unlifted `list` version of that function is `n`, then rewrites by that lifted proof. The intuition behind this is that this equivalence holds:
\[
\{\, l : \text{list } T \;\&\; \text{length } l = n \,\} \simeq \text{vector } T\, n
\]
Recovering a user-friendly type for a proof relating these functions is more complex, since it necessitates reasoning at some point about equalities between equalities. For some index types like `nat`, this follows simply from the fact that the type forms an h-set [32]: all proofs of equality between the same two terms of that type are equal. There is preliminary work on determining a general methodology for deriving user-friendly types for proofs that does not rely on any properties of the index type. The idea is to use the adjunction condition along with the proof of coherence by reflexivity; see GitHub issue #39 for the status of this work.
6 Case Study
We used DEVOID to automatically discover and lift along ornaments for two scenarios:
1. Single Iteration: from binary trees to sized binary trees
2. Multiple Iterations: from binary trees to binary search trees to AVL trees
For comparison, we also used the ornaments that DEVOID discovered to lift functions and proofs using Equivalences for Free! [30] (EFF), a more general framework for lifting across equivalences. DEVOID produced faster functions and smaller terms, especially when composing multiple iterations of lifting. In addition, DEVOID imposed little burden on the user, and the ornaments DEVOID discovered proved useful to EFF.
We chose EFF for comparison because DEVOID is the only tool for ornaments in Coq, and because doing so demonstrates the benefits of specialized automation for ornaments. DEVOID can handle only a small class of equivalences compared to EFF, and it can currently handle only incremental changes to types (one new index at a time). Our experiences suggest that it is possible to use both tools in concert. Section 7 discusses EFF in more detail.
Setup. The case study code is in the `eval` folder of the repository. For each scenario, we ran DEVOID to search for an ornament, and then lifted functions and proofs along that ornament using both DEVOID and EFF. We noted the amount of user interaction (Section 6.1), as well as the performance of lifted terms (Section 6.2). To test the performance of lifted terms, we tested runtime by taking the median of ten runs using `Time Eval vm_compute` with test values in Coq 8.8.0, and we tested size by normalizing and running `coqwc` on the result.²
² i5-5300U, at 2.30GHz, 16 GB RAM
In the first scenario, we lifted traversal functions along with proofs that their outputs are permutations of each other from binary trees (tree) to sized binary trees (Sized.tree). In the second scenario, we lifted the traversal functions to AVL trees (avl) through four intermediate types (one for each new index), and we lifted a search function from BSTs (bst) to AVL trees through one intermediate type. Both scenarios considered only full binary trees.
To fit bst and avl into algebraic ornaments for DEVOID, we used boolean indices to track invariants. While the resulting types are not the most natural definitions, this scenario demonstrates that it is possible to express interesting changes to structured types as algebraic ornaments, and that lifting across these types in DEVOID produces efficient functions.
6.1 User Experience
For each intermediate type in each scenario, we used DEVOID to discover the components of the equivalence. These components were enough for DEVOID to lift functions and proofs with no additional proof burden and no additional axioms. To use EFF, we also had to prove that these components form an equivalence; we set the appropriate option to generate these proofs using DEVOID. In addition, to use EFF, we had to prove univalent parametricity of each inductive type; these proofs were small, but required specialized knowledge. To lift the proof of the theorem pre_permutes using EFF, we had to prove the univalent parametric relation between the unlifted and lifted versions of the functions that the theorem referenced; this pulled in the functional extensionality axiom, which was not necessary using DEVOID.
In the second scenario, to simulate the incremental workflow DEVOID requires, we lifted to each intermediate type, then unpacked the result. For example, the ornament from bst to avl passed through an intermediate type; we lifted search to this type first, unpacked the result, and then repeated this process. In this scenario, using EFF differently could have saved some work relative to DEVOID, since with EFF, it is possible to skip the intermediate type;³ DEVOID is best fit where an incremental workflow is desirable.
6.2 Performance
Relative to EFF, DEVOID produced faster functions. Table 1 summarizes runtime in the first scenario for preorder, and Table 2 summarizes runtime in the second scenario for preorder and search. The inorder and postorder functions performed similarly to preorder. The functions DEVOID produced imposed modest overhead for smaller inputs, but were tens to hundreds of times faster than the functions that EFF produced for larger inputs. This performance gap was more pronounced over multiple iterations of lifting.
DEVOID also produced smaller terms: in the first scenario, 13 vs. 25 LOC for preorder, 12 vs. 24 LOC for inorder, and 17 vs. 29 LOC for postorder; and in the second scenario, 21 vs. 120 LOC for preorder, 20 vs. 119 LOC for inorder, 24 vs. 125 LOC for postorder, and 31 vs. 52 LOC for search. In the first scenario, the lifted proof of pre_permutes using DEVOID was 85 LOC; the lifted proof of pre_permutes using EFF was 1463184 LOC.
We suspect DEVOID provided these performance benefits because it directly lifted induction principles, whereas EFF produced lifted functions in terms of unlifted functions. The multiple iteration case in particular highlights this, since EFF’s approach makes lifted terms much slower and larger as the number of iterations increases, while DEVOID’s approach does not.
³ The performances of the terms that EFF produces are sensitive to the equivalence used; for a 100 node tree, this alternate workflow produced a search function which is hundreds of times slower and traversal functions which are thousands of times slower than the functions that DEVOID produced. In addition, the lifted proof of pre_permutes using EFF failed to normalize with a timeout of one hour.
Table 1 Median runtime (ms) of unlifted (tree) and lifted (Sized.tree) preorder over ten runs with test inputs ranging from about 10 to about 100000 nodes.
<table>
<thead>
<tr>
<th></th>
<th>10</th>
<th>100</th>
<th>1000</th>
<th>10000</th>
<th>100000</th>
</tr>
</thead>
<tbody>
<tr>
<td>preorder</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Unlifted</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>3.0 (1.00x)</td>
<td>37.0 (1.00x)</td>
</tr>
<tr>
<td>Devoid</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>3.0 (1.00x)</td>
<td>35.0 (0.95x)</td>
</tr>
<tr>
<td>EFF</td>
<td>0.0</td>
<td>1.0</td>
<td>27.0</td>
<td>486.5 (162.17x)</td>
<td>8078.5 (218.33x)</td>
</tr>
</tbody>
</table>
Table 2 Median runtime (ms) of unlifted (tree) and lifted (avl) preorder, plus unlifted (bst) and lifted (avl) search, over ten runs with inputs ranging from about 10 to about 100000 nodes.
<table>
<thead>
<tr>
<th></th>
<th>10</th>
<th>100</th>
<th>1000</th>
<th>10000</th>
<th>100000</th>
</tr>
</thead>
<tbody>
<tr>
<td>preorder</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Unlifted</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>3.0 (1.00x)</td>
<td>37.0 (1.00x)</td>
</tr>
<tr>
<td>Devoid</td>
<td>71.5</td>
<td>71.0</td>
<td>69.0</td>
<td>75.0 (25.00x)</td>
<td>109.0 (2.95x)</td>
</tr>
<tr>
<td>EFF</td>
<td>1.0</td>
<td>11.0</td>
<td>152.0</td>
<td>2976.5 (992.17x)</td>
<td>56636.5 (1530.72x)</td>
</tr>
<tr>
<td>search</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Unlifted</td>
<td>0.0</td>
<td>0.0</td>
<td>2.0 (1.00x)</td>
<td>3.0 (1.00x)</td>
<td>29.0 (1.00x)</td>
</tr>
<tr>
<td>Devoid</td>
<td>12.0</td>
<td>14.0</td>
<td>12.0 (6.00x)</td>
<td>15.0 (5.00x)</td>
<td>50.0 (1.72x)</td>
</tr>
<tr>
<td>EFF</td>
<td>1.0</td>
<td>5.0</td>
<td>67.0 (33.50x)</td>
<td>1062.0 (354.00x)</td>
<td>15370.5 (530.02x)</td>
</tr>
</tbody>
</table>
7 Related Work
Ornaments. DEVOID automates discovery of and lifting across algebraic ornaments in a higher-order dependently typed language. In the decade since the discovery of ornaments [23], there have been a number of formalizations and embedded implementations of ornaments [10, 19, 11, 20, 9]. DEVOID is the first tool for ornamentation to operate over a non-embedded dependently typed language. It essentially moves the automation-heavy approach of Ornamentation in ML [33], which operates on non-embedded ML code, into the type theory that forms the basis of theorem provers like Coq. In doing so, it takes advantage of the properties of algebraic ornaments [23]. It also introduces the first search algorithm to identify ornaments, the absence of which was previously identified as a “gap” in the literature [20].
Lifting Proofs. DEVOID identifies and lifts proofs along a specific equivalence similar to that from existing ornaments work [20]. The need to automatically lift functions and proofs across equivalences and other relations is a long-standing challenge for proof engineers [22, 1, 21, 16, 34, 6]. The univalence axiom from Homotopy Type Theory [32] enables transparent transport of proofs; cubical type theory [5] gives univalence a constructive interpretation.
Our work is closely related to Equivalences for Free! [30], which brings this full circle, using mathematical properties of univalence to enable lifting across equivalences in a substantial subset of CICω without relying on the univalence axiom. In doing so, it introduces and formalizes the relation that our specification depends upon, and implements a framework for lifting in Coq. This framework is more general than DEVOID: it lifts along any equivalence, not just ornamental promotions, and can handle opaque terms, with the caveat that users must prove each equivalence themselves. DEVOID requires non-opaque terms and lifts along the class of equivalences that correspond to ornamental promotions, taking advantage of the mathematical properties of ornaments to eliminate the need for explicit applications of section and retraction, and to discover and prove certain equivalences automatically. These mathematical properties allow us to automatically lift the induction principle and eliminate references to old terms, which is beneficial for performance.
Similarly, our work is related to CoqEAL [6], which transfers functions along arbitrary relations between types. As these relations do not necessarily need to be equivalences, this framework is more general than our work. Similar tradeoffs between automation and generality apply: CoqEAL produces functions that refer to the old type, and does not yet support automatic inference of relations. In addition, CoqEAL currently only supports automatic transfer of functions, and does not yet handle proofs.
These tools may provide an alternative backend for DEVOID. Furthermore, our search algorithm may help discover relations that make these tools easier to use, and our lifting algorithm may help improve automation and efficiency for certain relations in these tools.
Program and Proof Reuse. The problem that we solve is fundamentally about proof reuse, which applies software reuse principles to ITPs. There is a wealth of work in proof reuse, from tactic languages [15] and logical frameworks [4], to tools for proof abstraction and generalization [26, 18], to domain-specific methodologies [12] and frameworks [13].
DEVOID focuses on the specific problem of reuse when adding fully-determined indices to types. Other approaches to this problem include combinators which definitionally reduce to desirable terms [14] in the language Cedille, and automatic generation of conversion functions in Ghostbuster [24] for GADTs in Haskell. Our work focuses on a type theory different from both of these, in which the properties that allow for such combinators in Cedille are not present, and in which dependent types introduce challenges not present in Haskell.
DEVOID is not the first tool to combine search with reuse. Optician [25] synthesizes bidirectional string transformations; a similar approach may help extend tooling to handle transformations for low-level data. PUMPKIN PATCH [27] searches the difference in proofs for patches that can be used to repair proofs broken by changes; DEVOID uses a similar approach to identify functions that form an equivalence. The resulting tools are complementary: DEVOID supports the addition of indices and hypotheses, which PUMPKIN PATCH does not support; PUMPKIN PATCH supports changes in values, which DEVOID does not support.
8 Conclusions & Future Work
We presented DEVOID: a tool for searching for and lifting across algebraic ornaments in Coq. DEVOID is the first tool to lift across ornaments in a non-embedded dependently typed language, and to automatically infer certain kinds of ornaments from types alone. Our algorithms give efficient transport across equivalences arising from algebraic ornaments; our case study demonstrates that such automation can make lifted terms smaller and faster as part of an incremental workflow.
Future Work. A future version may support other ornaments beyond algebraic ornaments, with additional user interaction as needed; this may help support, for example, the ornament between nat and list, where list has a new element in the cons case. A future version may loosen restrictions on input types to support adding constructors while preserving inductive structure, recursive references under products, and coinductive types. Integrating with PUMPKIN PATCH [27] may help remove restrictions DEVOID makes about the hypotheses of B. Preprocess currently supports only certain fixpoints; a more general translation may help DEVOID support more terms, and discussions with Coq developers suggest that the implementation of such a translation building on work from the equations [28] plugin is in progress. Extending DEVOID to generate proofs of coherence conditions for lifted terms
may increase user confidence. Proofs that the commands that DEVOID implements satisfy their specifications may also increase user confidence. Better automating the recovery of user-friendly types may improve user experience.
ABSTRACT
The SAS macro language gives us the power to create tools that to a large extent can think for themselves. How often have you used a macro that required your input and you thought to yourself “Why do I need to provide this information when SAS already knows it?” SAS may well already know what you are being asked to provide, but how do we direct our macro programs to self-discern the information that they need? Fortunately there are a number of functions and other tools within SAS that can intelligently provide our programs with the ability to find and utilize the information that they require.
If you provide a variable name, SAS should know its type and length; given a data set name, the list of variables should be known; given a library or libref, the full list of data sets that it contains should be known. In each of these situations there are functions that can be utilized by the macro language to determine and return these types of information. Given a libref these functions can determine the library’s physical location and the list of all the data sets it contains. Given a data set they can return the names and attributes of any of the variables that it contains. These functions can read and write data, create directories, build lists of files in a folder, and even build lists of folders.
Maximize your macro’s intelligence; learn and use these functions.
KEYWORDS
Metadata, macro language, DATA step functions, %SYSFUNC, OPEN, ATTRN, ATTRC, FETCH, CLOSE
INTRODUCTION
There are a number of ways for a macro to determine information about the operating system, the SAS environment, libraries and the files that they contain, data sets and their variables, and variable attributes and the values that they contain (Carpenter, 2016). Among others these include the use of the X statement (and related statements and functions), DICTIONARY tables, SASHELP views, data sets created by PROC CONTENTS, system options, and automatic macro variables (Carpenter and Rosenbloom, 2016). Of these various approaches DATA step functions tend to be fastest and as a bonus they support the creation of user defined macro functions.
There are also a number of DATA step functions that can be used to obtain information about the SAS environment. Although used less often in the DATA step itself, these functions are invaluable to the macro programmer. Collectively they are often referred to as metadata functions, because they tend to return information about data sets and locations. In fact they actually have a much broader list of capabilities that extends beyond the metadata. These functions can also read and write data; create, count, and eliminate both directories and files; return path and location information; and much more. Of these functions, some of the more commonly used are highlighted here.
Within the macro language, these DATA step functions are typically accessed using the %SYSFUNC or %QSYSFUNC macro functions. For each of these functions, the first argument is the DATA step function (along with its own arguments) that is to be executed, and an optional format in the second argument to control the appearance of the value returned by the function in the first argument. In this example of a TITLE statement, the DATE function is called. It returns the current date as a SAS date, and this value is then formatted using the WORDDATE18. format. The generated TITLE statement will then contain the formatted date.
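The TITLE statement itself does not survive in this copy; a minimal sketch of what it might look like (the title text is assumed) is:

```sas
title "Data as of %sysfunc(date(), worddate18.)";
```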
Most of the macros in this paper which demonstrate these DATA step functions are themselves macro functions. As macro functions the macro itself returns a value. If you are unfamiliar with the coding techniques used to create user written macro functions, you may want to review Carpenter, 2002 for more detail than will be described here.
**VIEWING DATA SET METADATA**
Metadata is data about the data set. Each SAS data set automatically stores, as a part of the data set, information about the data set itself, hence metadata. This is the information that you see when you execute a PROC CONTENTS. Although of interest to the DATA step programmer, access to the metadata can provide huge benefits to the macro programmer.
Typically the CONTENTS procedure is used just to display the metadata, however it can also be used to store the metadata in a data set as well. This data set has one row per variable and it contains information on the individual variables as well as data set specific information such as the number of observations in the data set. A portion of the metadata data set generated by CONTENTS for the SASHELP.CLASS data set is shown below.
```sas
proc contents data=sashelp.class out=classvar noprint;
run;
```
One row per variable per data set can be obtained by using `data=sashelp._all_`. This is similar to the information contained in the view SASHELP.VCOLUMN which is shown in the next example.
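A sketch of that request, following the pattern above (the output data set name ALLVARS is assumed):

```sas
/* One row per variable per data set in the SASHELP library */
proc contents data=sashelp._all_ out=allvars noprint;
run;
```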
Without executing a PROC CONTENTS, similar information can be found in the SASHELP view VCOLUMN, however this view has one row per variable per data set per library known to SAS.
Almost the same information can also be surfaced through the use of the SQL DICTIONARY table COLUMNS. The DESCRIBE statement can be used to list the names of the columns, while the SELECT statement can write the values held in the table.
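As a sketch of both statements (the WHERE clause values are illustrative; DICTIONARY table values are stored in upper case):

```sas
proc sql;
   /* List the columns available in DICTIONARY.COLUMNS */
   describe table dictionary.columns;
   /* Write selected metadata values for one data set */
   select libname, memname, name, type, length
      from dictionary.columns
      where libname = 'SASHELP'
        and memname = 'CLASS';
quit;
```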
Both SASHELP.VCOLUMN and DICTIONARY.COLUMNS are created when they are requested. This means that they will always contain current information. This also means that there can be a significant wait while these tables are created. The metadata functions described in this paper do not have this limitation.
Viewing the metadata is oftentimes sufficient, but what if during the execution of a program we need to gather some metadata item and use it in the program? How can we have our program do this dynamically? Yes we can! Enter the DATA step’s metadata functions.
**DATA SET METADATA FUNCTIONS**
The DATA step metadata functions operate directly against the metadata of a SAS data set. This means that we can avoid the expenditure of resources that are needed to create the SASHELP.VCOLUMN view or the DICTIONARY.COLUMNS table. It also means that through the use of these functions we can dynamically gather and use metadata in our programs directly.
Since these are DATA step functions, when they are to be used with the macro language you will need to invoke them through the use of the %SYSFUNC or %QSYSFUNC macro functions. This allows the DATA step function to execute during macro execution and to return the value to the macro facility.
Opening the Metadata – making it available for our use
The data set’s metadata is not automatically available for our use. Before we can access it we need to get “permission” to look at the data set’s descriptor record which contains the metadata. We gain this access through the use of the OPEN function. This function checks to see if the data set is currently available for our use. If the data set is available, the OPEN function returns a non-zero data set identifier. We never really care what the identifier is, as long as it is not zero (which would indicate that we have been denied access to the data set). After we have utilized the information in the metadata we need to release the data set so that other programs or programmers can use it. We do this by closing our access through the use of the CLOSE function.
The macro %META shown here is just a shell that highlights the use of the OPEN and CLOSE functions. These functions will almost always be present when working with a data set’s metadata.
```sas
%macro meta(dsn=class);
%local dsid;
%let dsid = %sysfunc(open(&dsn)); ➊
%if &dsid %then . . . .
<<<. . . .macro statements. . . .>>>>
%let dsid = %sysfunc(close(&dsid)); ➋
%mend meta;
```
➊ The data set is opened with the OPEN function, which returns the data set identifier. ➋ The data set is closed with the CLOSE function. This function returns a 0 for success. In this example macro the identifier is also cleared by replacing it with the value returned by the CLOSE function.
The data set identifier that is returned by the OPEN function is used by many of the other metadata functions. Having a unique identifier associated with a given data set is necessary as you may wish to open the metadata of more than one data set at a time. Usually this identifier will be stored in a local macro variable. Remember although we need to store the value of the identifier, the value itself is rarely of specific interest.
Variable Information Functions
As the name implies, variable information functions return information about the variables in the data set. These functions are commonly used when you need to ask questions such as: “Is the ABC variable on this data set?”, “Does the ABC variable have a format?”, and “What is the type of the ABC variable?”. The functions of this type all start with the letters ‘VAR’; however, not all functions that start with ‘VAR’ fit the category.
<table>
<thead>
<tr>
<th>Function Name</th>
<th>Returns the:</th>
</tr>
</thead>
<tbody>
<tr>
<td>VARFMT</td>
<td>Variable’s assigned format.</td>
</tr>
<tr>
<td>VARINFMT</td>
<td>Variable’s informat.</td>
</tr>
<tr>
<td>VARLABEL</td>
<td>Label of a variable.</td>
</tr>
<tr>
<td>VARLEN</td>
<td>Length of a variable.</td>
</tr>
<tr>
<td>VARNAME</td>
<td>Name of a SAS data set variable.</td>
</tr>
<tr>
<td>VARNUM</td>
<td>Number of a variable’s position.</td>
</tr>
<tr>
<td>VARTYPE</td>
<td>Data type of a SAS data set variable (C or N)</td>
</tr>
</tbody>
</table>
Table 1: Variable Information Functions
One of the most commonly accessed metadata attributes is information about variable names and the existence of variables on a data set. The two functions that apply specifically to the variable name are VARNAME and VARNUM. Given a variable number (position on the PDV), the VARNAME function returns the name of the variable. The VARNUM function is the opposite, as it provides the position number given the variable name.
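As a minimal sketch of the two functions working in opposite directions (the macro name %VARPAIR and the use of SASHELP.CLASS are illustrative assumptions, not from the paper):

```sas
%macro varpair(dsn=sashelp.class);
   %local dsid name num;
   %let dsid = %sysfunc(open(&dsn));
   %if &dsid %then %do;
      %* VARNAME: from position 1 on the PDV to the variable name;
      %let name = %sysfunc(varname(&dsid,1));
      %* VARNUM: from that name back to its position;
      %let num = %sysfunc(varnum(&dsid,&name));
      %put NAME=&name NUM=&num;
      %let dsid = %sysfunc(close(&dsid));
   %end;
%mend varpair;
```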
The macro %VAREXIST checks to see if a specific variable exists on a specified data set. The name of the variable of interest is passed to the VARNUM function, which in turn returns the variable’s position. If the variable does not exist on the data set the VARNUM function returns a 0.
The first argument of the VARNUM function is the data set identifier (obtained from the OPEN function), and the second is the name of the variable of interest. In this example the value returned by the VARNUM function is stored in the local macro variable &VNUM.
The macro %VAREXIST is written as a macro function. The value returned by VARNUM is passed out of the macro, and the macro is said to resolve to the returned value. This means that the macro call can be used in other statements, such as an %IF, to make decisions about further processing.
Data Set Attribute Functions
When a data set is created or modified a number of attributes of the data set are stored in the metadata. These attributes include things like:
- when the data set was created
- how many variables it contains
- how many observations it contains (there can be more than one answer)
- data set size
- status of indexes, WHERE clauses, and passwords.
The two functions that return this type of attribute information are the ATTRC (returns character information) and ATTRN (returns numeric information) functions. Each of these functions can return a number of different attributes that can be selected by the user. These requests are made by specifying an attribute as the second argument to the function. Some of the available attribute request options are shown in Table 2.
<table>
<thead>
<tr>
<th>Function Name</th>
<th>Attribute Request</th>
<th>Returns the:</th>
</tr>
</thead>
<tbody>
<tr>
<td>ATTRC</td>
<td>COMPRESS</td>
<td>Compression status</td>
</tr>
<tr>
<td></td>
<td>ENGINE</td>
<td>Name of the engine used to create the data set</td>
</tr>
<tr>
<td></td>
<td>LABEL</td>
<td>Data set's label</td>
</tr>
<tr>
<td></td>
<td>LIB</td>
<td>Location of the data set, the Library (libref) name</td>
</tr>
<tr>
<td></td>
<td>MEM, DSNAME</td>
<td>Name of the data set</td>
</tr>
<tr>
<td></td>
<td>SORTEDBY</td>
<td>List of BY variables used to sort the data</td>
</tr>
<tr>
<td></td>
<td>TYPE</td>
<td>Data set type</td>
</tr>
<tr>
<td>ATTRN</td>
<td>CRDTE, MODTE</td>
<td>Datetime the data set was created or last modified</td>
</tr>
<tr>
<td></td>
<td>ISINDEX, INDEX</td>
<td>Status on indexes for this data set</td>
</tr>
</tbody>
</table>
```sas
%macro varexist(dsn=class,varname=age);
%local dsid vnum;
%let vnum=0;
%let dsid = %sysfunc(open(&dsn));
%if &dsid %then %do;
   %let vnum = %sysfunc(varnum(&dsid,&varname));
   %let dsid = %sysfunc(close(&dsid));
%end;
&vnum
%mend varexist;
```
```sas
%if %varexist(dsn=&dset,varname=&var) %then %do;
```
<table>
<thead>
<tr>
<th>Function Name</th>
<th>Attribute Request</th>
<th>Returns the:</th>
</tr>
</thead>
<tbody>
<tr>
<td>ATTRN</td>
<td>NDEL, NLOBS, NLOBSF, NOBS</td>
<td>Number of observations in the data set based on various conditions (primarily whether or not to count those marked for deletion)</td>
</tr>
<tr>
<td></td>
<td>NVARS</td>
<td>Number of variables in the data set</td>
</tr>
<tr>
<td></td>
<td>various</td>
<td>Read, write, and alter password status</td>
</tr>
</tbody>
</table>
Table 2 (continued): Data Set Attribute Functions
The use of the ATTRN function is demonstrated in the macro %VARLIST which creates a list of variable names of a specified type (numeric or character) for the data set of interest. The user passes in the data set name and whether or not to select a specific type of variable. The macro then returns the list of variables (or a blank if no variables meet the criteria).
The ATTRN function is used with the NVARS attribute to request the number of variables in the data set. The index (&I) for the %DO loop will then cycle from 1 to the number of variables.
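The %VARLIST macro is not reproduced in this excerpt; the following is a hedged reconstruction based on the description above (the parameter names and defaults are assumptions):

```sas
%macro varlist(dsn=sashelp.class, type=N);
   %local dsid i list;
   %let dsid = %sysfunc(open(&dsn));
   %if &dsid %then %do;
      %* NVARS returns the number of variables; step through each position;
      %do i = 1 %to %sysfunc(attrn(&dsid,nvars));
         %if %sysfunc(vartype(&dsid,&i)) = &type %then
            %let list = &list %sysfunc(varname(&dsid,&i));
      %end;
      %let dsid = %sysfunc(close(&dsid));
   %end;
   &list
%mend varlist;
```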
Because the number of observations in a data set is stored in the data set's metadata, the fastest way to determine the number of observations is to access the metadata directly using the macro language. The data set is opened; if it cannot be opened, this macro returns a dot (.) for the number of observations, and a warning along with the reason (using the SYSMSG function) is written to the SAS Log. Otherwise, the NLOBS attribute is used with the ATTRN function to return the number of non-deleted observations; this takes into account any observations that may have been marked for deletion during an interactive session using PROC FSEDIT or a similar tool. The data set is closed after retrieving the value of interest. The number of observations is returned by the %OBSCNT macro.
Notice that all the macro variables created by the %OBSCNT macro are forced onto the local symbol table.
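Based on this description, a sketch of %OBSCNT might look like the following (a reconstruction, not the author's exact code):

```sas
%macro obscnt(dsn);
   %local dsid nobs;
   %let nobs = .;
   %let dsid = %sysfunc(open(&dsn));
   %if &dsid %then %do;
      %* NLOBS: number of observations not marked for deletion;
      %let nobs = %sysfunc(attrn(&dsid,nlobs));
      %let dsid = %sysfunc(close(&dsid));
   %end;
   %else %put WARNING: &dsn not opened: %sysfunc(sysmsg());
   &nobs
%mend obscnt;
```

Because the macro resolves to the count, it can be used in open code, e.g. `%put There are %obscnt(sashelp.class) observations.;`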
**READING DATA USING FUNCTIONS**
Although not a common requirement, it is possible to both read and write data held in a SAS data set using the macro language. Once a data set has been opened, you can read observations both sequentially and using random access techniques.
<table>
<thead>
<tr>
<th>Function Name</th>
<th>Action:</th>
</tr>
</thead>
<tbody>
<tr>
<td>CUROBS</td>
<td>Current observation number</td>
</tr>
<tr>
<td>FETCH</td>
<td>Reads the next non-deleted observation from a SAS data set into the Data Set Data Vector (DDV)</td>
</tr>
<tr>
<td>FETCHOBS</td>
<td>Reads a specified observation from a SAS data set into the Data Set Data Vector (DDV)</td>
</tr>
<tr>
<td>GETVARN</td>
<td>Returns the value of a numeric variable</td>
</tr>
<tr>
<td>GETVARC</td>
<td>Returns the value of a character variable</td>
</tr>
<tr>
<td>NOTE/DROPNOTE</td>
<td>NOTE stores a unique observation ID number</td>
</tr>
<tr>
<td>POINT</td>
<td>Locates the observation identified by NOTE</td>
</tr>
<tr>
<td>REWIND</td>
<td>Returns the observation pointer to the beginning of the data set</td>
</tr>
<tr>
<td>CALL SET</td>
<td>Links data set variables to macro variables of the same name</td>
</tr>
</tbody>
</table>
Table 3: Data Access Functions
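As an illustration of sequential access with these functions (a sketch; the macro name and the assumption that the data set contains a character variable NAME are mine, not the paper's):

```sas
%macro listnames(dsn=sashelp.class);
   %local dsid vnum;
   %let dsid = %sysfunc(open(&dsn));
   %if &dsid %then %do;
      %let vnum = %sysfunc(varnum(&dsid,name));
      %* FETCH returns 0 for each successful sequential read;
      %do %while(%sysfunc(fetch(&dsid)) = 0);
         %put %sysfunc(getvarc(&dsid,&vnum));
      %end;
      %let dsid = %sysfunc(close(&dsid));
   %end;
%mend listnames;
```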
The macro `%SYMCHECK` (Carpenter, 2016, Exp. 9.2.1b) can be used to determine if a given macro variable is currently defined on a specific symbol table. The view SASHELP.VMACRO is used as input for the macro. As it is opened, a WHERE clause is used to limit the read to the specific row of interest.
A successful read indicates that the row exists, and the FETCH function returns a 0 for success.
&RC will contain a 0 if the specified macro variable does not exist, and a 1 if it does. Regardless, this value is returned by the macro.
The `%M_ALL_DATA` macro, shown next, can be used to mimic the functionality of the DATA step’s CALL EXECUTE routine. In this macro both the FETCHOBS function and the SET routine are used (Carpenter, 2016, Exp. 9.2.2d) to build the macro variables. It creates a macro variable for each variable in the named data set, and these macro variables are then populated using the data in the data set.
A typical usage of this macro would be to call another macro at this point that would utilize the newly created observation-specific macro variables.
For the first observation in the SASHELP.CLASS data set, the SAS Log shows that a local macro variable has been created for each variable in the incoming data set. The values of those macro variables correspond to the values of the data set variables.
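%M_ALL_DATA is not reproduced in this excerpt; the following sketch, built from the description above, shows the FETCHOBS/SET pattern (names and details are assumptions):

```sas
%macro m_all_data(dsn=sashelp.class);
   %local dsid rc i;
   %let dsid = %sysfunc(open(&dsn));
   %if &dsid %then %do;
      %* link each data set variable to a macro variable of the same name;
      %syscall set(dsid);
      %do i = 1 %to %sysfunc(attrn(&dsid,nlobs));
         %* each FETCHOBS repopulates the linked macro variables;
         %let rc = %sysfunc(fetchobs(&dsid,&i));
         %* a reporting macro could be called here using those macro variables;
      %end;
      %let dsid = %sysfunc(close(&dsid));
   %end;
%mend m_all_data;
```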
The important thing to note in this example is that for each variable in the data set, a macro variable has been created with the same name and value as the variable. This can be very advantageous when there are a large number of macro variables that need to be created, especially so if we do not necessarily know the names of the macro variables. This is demonstrated in the following example, which calls the macro %SHOE_RPT once for each observation in a control file.
A control file is constructed with the desired attributes. In this case the two variables in the WORK.SHOE_RPT data set will become the parameters that will be used with a reporting macro named %SHOE_RPT. This macro depends on two macro variables (&REGION and &PRODUCT).
To take advantage of the control data set (WORK.SHOE_RPT) the macro %M_ALL_DATA has been slightly modified at ➒. It will now call a macro with the same name as the name of the control data set (SHOE_RPT), and since it is called inside the %DO loop, it will be called once for each observation. For slippers sold in Asia the report to the left is generated.
In a blog Leonid Batkhan (Batkhan, 2016) uses the SET routine along with the FETCH function to read variable attributes.
**WORKING WITH SAS FILES (LIBRARIES, DATA SETS, AND CATALOGS)**
Creating and maintaining libraries of SAS files, like data sets and catalogs, can also be managed using functions. While these functions do not use the metadata of SAS data sets, they return information about the libraries – effectively metadata about the directory. These functions are often used in conjunction with those already described.
<table>
<thead>
<tr>
<th>Function Name</th>
<th>Action:</th>
</tr>
</thead>
<tbody>
<tr>
<td>CEXIST, EXIST</td>
<td>Whether or not the entity exists</td>
</tr>
<tr>
<td>LIBNAME</td>
<td>Establish a libref</td>
</tr>
<tr>
<td>LIBREF</td>
<td>Check for libref existence</td>
</tr>
<tr>
<td>PATHNAME</td>
<td>Return the physical path for a libref or fileref</td>
</tr>
<tr>
<td>RENAME</td>
<td>Rename a data set or catalog</td>
</tr>
</tbody>
</table>
Table 4: SAS File Functions
When you need to work with libraries of SAS files (data sets or catalogs) or with SAS files as entities, the functions in Table 4 can be of assistance. The LIBNAME and LIBREF functions can be used to establish or check the existence of a library. While the CEXIST and EXIST functions are used to determine whether or not a catalog or data set exists.
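For example, the EXIST function can guard the creation of a data set (a sketch; the macro and data set names are illustrative):

```sas
%macro initmaster;
   %* EXIST returns 0 when WORK.MASTER does not yet exist;
   %if %sysfunc(exist(work.master)) = 0 %then %do;
      data work.master;
         set sashelp.class (obs=0);  %* structure only, no rows;
      run;
   %end;
%mend initmaster;
```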
The PATHNAME function returns a physical path given a libref or a fileref. In this example a series of data sets are to be copied from a SAS data directory with a libref of PROJMETA into an Excel workbook using the PCFILES engine. Because we want to create the workbook in the same directory as the original data, the PATHNAME function is used to return the current path to the libref PROJMETA.
```sas
libname toxls pcfiles
   path="%sysfunc(pathname(projmeta))\MyExcelData.xls";
proc datasets nolist;
   copy inlib=sashelp outlib=toxls;
   select class heart prdsale;
   quit;
libname toxls clear;
```
1. The PCFILES interpretation engine is selected for the new libref.
2. The PATHNAME function is used to return the physical path used by the PROJMETA libref.
3. The PROC DATASETS COPY statement is used to point to the incoming and outgoing librefs.
4. The SELECT statement specifies which data sets are to be copied.
5. The libref must be cleared before the new workbook can be used.
In the previous example, because an existing libref is used, we know that its associated folder also exists. Often we will be given a folder or path to use, and we will need to either verify that it is valid or create it if it does not already exist. Given a path/location, the %CHECKLIB macro uses the LIBNAME and LIBREF functions to establish or clear a libref.
```sas
%macro checklib(libref=,libpath=);
%local rc;
%if &libpath= %then %do;
%* Clear this libref;
%let rc=%sysfunc(libname(&libref)); ➏
%end;
%else %if %sysfunc(libref(&libref)) ne 0 %then %do;
%* Establish this libref;
%let rc=%sysfunc(libname(&libref,&libpath)); ➐
%put %sysfunc(sysmsg());
%end;
%else %do;
%put %sysfunc(sysmsg()); ➑
%put WARNING: LIBREF not reassigned;
%end;
%mend checklib;
```
**WORKING WITH DIRECTORIES**
There are a number of functions available that have been designed to work with directories or folders in much the same way that the functions discussed earlier work with data sets. These functions can be used to create new folders as well as to read the names of the files within a folder.
Common approaches to working with folders include the use of the X statement or one of its equivalents (%SYSTASK, CALL SYSTEM, etc.). Because these techniques tend to be OS dependent, they require the programmer to understand the language/commands of the OS sub-session. For the Windows OS these are DOS commands. You can learn more about DOS commands by using the help command, or by pairing help with the name of a command to get command-specific help.
```
x help;
x help md;
```
Here the X statement is used to create a Windows directory (using the MD command), and to write a list of SAS programs and data sets to a text file (using the DIR command).
```
x md "c:\temp\test"; ➊
x dir "c:\temp\*.sas" /o:n /b > "c:\temp\test\pgmlist.txt"; ➋
```
➊ The MD command is used to create a directory.
➋ The DIR command is used to write the names of the files that contain SAS in the extension to the text file PGMLIST.TXT.
The DATA step’s file and directory functions allow us to accomplish the same basic types of tasks as the X statement without resorting to OS specific syntax and without stepping out of the macro language. Some of the more common functions of this type are shown in Table 5.
<table>
<thead>
<tr>
<th>Function Name</th>
<th>Action:</th>
</tr>
</thead>
<tbody>
<tr>
<td>DOPEN and DCLOSE</td>
<td>Open and close a directory</td>
</tr>
<tr>
<td>DCREATE</td>
<td>Create a directory or sub folder</td>
</tr>
<tr>
<td>DREAD</td>
<td>Read the names of items within a directory</td>
</tr>
<tr>
<td>DNUM</td>
<td>Returns the number of items in a directory</td>
</tr>
<tr>
<td>FILEEXIST</td>
<td>Checks existence of a file or directory</td>
</tr>
<tr>
<td>MOPEN</td>
<td>Open a file within a directory</td>
</tr>
</tbody>
</table>
Table 5: Directory and File Functions
The %CHECKLOC macro utilizes the FILEEXIST function to determine if a directory exists, and if it does not, the DCREATE function is used to create it. This macro is written as a macro function that returns the full path of the directory (whether it already existed or was just created).
```sas
%macro CheckLoc(DirLoc=, DirName=); ③
%* if the directory does not exist, make it;
%if %sysfunc(fileexist("&dirloc\&dirname"))=0 %then %do;④
%put Create the directory: "&dirloc\&dirname";
%* Create the directory;
%sysfunc(dcreate(&dirname,&dirloc)) ⑤
%end;
%else %do;
%put The directory "&dirloc\&dirname" already exists;
&dirloc\&dirname ⑥
%end;
%mend checkloc;
```
③ The upper portion of the path along with the folder name is passed into the macro. ④ The FILEEXIST function is used to determine whether or not the specified folder already exists. This function returns a 0 when the folder does not already exist. ⑤ The DCREATE function can be used to make a directory. The first argument is the directory name and the second
is the upper portion of the path. Notice that the call to %SYSFUNC stands alone and is not a part of a complete statement. Because the DCREATE function returns the full path, this line will resolve to the full path, and it is this value that is passed out of the macro. ⑥ When the directory already exists, the full path is returned at this point in the macro.
⑦ Because %CHECKLOC is itself written as a function that returns an existing path, it can be used in a LIBNAME statement. Here %CHECKLOC establishes the path `c:\temp\test` (if it does not already exist), and it creates the libref TEMTEST.
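That LIBNAME usage might look like the following (a sketch consistent with the description above; the exact statement is not shown in this excerpt):

```sas
%* establishes c:\temp\test if needed and assigns the libref TEMTEST;
libname temtest "%checkloc(dirloc=c:\temp, dirname=test)";
```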
When you want to access the names of the files in a directory you will need to open and close the directory using the DOPEN and DCLOSE functions. Once the directory is opened, you can use the DREAD and DNUM functions to step through the files in the directory. The %FILLIST macro writes the names of all the SAS programs in a directory to the SAS Log.
```sas
%macro fillist(filerf=);
%local rc fid i fname;
%let fid = %sysfunc(dopen(&filerf)); ❶
%if &fid %then %do i = 1 %to %sysfunc(dnum(&fid)); ❷
   %let fname= %sysfunc(dread(&fid,&i)); ❸
   %if %upcase(%qscan(&fname,-1,.))=SAS %then %put &fname;
%end;
%let fid = %sysfunc(dclose(&fid)); ❹
%mend fillist;

filename saspgms 'c:\temp';
%fillist(filerf=saspgms)
filename saspgms clear;
```
❶ The directory specified in the `filerf` passed into the macro is opened for use. The directory identification number is saved in the macro variable &FID. ❷ The DNUM function returns the number of files in the directory. ❸ The name of the &Ith file is returned by the DREAD function, and the names of the SAS programs are written to the SAS Log. ❹ After using the directory, it is closed.
Establishing a process for each file in a directory would be a fairly straightforward expansion of the %FILLIST macro. Since the %DO loop will execute once for each file in the directory, any process that depends on the name of the file will have sufficient information.
In the next macro %FILLIST is expanded in a couple of important ways. The critical elements of %FILLIST remain, however in this macro, instead of just a %PUT, we include a process (to convert all CSV files in a directory to SAS data sets using PROC IMPORT). Not only do we need to convert the CSV files in the current directory, but in all subdirectories as well. This can be accomplished using a technique known as recursion.
A recursive macro is a macro that calls itself. The macro %FILLIST can be expanded to search for all files of a given type across a directory, including sub-directories, by making it recursive. The macro %RECURSIVEIMPORTDATA shown here is a simplified version of a macro written by Phuong Dinh of Cornerstone Research Inc. A similar, albeit even more simplistic, version of this macro appears in the SAS 9.4 Macro Language reference manual. Ostensibly this macro converts all CSV files in a folder (and subfolders) to SAS data sets and appends them into a single data set, however the important take away is that it uses recursion and a series of directory and file functions to search the sub-directories as well.
In this macro each file in a directory is examined by passing the macro the path of the directory of interest. If the file is a CSV file it is converted to a SAS data set, however if the file is a subdirectory, the macro is called again, this time with the path to the subdirectory. Because it is recursively called, this macro will crawl through an entire directory including all levels of subdirectories.
A unique fileref name is created. A level counter is used so that when the macro is called recursively the fileref created by the inner macro will not replace a fileref that already exists. Because a fileref is restricted to 8 characters, this macro cannot accommodate more than 1,000 levels (the highest level defaults to &LEVELCOUNTER=0).
The FILEEXIST function is used to check to make sure that the requested folder exists. If it does not exist a custom error message is written to the SAS Log and the macro terminates execution.
The FILENAME function is used to establish the fileref for this folder. Notice that although the name of the fileref is stored in &_REF, the ampersand is not used with the FILENAME function.
Assuming that the fileref (&_REF) is successfully established (&_RC=0), the folder designated by the fileref &_REF is opened for processing. The folder’s identification number is saved in &_DSID.
When the directory is successfully opened (&_DSID ne 0), a %DO loop is used to step through all of the files in the current folder. The number of files to be processed is returned by the DNUM function. The index of the %DO loop (&_I) will be used by the DREAD function.
The DREAD function is used to read the name of the &_Ith file. The file’s name is stored in the macro variable &_FILENAME.
The extension of the current file name is extracted using the %SCAN function. If there is no extension the name of the file will be returned. This macro assumes that all files except folders have extensions.
The file name has an extension of CSV. Import the CSV file, create a data set and append it to the growing data table.
This file does not have an extension and is therefore assumed to be a sub-directory. The macro %RECURSIVEIMPORTDATA is called with the name of the subfolder and with a level indicator increased by 1.
The directory is closed using the DCLOSE function.
The fileref for this folder is cleared. Remember that the macro variable &_REF is specified without the ampersand in the FILENAME function.
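Pulling the numbered steps above together, a hedged reconstruction of the recursive macro might look like this (a sketch of the described logic, not the original code by Phuong Dinh):

```sas
%macro RecursiveImportData(dir=, levelcounter=0);
   %local _ref _rc _dsid _i _filename _ext;
   %* unique fileref per recursion level (flvl0, flvl1, ...);
   %let _ref = flvl&levelcounter;
   %if %sysfunc(fileexist(&dir)) = 0 %then %do;
      %put ERROR: The folder &dir does not exist.;
      %return;
   %end;
   %* note: the fileref macro variable is named WITHOUT an ampersand;
   %let _rc = %sysfunc(filename(_ref,&dir));
   %if &_rc = 0 %then %do;
      %let _dsid = %sysfunc(dopen(&_ref));
      %if &_dsid %then %do;
         %do _i = 1 %to %sysfunc(dnum(&_dsid));
            %let _filename = %qsysfunc(dread(&_dsid,&_i));
            %let _ext = %upcase(%qscan(&_filename,-1,.));
            %if &_ext = CSV %then %do;
               %* convert the CSV and append it to the growing table;
               proc import datafile="&dir\&_filename"
                    out=work._new dbms=csv replace; run;
               proc append base=work.alldata data=work._new force; run;
            %end;
            %else %if &_ext = %upcase(&_filename) %then %do;
               %* no extension: assume a subdirectory and recurse;
               %RecursiveImportData(dir=&dir\&_filename,
                                    levelcounter=%eval(&levelcounter+1))
            %end;
         %end;
         %let _dsid = %sysfunc(dclose(&_dsid));
      %end;
      %let _rc = %sysfunc(filename(_ref));  %* clear the fileref;
   %end;
%mend RecursiveImportData;
```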
**RELATED FUNCTIONS AND OTHER TOOLS**
There are a number of less commonly used directory functions. Remember less commonly used does not necessarily mean less useful. These are most useful when you want to handle the directory as an entity.
<table>
<thead>
<tr>
<th>Function Name</th>
<th>Action:</th>
</tr>
</thead>
<tbody>
<tr>
<td>DINFO</td>
<td>Returns information about a directory</td>
</tr>
<tr>
<td>DOPTNAME</td>
<td>Returns directory attribute information</td>
</tr>
<tr>
<td>DOPTNUM</td>
<td>Returns the number of attribute items</td>
</tr>
</tbody>
</table>
Table 6: Other Directory Functions
The macro %DIRINFO can be used to show the directory information returned by the functions in Table 6. It has been my experience that, under Windows at least, only a limited amount of information is returned. It is possible that other operating systems and directory structures will return more useful information. Experiment - execute %DIRINFO on your OS.
Much like the functions that can be used to read and write data in a SAS data set, there are also a number of similar functions that can be used to read and write information to and from non-SAS controlled files. Some of those functions are shown in Table 7.
<table>
<thead>
<tr>
<th>Function Name</th>
<th>Action:</th>
</tr>
</thead>
<tbody>
<tr>
<td>FOPEN and FCLOSE</td>
<td>Opens and closes a specific file</td>
</tr>
<tr>
<td>FREAD</td>
<td>Reads a row into the File Data Buffer (FDB)</td>
</tr>
<tr>
<td>FAPPEND</td>
<td>Appends the current record to an existing file</td>
</tr>
<tr>
<td>FCOL</td>
<td>Current position on the FDB</td>
</tr>
<tr>
<td>FGET</td>
<td>Retrieves item from the FDB</td>
</tr>
<tr>
<td>FWRITE</td>
<td>Writes a record to an external file</td>
</tr>
<tr>
<td>FDELETE</td>
<td>Deletes an external file</td>
</tr>
<tr>
<td>FPOINT</td>
<td>Contains number of the next row to read</td>
</tr>
</tbody>
</table>
Table 7: File I/O Functions
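As a small sketch of the file I/O pattern from Table 7 (the macro name and the 200-character read length are assumptions):

```sas
%macro firstline(fileref=);
   %local fid rc line;
   %let fid = %sysfunc(fopen(&fileref));
   %if &fid %then %do;
      %let rc = %sysfunc(fread(&fid));          %* load record 1 into the FDB;
      %let rc = %sysfunc(fget(&fid,line,200));  %* copy up to 200 chars to LINE;
      %put First line: &line;
      %let fid = %sysfunc(fclose(&fid));
   %end;
%mend firstline;
```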
**SUMMARY**
There are a number of SAS DATA step functions that you will likely never use in the DATA step. However when paired with the macro language through the use of the %SYSFUNC macro function, these DATA step functions have extensive utility. Functions that are designed to work with the metadata of SAS data sets and OS folders are especially valuable. These functions allow us to retrieve, manipulate, and use information that in many cases would be otherwise unavailable to us; and we do not need to leave the macro language in order to take advantage of them!
**ABOUT THE AUTHOR**
Art Carpenter is a SAS Certified Advanced Professional Programmer, and his publications list includes five books and numerous papers and posters presented at SAS Global Forum, SUGI, PharmaSUG, WUSS, and other regional conferences. Art has been using SAS since 1977 and has served in various leadership positions in local, regional, and international user groups.
Recent publications are listed on my sasCommunity.org Presentation Index page.
http://sascommunity.org/wiki/Presentations:ArtCarpenter_Papers_and_Presentations
**AUTHOR CONTACT**
Art L. Carpenter
California Occidental Consultants
10606 Ketch Circle
Anchorage, AK 99515
(907) 865-9167
art@caloxy.com
www.caloxy.com
**REFERENCES**
There are a number of nice examples in the SAS 9.4 Macro Language Reference manual that have to do with reading files within directories, start here: http://support.sas.com/documentation/cdl/en/mcrolref/69726/HTML/default/viewer.htm#n02xowij8yuqfo4n0zzi98shu8qup.htm
**TRADEMARK INFORMATION**
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries.
® indicates USA registration.
Other brand and product names are trademarks of their respective companies.